Need to add:
#define GLUT_DISABLE_ATEXIT_HACK
#include <windows.h>
Read More:
- Configuration of OpenGL under CodeBlocks and solutions to problems encountered
- After the vs2013 + OpenGL environment is set up, there is an error in running the first program
- A strange problem in compiling OpenGL program
- Vc2010 configuring OpenGL environment
- Using the third party OpenGL in codeblock
- Two problems in OpenGL Programming
- CodeBlocks configuring OpenGL
- Configuring OpenGL in VS
- OpenGL – some small problems about the glut Library
- Using glut in CodeBlocks
- CodeBlocks configuring OpenGL environment
- The first day of OpenGL [vs2017 + OpenGL environment configuration]
- Configuring OpenGL with win 10 + CodeBlocks
- Vs compiling OpenGL project: the solution when “unable to open the source file “GL/glaux.h”” appears
- Error in header file when calling OpenGL to open obj file in vs2013: unable to open include file: “GL/glut.h”: no such file or directory
- Error: unable to open include file: ‘GL/glut.h’
- Configuring OpenGL in Chinese version of VS2010 and problem solving
- Visual studio 2017, OpenGL program running prompt glut32.dll missing solution
- OpenGL program running prompt glut32.dll missing: one of the solutions
- OpenGL program running prompt glut32.dll missing problem
Heroku Connect is a service offered by Heroku which performs 2-way data synchronization between force.com and a Heroku Postgres database.
When we first built Heroku Connect, we decided to use polling to determine when data had changed on either side. Polling isn't pretty, but it's simple and reliable, and those are "top line" features for Heroku Connect. But polling incurs two significant costs: high latency and wasted resources. The more often you poll, the more API calls and database queries you waste checking when there are no data changes; but if you lengthen your polling interval, you increase the latency of data synchronization.
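The trade-off is easy to quantify. The sketch below is an illustration, not Heroku Connect code: it just shows that halving the polling interval halves the average detection latency but doubles the number of wasted calls per day.

```python
# Polling trade-off: average latency is half the interval, while the number
# of checks (API calls / queries) grows inversely with the interval.
def polling_cost(interval_s, horizon_s=86_400):
    """Return (worst-case latency, average latency, calls per horizon)."""
    calls = horizon_s // interval_s
    return interval_s, interval_s / 2, calls

for interval in (10, 60, 600):
    worst, avg, calls = polling_cost(interval)
    print(f"poll every {interval:>3}s -> avg latency {avg:>5.1f}s, {calls} calls/day")
```

Event delivery removes both terms at once: latency drops to delivery time, and no calls are spent on empty checks.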
Events all around
The solution is conceptually simple - register an event handler with each data source so that it can send you an event when data has changed. Of course, most people who have done event-driven programming know that the practice is often harder than the theory sounds.
In this post we will show how to use the force.com Streaming API to subscribe to event notifications from force.com. Then we will show how to do something similar to listen for events from Postgres.
force.com events
The force.com Streaming API is devoted to sending real-time events of data changes inside force.com.
The Streaming API supports a publish/subscribe model. In order to create a subscription, we first need to construct a PushTopic. This creates a named query in the force.com system that we can subscribe to. Whenever a record changes which satisfies the query, the streaming API will publish those changes as events.
We construct the PushTopic automatically based on the force.com table that you are synchronizing with Heroku Connect. This is the Python code which accesses the force.com SOAP API (using the salesforce-python-toolkit):
def create_push_topic(self, object_name):
    topic = self.h.generateObject('PushTopic')
    topic.ApiVersion = 30.0
    topic.Name = "hconnect_{0}".format(object_name)
    topic.NotifyForFields = "All"
    topic.NotifyForOperationCreate = True
    topic.NotifyForOperationUpdate = True
    topic.NotifyForOperationDelete = True
    topic.query = "select Id from {0}".format(object_name)
    res = self.h.create(topic)
    if hasattr(res, 'errors'):
        raise InvalidConfigurationException(
            "PushTopic creation failed: {0}".format(res.errors[0]))
Now we need to setup a subscriber to this topic. We are using Node.js for most of our event-driven needs because it handles that architecture so well. The best force.com client library for Node.js is called NForce. NForce supports the Streaming API and makes it incredibly easy to subscribe to a PushTopic:
var nforce = require("nforce");
var org = nforce.createConnection(...);
org.authenticate(...);

var pt_name = "hconnect_" + object_name;

// Create a connection to the Streaming API
var str = org.stream({ topic: pt_name });

str.on('connect', function(){
  console.log('connected to pushtopic');
});

str.on('error', function(error) {
  console.log('error: ' + error);
});

str.on('data', function(data) {
  // Data will contain details of the streaming notification:
  console.log(data);
});
For the purposes of Heroku Connect, whenever we receive a streaming event we fire off a task through Redis which commands a worker to query for the latest data from force.com and sync it to Postgres. Although it's possible to receive record data over the Streaming API, there are various limitations. Also, if our subscriber is down for some reason then we may miss events. For these reasons we use streaming events as a notifier that some data has changed and we should query for those changes using the force.com SOAP API.
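The notifier pattern described above can be sketched as follows. This is only an illustration, not Heroku Connect's implementation: the queue name and payload shape are invented for the example, and the FakeRedis stand-in exists so the sketch runs without a server; in practice you would pass a real redis.Redis() client from the redis-py package.

```python
import json

def make_sync_task(object_name):
    # Illustrative payload: the event is only a hint, so the task just names
    # the object a worker should re-query via the SOAP API.
    return json.dumps({"action": "sync", "object": object_name})

def on_streaming_event(redis_client, object_name):
    # Called from the Streaming API 'data' handler; enqueue work for a worker.
    redis_client.rpush("sync_tasks", make_sync_task(object_name))

class FakeRedis:
    """Stand-in for redis.Redis() so the sketch runs without a server."""
    def __init__(self):
        self.lists = {}
    def rpush(self, key, value):
        self.lists.setdefault(key, []).append(value)

r = FakeRedis()
on_streaming_event(r, "Account")
print(r.lists["sync_tasks"][0])
```

A worker then pops the list, re-queries force.com for the named object, and writes the changes to Postgres.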
Listen / Notify
Next up is how we listen for change events on the Postgres side.
Postgres supports a very cool publish/subscribe system using the built-in Listen and Notify commands.
Listen is the "subscribe" part. You issue this command through your database client to create a subscription to a named channel.
Inside the database, you use the Notify command to publish events to the channel.
Sounds simple enough, right?
The publisher
One easy way to call Notify within the database is to create a function which performs the call and to create a trigger which calls your function when data is modified.
Here's the PL/pgSQL code for our trigger function:

CREATE FUNCTION table1_notify_trigger() RETURNS trigger AS $$
DECLARE
BEGIN
  PERFORM pg_notify('channel1', '');  -- pg_notify takes (channel, payload)
  RETURN new;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table1_insert_trigger
  AFTER INSERT ON table1
  FOR EACH ROW
  EXECUTE PROCEDURE table1_notify_trigger();
This code defines a new function called table1_notify_trigger. When this function is executed it calls pg_notify with our channel argument channel1. Finally, we create an AFTER INSERT trigger on table1 that executes our function whenever a row is inserted in the table. If we want to track other DML events we can create additional AFTER UPDATE or AFTER DELETE triggers.
The subscriber
Again we've chosen to use Node.js to listen for events from Postgres because the LISTEN command is well supported by the node-postgres module.
The code is very simple:
pg.connect(db_url, function(err, client) {
  if (err) {
    console.log("Error connecting to database: " + err);
  } else {
    client.on('notification', function(msg) {
      console.log("DATABASE NOTIFY: ", msg.payload);
      // Move some data...
    });
    var query = client.query("LISTEN channel1");
  }
});
After connecting to the database, we just call client.on to register a handler for notifications, and then execute the SQL command LISTEN channel1 to set up the subscription. You can pass data to your subscriber and access it through the msg.payload attribute.
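For comparison, here is roughly the same subscriber in Python. The DSN, channel name, and handler are placeholders; the poll()/notifies loop is psycopg2's documented LISTEN pattern, and the driver import is deferred so the helper logic runs without it installed.

```python
import select

def dispatch(notifications, handler):
    # Fan incoming notifications out to a handler; returns how many were handled.
    count = 0
    for note in notifications:
        handler(note)
        count += 1
    return count

def listen_forever(dsn, channel="channel1", handler=print):
    import psycopg2  # assumed installed; deferred so the sketch loads without it
    conn = psycopg2.connect(dsn)
    conn.autocommit = True          # LISTEN takes effect immediately
    cur = conn.cursor()
    cur.execute(f"LISTEN {channel};")
    while True:
        # Wait up to 5 s for the connection's socket to become readable.
        if select.select([conn], [], [], 5) == ([], [], []):
            continue                # timeout: no events this round
        conn.poll()                 # pull pending notifications off the wire
        notes = conn.notifies[:]
        del conn.notifies[:]
        dispatch(notes, handler)
```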
Yay for events
The notifications system in Postgres works very well. It could be even better if Postgres supported a "default event set" that avoided the need to create the trigger. However, having those primitives gives you lots of control.
You can easily envision some really cool things you could do, like streaming change events out over a websocket to the browser. Give it a try and let us know what cool things you come up with. | https://blog.heroku.com/sf-streaming-api | CC-MAIN-2018-26 | refinedweb | 963 | 60.11 |
XPath
Camel supports XPath to allow an Expression or Predicate to be used in the DSL or XML Configuration. For example you could use XPath to create a Predicate in a Message Filter or as an Expression for a Recipient List.
If the message body is stream based, the input can only be read once. So if the XPath expression needs to access the message data multiple times, you should convert the body to a re-readable type (such as a String) or enable stream caching first.
Namespaces
You can easily use namespaces with XPath expressions using the Namespaces helper class.
Variables
Variables in XPath are defined in different namespaces.
Camel will resolve variables according to either:
- namespace given
- no namespace given
Namespace Given
If the namespace is given then Camel is instructed exactly what to return. However when resolving either IN or OUT Camel will try to resolve a header with the given local part first, and return it. If the local part has the value body then the body is returned instead.
No Namespace Given
If there is no namespace given then Camel resolves only based on the local part. Camel will try to resolve a variable in the following steps:
- from variables that have been set using the variable(name, value) fluent builder
- from message.in.header if there is a header with the given key
- from exchange.properties if there is a property with the given key
Functions
Camel adds the following XPath functions that can be used to access the exchange:
The functions function:properties and function:simple are not supported when the return type is a NodeSet, such as when used with the Splitter EIP.
Here's an example showing some of these functions in use.
Using XML Configuration
If you prefer to configure your routes in your Spring XML file then you can use XPath expressions as follows
Notice how we can reuse the namespace prefixes, foo in this case, in the XPath expression for easier namespace-based XPath expressions! See also this discussion on the mailing list about using your own namespaces with XPath.
Setting the result type
In Spring DSL you use the resultType attribute to provide a fully qualified classname:
In @XPath:
Available as of Camel 2.1
Where we use the XPath function concat to prefix the order name with foo-. In this case we have to specify that we want a String as the result type so the concat function works.
Using XPath on Headers
Available as of Camel 2.11
Some users may have XML stored in a header. To apply an XPath statement to a header's value you can do this by defining the headerName attribute.
In XML DSL:
In Java DSL you specify the headerName as the 2nd parameter as shown:
Examples
Here is a simple example using an XPath expression as a predicate in a Message Filter
Namespaces used in the XPath expression can be defined with the NamespaceBuilder as shown in this example
The next example uses XPath in a choice construct. The first choice evaluates whether the message has a header key type with the value Camel. The 2nd choice evaluates whether the message body has a name tag <name> whose value is Kong.
If neither is true, the message is routed to the otherwise block:
XPath Injection
You can use Bean Integration to invoke a method on a bean and use various languages such as XPath to extract a value from the message and bind it to a method parameter.
The default XPath annotation has SOAP and XML namespaces available. If you want to use your own namespace URIs in an XPath expression you can use your own copy of the XPath annotation to create whatever namespace prefixes you want to use.
Example:
This will match the given predicate.
You can also evaluate expressions, as shown in the following three examples:
Evaluating with a String result is a common requirement and thus you can do it a bit simpler:
Using Saxon with XPathBuilder
Available as of Camel 2.3
You need to add camel-saxon as a dependency to your project. It's now easier to use Saxon with the XPathBuilder, which can be done in several ways as shown below; the latter ones are the easiest.
Using a factory:
Camel will log at INFO level if it uses a non-default XPathFactory, such as:
To use Apache Xerces you can configure the system property:
Enabling Saxon from Spring DSL
Available as of Camel 2.10
Similarly to Java DSL, to enable Saxon from Spring DSL you have three options:
Specifying the factory
Specifying the object model
Shortcut
Namespace Auditing to Aid Debugging
Available as of Camel 2.10
A large number of XPath-related issues that users frequently face are linked to the usage of namespaces. You may have some misalignment between the namespaces present in your message and those that your XPath expression is aware of or referencing. XPath predicates or expressions that are unable to locate the XML elements and attributes due to namespaces issues may simply look like "they are not working", when in reality all there is to it is a lack of namespace definition.
Namespaces in XML are completely necessary, and while we would love to simplify their usage by implementing some magic or voodoo to wire namespaces automatically, truth is that any action down this path would disagree with the standards and would greatly hinder interoperability.
Therefore, the utmost we can do is assist you in debugging such issues by adding two new features to the XPath Expression Language and are thus accessible from both predicates and expressions.
Logging the Namespace Context of Your XPath Expression/Predicate
Every time a new XPath expression is created in the internal pool, Camel will log the namespace context of the expression under the org.apache.camel.builder.xml.XPathBuilder logger. Since Camel represents namespace contexts in a hierarchical fashion (parent-child relationships), the entire tree is output in a recursive manner with the following format:
Any of these options can be used to activate this logging:
- Enabling TRACE logging
- Enabling the logNamespaces option, in which case the logging will occur at the INFO level.
Auditing namespaces
Camel is able to discover and dump all namespaces present on every incoming message before evaluating an XPath expression, providing all the richness of information you need to help you analyse and pinpoint possible namespace issues. To achieve this, it in turn internally uses another specially tailored XPath expression to extract all namespace mappings that appear in the message, displaying the prefix and the full namespace URI(s) for each individual mapping.
Some points to take into account:
- The implicit XML namespace (xmlns:xml="") is suppressed from the output because it adds no value.
- Default namespaces are listed under the DEFAULT keyword:
Spring DSL:
The result of the auditing will appear at the INFO level under the org.apache.camel.builder.xml.XPathBuilder logger and will look like the following:
Available as of Camel 2.11
You can externalize the script and have Camel load it from a resource such as classpath:, file: or http:. This is done using the following syntax: resource:scheme:location. For example, to refer to a file on the classpath you can do:
Dependencies
The XPath language is part of camel-core. | https://cwiki.apache.org/confluence/display/CAMEL/XPath | CC-MAIN-2018-47 | refinedweb | 1,146 | 57.81 |
Should Class.getAnnotations fail if an annotation type is not available?
While evaluating a GlassFish bug, I discovered a discrepancy in the behavior of Class.getAnnotations() between the IBM JRE and the Sun JRE. The complex GlassFish issue boiled down to a simple test case, as discussed below. The question is: what should be the behavior of Class.getAnnotations() if one or more annotation classes are not available at runtime? Consider the following test case:
// Main.java
import java.lang.annotation.*;
@Retention(RetentionPolicy.RUNTIME)
@interface Bar {}
@Bar
class Foo {}
class Main {
public static void main(String[] args) throws Exception{
Annotation[] as = Foo.class.getAnnotations();
System.out.println("Found " + as.length + " no. of annotations");
}
}
When you compile this, you shall obviously get Main.class, Foo.class and Bar.class. Remove Bar.class and run:
java Main
On IBM JRE (I am using 1.6.0 SR5 on AIX platform), it results in
Exception in thread "main" java.lang.TypeNotPresentException: Type Bar not present
at com.ibm.oti.reflect.AnnotationHelper.getAnnotation(AnnotationHelper.java:38)
at com.ibm.oti.reflect.AnnotationHelper.getDeclaredAnnotations(AnnotationHelper.java:50)
at java.lang.Class.getDeclaredAnnotations(Class.java:1628)
at java.lang.Class.getAnnotations(Class.java:1589)
at Main.main(Main.java:13)
Caused by: java.lang.ClassNotFoundException: Bar
at java.lang.Class.forName(Class.java:169)
at com.ibm.oti.reflect.AnnotationHelper.getAnnotation(AnnotationHelper.java:33)
... 4 more
where as while using Sun JRE (1.6.0_07), the program prints:
Found 0 no. of annotations
Conclusion:
I believe it is a bug in the IBM JRE. There used to be a similar bug in the Sun JDK. See for details. It says that Class.getAnnotations() is supposed to ignore an annotation type that can't be loaded. The Sun JDK has been fixed; now it is time for the IBM JDK to be fixed.
- Login or register to post comments
- Printer-friendly version
- ss141213's blog
- 3376 reads
already fixed.
by dims - 2009-10-09 03:20Sahoo, if you see the sun jdk bug, you will see it was fixed in 5.0u6 and you tried SR5 for IBM JRE...see the disconnect? Yes, it's already fixed in 1.5 SR6b :) -- dims
What's the connection betn IBM JDK 1.6.0 SR5 and Sun JDK 5.0_u6?
by ss141213 - 2009-10-09 04:40
dims:
It is good to know that it has been fixed in IBM JDK, but I don't see the connection you made. What's the relation between Sun JDK 5.0_u6 and IBM JDK 1.6.0 SR5? One if Java 1.5 and the other one is Java 1.6 implementation. The bug was fixed in Sun JDK some 3 years ago. When did IBM JDK 1.6.0 SR5 come out? Can't be 3 years back, can it?
Sahoo | https://weblogs.java.net/node/287813/atom/feed | CC-MAIN-2014-10 | refinedweb | 462 | 55.1 |
Has anyone on here done the Healthshare HIE? I cannot get an ORU to file in a chart. There are no errors but it doesn't show.
HealthShare
Continue Reading Post by Karen Geyer 17 hours 10 min ago
Last comment 12 hours 53 min ago
I would like to know if there is a built in function that checks a value in the rule editor to see if it is numeric.
Continue Reading Post by Sam S 6 days ago
Last answer 5 days ago Last comment 1 days ago
I loaded 2017.2 onto a windows desktop that I was going to use for testing.
Continue Reading Post by Scott Roth 1 days ago
Last answer 1 days ago
Analytics, Analyzer, API, Graph, Visualization, Web Development, Caché, Ensemble, HealthShare, iKnow, InterSystems IRIS, Open Exchange
Continue Reading Post by Nikita Savchenko 10 days ago
Last comment 8 days ago
FHIR (HealthShare), HL7 (HealthShare), REST API (HealthShare), REST Services (HealthShare), HealthShare (HealthShare)
Is there any good documentation/tutorials on creating gateways in both directions between FHIR and Hl7v2 (for Health Connect)?
the scenarios I'm most interested in
Continue Reading Post by Stephen De Gabrielle 27 December 2018
Last answer 12 days ago Last comment 8 days ago
How to use remote database in Healthshare , can anyone suggest steps to connect remote database into our local namespace.
Continue Reading Post by Karthikeyan G 19 December 2018
Last answer 19 December 2018 Last comment 20 December 2018
Hello,
Continue Reading Post by Harkirat Dhillon 25 October 2018
Last answer 28 November 2018 Last comment 25 October 2018
Hi,
Continue Reading Post by Bachhar Tirthankar 3 December 2018
Last answer 4 December 2018 Last comment 4 December 2018
How can I have HealthShare respect relative paths for javascript and css files that are referenced in the index.html file?
I am new to InterSystems.
I have set up a simple application in HealthShare.
Continue Reading Post by Lakin Ducker 28 November 2018
Last answer 29 November 2018 Last comment 29 November 2018
Hi everyone! My name is Bruno Soares and I work with HealthShare. I have one question and would be very grateful if someone help me. Can somebody tell me where HealthShare parameters weight came from?
Continue Reading Post by Bruno Soares 25 October 2018
Last answer 28 November 2018 Last comment 14 November 2018
I am trying to get SDA3.Container object from a Stream object like following code:
Continue Reading Post by Gaolai Peng 8 November 2018
Last answer 26 November 2018
Hi,
I need to create a role that is able to monitor the production, but isn't allowed to shut down the production or individual services.
Continue Reading Post by Joost Houwen 20 November 2018
Last answer 20 November 2018 Last comment 20 November 2018
Hi,
Continue Reading Post by Arun Madhan 15 August 2018
Last comment 14 November 2018
Does anyone use EnsLib.ITK.DTS.Framework.Operation.DTSTransferFile ? and how did you manage the transition to MESH?
I just noticed this PR from 2011
Continue Reading Post by Stephen De Gabrielle 17 January 2018
Last answer 18 January 2018 Last comment 8 November 2018
I am currently using InterSystems for patients data management related to intake treatment planning and delivery of dose.
Continue Reading Post by Farhan Shariff 24 September 2018
Last answer 31 October 2018 Last comment 10
Hi,
Continue Reading Post by Stephen De Gabrielle 29 October 2018
HealthShare 2017
Hi dev community,
I am currently working on an interface that needs to communicate to a SOAP/ITK endpoint.
Continue Reading Post by Arun Madhan 29 October 2018
My main goal is to be able to create reports and alerts in my SIEM based on what individual searched for and accessed what patient records, and when.
Continue Reading Post by Shaun Murray 11 October 2018
Last answer 24 October 2018
I looking to gain insight on if I can trigger a SMS text to a Clinician
New to HealthShare and trying to get the process for setting up a message.
Simple use case
Continue Reading Post by Edward Jalbert 5 October 2018
Last answer 5 October 2018 Last comment 23 October 2018
I want to do some logic based on what environment code is running in. I can't find a built-in function to retrieve this so I'd like to write a custom function.
Continue Reading Post by Scott Beeson 21 January 2016
Last answer 21 January 2016 Last comment 20 October 2018
Does anyone have a tool or standard to anonymize CCDAs?
Continue Reading Post by Paul Riker 19 October 2018
I just downloaded and installed the latest WebTerminal into my local copy of Healthshare 2016.2.1
I installed via studio and everything compiled without error.
Continue Reading Post by Mike Dawson 10 October 2018
Last answer 15 October 2018 Last comment 19 October 2018
Continue Reading Post by RB Omo 12 October 2018
Hello,
Continue Reading Post by Timothy Rea 12 October 2018
Last answer 12 October 2018 Last comment 12 Oct
Hi everyone,
Continue Reading Post by Hieu Dien Nguyen 4 October 2018
Last answer 4 October 2018 Last comment 4 October 2018
We have started to see Journal Daemon inactive and DBLatency warnings in the Console log of our Healthshare server. OS is Windows Server 2008 running in a VM. See below
Continue Reading Post by Mike Dawson 3 October 2018
Last answer 3 October 2018
Continue Reading Post by Gevorg Arutunyan 6 July 2018
Last comment 3 October 2018
Hi I'm just trying to make sure I'm not missing a trick here.
Continue Reading Post by Richard Housham 20 August 2018
Last comment 26 September 2018 | https://community.intersystems.com/tags/healthshare?filter=questions | CC-MAIN-2019-04 | refinedweb | 970 | 52.53 |
jip 0.8.1
jip installs packages, for Jython
Jip is the jython equivalent of pip to python. It will resolve dependencies and download jars for your jython environment.
License
jip itself is distributed under the MIT License.
Install
jip is recommended to run within virtualenv, which is a best practice for python/jython developers to created a standalone, portable environment. From jip 0.7, you can use jip.embed in the global installation.
Install jip within virtualenv
Create virtualenv with jython:
virtualenv -p /usr/local/bin/jython jython-env
Activate the shell environment:
cd jython-env
source bin/activate
Download and install jip with pip:
pip install jip
Usage
Install a Java package
jip will resolve dependencies and download jars from maven repositories. You can install a Java package just like you do in Python with pip:
jip install <groupId>:<artifactId>:<version>
Take spring as example:
jip install org.springframework:spring-core:3.0.5.RELEASE
Resolve dependencies defined in a pom
jip allows you to define dependencies in a maven pom file, which is more maintainable than typing install command one by one:
jip resolve pom.xml
Resolve dependencies for an artifact
With jip, you can resolve and download all dependencies of an artifact without grabbing the artifact itself (wherever the artifact is downloadable, for example when it is just a plain pom). This is especially useful when you are about to set up an environment for an artifact. Also, Java dependencies for a jython package are defined in this way.
jip deps info.sunng.gefr:gefr:0.2-SNAPSHOT
Update snapshot artifact
You can use update command to find and download a new deployed snapshot:
jip update info.sunng.bason:bason-annotation:0.1-SNAPSHOT
Run jython with installed java packages in path
Another script jython-all is shipped with jip. To run jython with Java packages included in path, just use jython-all instead of jython
List
Use jip list to see artifacts you just installed
Remove a package
Use jip remove to remove an artifact. This will keep the library index consistent with the file system.
jip remove org.springframework:spring-core:3.0.5.RELEASE
Currently, there is no dependency check in artifact removal. So you should be careful when use this command.
Clean
jip clean will remove everything you downloaded, be careful to use it.
Search
You can also search maven central repository with a jip search [keyword]. The search service is provided by Sonatype’s official Maven search .
Persist current environment state
Before you distribute your environment, you can use freeze to persist the current state into a pom file.
jip freeze > pom.xml
Configuration
You can configure custom maven repository with a dot file, jip will search configurations in the following order:
- $VIRTUAL_ENV/.jip, your virtual environment home
- $HOME/.jip, your home
Here is an example:
[repos:jboss]
uri=
type=remote

[repos:local]
uri=/home/sun/.m2/repository/
type=local

[repos:central]
uri=
type=remote
Note that the .jip file overrides the default settings, so you must include the default local and central repositories explicitly. jip will stop searching further repositories once it finds a package matching the Maven coordinates.
From 0.4, you can also define repositories in pom.xml if you use the resolve command. jip will add these custom repositories with highest priority.
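The lookup order above can be sketched in a few lines of Python. This is an illustration of the search behavior, not jip's actual implementation; the function names are invented, but the section and key names mirror the example file.

```python
import configparser
import os

def jip_config_candidates(environ):
    # The virtualenv's .jip wins over the one in $HOME.
    paths = []
    if "VIRTUAL_ENV" in environ:
        paths.append(os.path.join(environ["VIRTUAL_ENV"], ".jip"))
    if "HOME" in environ:
        paths.append(os.path.join(environ["HOME"], ".jip"))
    return paths

def load_repositories(environ):
    # First existing config file wins; each [repos:NAME] section is one repo.
    for path in jip_config_candidates(environ):
        if os.path.exists(path):
            cp = configparser.ConfigParser()
            cp.read(path)
            return [(s.split(":", 1)[1], cp[s]["uri"], cp[s]["type"])
                    for s in cp.sections() if s.startswith("repos:")]
    return []  # no dot file: jip falls back to its built-in defaults

print(jip_config_candidates({"VIRTUAL_ENV": "/opt/venv", "HOME": "/home/sun"}))
```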
Distribution helpers
From 0.4, you can use jip in your setup.py to simplify jython source package distribution. Create pom.xml in the same directory as setup.py. Fill it with your Java dependencies in the standard way. In this file, you can also define custom repositories. Here is an example:
<project xmlns="" xmlns:...>
  ...
  <dependencies>
    ...
  </dependencies>
  <repositories>
    <repository>
      <id>sonatype-oss-sonatype</id>
      <url></url>
    </repository>
  </repositories>
</project>
And in your setup.py, use the jip setup wrapper instead of the one provided by setuptools or distutils. You can add keyword argument pom to specify a custom name of the pom file.
from jip.dist import setup
Other than the traditional pom configuration, jip also allows you to describe dependencies in python. You can define a data structure in your setup.py like:
requires_java = {
    'dependencies':[
        ## (groupId, artifactId, version)
        ('org.slf4j', 'slf4j-api', '1.6.1'),
        ('org.slf4j', 'slf4j-log4j12', '1.6.1'),
        ('info.sunng.soldat', 'soldat', '1.0-SNAPSHOT'),
        ('org.apache.mina', 'mina-core', '2.0.2')
    ],
    'repositories':[
        ('sonatype-oss-snapshot', '')
    ]
}
And pass it to jip setup as keyword argument requires_java. Once jip found this argument, it won’t try to load a pom file.
from jip.dist import setup setup( ... requires_java=requires_java, ...)
Another resolve command was added to setuptools, you can use this command to download all dependencies to library path
jython setup.py resolve
All dependencies will be installed when running
jython setup.py install
So with jip’s setup() wrapper, pip will automatically install what your package needs. You can publish your package to python cheese shop, and there is just one command for everything
pip install [your-package-name]
Embedded dependency helper
jip.embed is available for both virtualenv and global installations. You can describe a Java dependency in your code, and it will be resolved on the fly. jip.embed is inspired by Groovy's @Grab.
from jip.embed import require
require('commons-lang:commons-lang:2.6')

from org.apache.commons.lang import StringUtils
StringUtils.reverse('jip rocks')
If you have any problem using jip, or feature request for jip, please feel free to fire an issue on github issue tracker. You can also follow @Sunng on twitter.
Change Notes
0.7 (2011-06-11)
- All new jip.embed and global installation
- enhanced search
- dry-run option for install, deps and resolve
- exclusion for install command and jip.dist
- local maven repository is disabled by default
- improved dependency resolving speed
- jip now maintains a local cache of jars and poms in $HOME/.jip/cache/
- use argparse for better command-line ui
- add some test cases
0.5.1 (2011-05-14)
- Artifact jar packages download in parallel
- User-agent header included in http request
- new command freeze to dump current state
- bugfix
0.4 (2011-04-15)
- New commands available: search, deps, list, remove
- New feature jip.dist for setuptools integration
- Dependency exclusion support, thanks vvangelovski
- Allow project-scoped repository defined in pom.xml and setup.py
- Code refactoring, now programming friendly
- README converted to reStructuredText
- Migrate to MIT License
0.2.1 (2011-04-07)
- Improved console output format
- Correct scope dependency management inheritance
- Alpha release of snapshot management, you can update a snapshot artifact
- Environment independent configuration. .jip for each environment
- Bug fixes
0.1 (2011-01-04)
- Initial release
Links
- Author: Sun Ning
- License: mit
- Categories
- Package Index Owner: sunng
- Package Index Maintainer: wikier
- DOAP record: jip-0.8.1.xml | https://pypi.python.org/pypi/jip/0.8.1 | CC-MAIN-2016-50 | refinedweb | 1,117 | 50.63 |
Does anyone know any good code for converting Arabic numbers to Roman numbers? I need c++ code that can be used on Windows C++ Visual Studio. Appreciate any help that I can get.
Go nuts with modulus. It will give you the remainders of each division, 1000, 100, 50, 5, whatever, and you can output the correct letter.. or set of letters.
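That modulus/greedy idea looks like this when sketched out (in Python for brevity; the thread is C++, but the algorithm carries over directly). Unlike an additive-style converter, this version also emits the subtractive forms such as IV and CM, simply by including them in the value table.

```python
# Greedy conversion: for each value, divmod gives how many copies of its
# symbol to emit and the remainder to carry on with.
PAIRS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n):
    if not 0 < n < 4000:
        raise ValueError("standard numerals cover 1..3999")
    out = []
    for value, symbol in PAIRS:
        count, n = divmod(n, value)   # how many of this symbol; n keeps the rest
        out.append(symbol * count)
    return "".join(out)

print(to_roman(1987))  # MCMLXXXVII
```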
smells like homework to me.
how about you try and do it yourself and post with specific questions.
Quidquid latine dictum sit, altum sonatur.
Whatever is said in Latin sounds profound.
Use a switch statement as follows:
Code:
while (loopCount < firstRoman.length())
{
    switch (firstRoman[loopCount++])
    {
        case 'I': first += 1; break;
        case 'V': first += 5; break;
        case 'X': first += 10; break;
        case 'L': first += 50; break;
        case 'C': first += 100; break;
        case 'D': first += 500; break;
        case 'M': first += 1000; break;
        default:
            cout << endl << " You entered an invalid numeral. Goodbye." << endl << endl;
            return;
    }
}
The capital letters M, D, C, L, X, V, I are constants, and this converts the integer to a Roman numeral in additive style (i.e., no IV for 4); it handles straight Roman numerals only, not extended forms.
Code:
string Convert(int digit)
{
    string temp = "\0";
    while (digit != 0)
    {
        if (digit > 999)
        {
            digit = digit - M;
            temp = temp + 'M';
        }
        else if (digit > 499)
        {
            digit = digit - D;
            temp = temp + 'D';
        }
        else if (digit > 99)
        {
            digit = digit - C;
            temp = temp + 'C';
        }
        else if (digit > 49)
        {
            digit = digit - L;
            temp = temp + 'L';
        }
        else if (digit > 9)
        {
            digit = digit - X;
            temp = temp + 'X';
        }
        else if (digit > 4)
        {
            digit = digit - V;
            temp = temp + 'V';
        }
        else
        {
            digit = digit - I;
            temp = temp + 'I';
        }
    } //end while
    return (temp);
} //end Convert()
I know it's ugly, but....
Code:
#include <iostream.h>

/*
  i 1
  v 5
  x 10
  l 50
  c 100
  d 500
  m 1000
*/

int main()
{
    int equiv2[7];
    char equiv1[7];

    equiv2[0]=1; equiv2[1]=5; equiv2[2]=10; equiv2[3]=50;
    equiv2[4]=100; equiv2[5]=500; equiv2[6]=1000;

    equiv1[0]='I'; equiv1[1]='V'; equiv1[2]='X'; equiv1[3]='L';
    equiv1[4]='C'; equiv1[5]='D'; equiv1[6]='M';

    char enter[20];
    int tmp[20], out=0, z;

    for(int i=0; i<20; i++)
        enter[i]=' ';
    for(i=0; i<20; i++)
        tmp[i]=0;

    cout<<"enter the roman numeral:";
    cin>>enter;

    i=0;
    while(i<20 && enter[i]!=' ')
    {
        for(z=0; z<7; z++)
        {
            if(equiv1[z]==enter[i])
            {
                tmp[i]=equiv2[z];
                break;
            }
        }
        i++;
    }

    i=0;
    z=0;
    while(i<20 && enter[z]!=' ')
    {
        if(tmp[i+1]>tmp[i])
            out+=tmp[i+1]-tmp[i];
        else
            out+=tmp[i+1]+tmp[i];
        i+=2;
        z++;
    }

    cout<<endl<<out<<endl;
    return 0; //notice the skillful use of int main, and how i return values ;)
}
Chance favors the prepared mind.
Vis. C++ 6.0
this sounds like some one else in my class hmm... don't use arrays for they have not been taught in class yet *cough* You need at least 3 functions:
1 - Convert from integer to Roman Numeral
2 - Convert from Roman Numeral to integer
3 - Check Roman Numeral for errors (arrangement and false letters)
Of course they would be value returning like the string function I posted above.
"The most common form of insanity is a combination of disordered passions and disordered intellect with gradations and variations almost infinite."
I am evidently not in your class, because we had arrays weeks ago and i can use them in this function. The array function code that you posted is for converting from Roman to Arabic, not from Arabic to Roman. I presume I can just change some of the variables around and it will work. Thanks for the help.
I meant the string returning function up above the arrays that converts from an integer to a Roman Numeral. Not in my class , oh well. cheers. | https://cboard.cprogramming.com/cplusplus-programming/15933-roman-number-converter.html | CC-MAIN-2017-04 | refinedweb | 663 | 69.82 |
The.”
Step 1: BOM)
Tools:
Soldering Iron
Solder
Adafruit Standalone AVR ISP Programmer Shield (or any other programmer for burning bootloaders to AVRs)
FTDI USB Adaptor or FTDI USB Cable
Step 2: Design
The first thing to do is layout the placement of devices on the Perma-Proto board. makes the layout of parts and connections on the Perma-Proto a snap. The Fritzing file can be downloaded along with the Arduino sketch from GitHub.
Step 3: Program Blank AVR
The microcontroller used in this project is typically sold as a blank chip, but some vendors sell Atmel ATMega328P AVRs pre-programmed with the bootloader necessary to interface with the Arduino IDE. If you’re using a blank microcontroller, you’ll need to burn a bootloader to it with a some kind of programmer. There’s a bunch of devices out there for programming AVRs. I use the Adafruit Standalone AVR ISP Programmer Shield (pictured above). It comes in a user assembled kit that makes programming the chips super simple.
Step 4: Solder
The Fritzing diagram illustrates part placement and wiring. It shows the connections on only one side of the board and only illustrates what connections need to be made not the exact wire placement. It will be necessary to make some of the connections on the back side of the Perma-Proto board and some under the seven segment display.
Solder the seven segment display in last as it covers some of the components and wiring. It also over-hangs the ATMega328P so make sure you solder in the AVR, or solder in the socket and put the chip in it, before soldering in the display. In hindsight, soldering in a 4 pin female header for the display to plug into might not have been a bad idea. On the other hand, I'm not sure how sturdy that would have been.
Step 5: Kickstand
To make the board stand up on a desktop, cut a piece of a large paper clip and pop it into two of the empty ground bus holes that run along the top and bottom of the board. With the paper clip on the right and the barrel jack on the left, the board stands up at about 60 degrees.
8 Discussions
4 years ago on Introduction
help me pleaseeeeee
Reply 4 years ago
did you ever figure your problem out with declaring dht?
5 years ago on Step 2
Would it be possible to add the schematic view from Fritzing as well? It may make it clearer which components are connected.
Reply 5 years ago on Introduction
The Perma-Proto part does not act like a breadboard in Fritzing. Connections made on the Perma-Proto in the breadboard view do not translate to the schematic screen. The schematic view only has the components on it.
5 years ago on Introduction
Would you mind pasting the sketch here, instead of the picture?
Reply 5 years ago on Introduction
Sure.
=====================================
//tempduino (temperature and humidity display)
//by jeff clymer (jeff@techunboxed.com)
//2013.01.14
//version 3.1
#include "DHT.h"
#include
#include "Adafruit_LEDBackpack.h"
#include "Adafruit_GFX.h"
#define DHTPIN 2 //DHT22 is connected to digital pin 2
#define DHTTYPE DHT22 //set the type of sensor
DHT dht(DHTPIN, DHTTYPE);
Adafruit_7segment matrix = Adafruit_7segment();
void setup() {
dht.begin();
matrix.begin(0x70);
matrix.setBrightness(1);
}
void loop() {
float h = dht.readHumidity(); //h = DHT humidity
float t = dht.readTemperature(); //t = DHT temp
matrix.print(t*1.8+32); //display temp in fahrenheit
matrix.writeDisplay(); delay(30000); //wait 30 seconds
matrix.print(h); //display humidity percentage
matrix.writeDisplay();
delay(10000); //wait 10 seconds
}
Reply 5 years ago on Introduction
Thanks! (but I meant in the body of the instructable so everyone can read it on that step) :)
5 years ago on Introduction
Find similar Arduino based projects here | https://www.instructables.com/id/Tempduino-Arduino-Based-Temp-and-Humidity-Displa/ | CC-MAIN-2019-09 | refinedweb | 639 | 72.76 |
Learning Objectives
By the end of this chapter, you will be able to:
Explain supervised machine learning and describe common examples of machine learning problems
Install and load Python libraries into your development environment for use in analysis and machine learning problems
Access and interpret the documentation of a subset of Python libraries, including the powerful pandas library
Create an IPython Jupyter notebook and use executable code cells and markdown cells to create a dynamic report
Load an external data source using pandas and use a variety of methods to search, filter, and compute descriptive statistics of the data
Clean a data source of mediocre quality and gauge the potential impact of various issues within the data source
The study and application of machine learning and artificial intelligence has recently been the source of much interest and research in the technology and business communities. Advanced data analytics and machine learning techniques have shown great promise in advancing many sectors, such as personalized healthcare and self-driving cars, as well as in solving some of the world's greatest challenges, such as combating climate change. This book has been designed to assist you in taking advantage of the unique confluence of events in the field of data science and machine learning today. Across the globe, private enterprises and governments are realizing the value and efficiency of data-driven products and services. At the same time, reduced hardware costs and open source software solutions are significantly reducing the barriers to entry of learning and applying machine learning techniques.
Throughout this book, you will develop the skills required to identify, prepare, and build predictive models using supervised machine learning techniques in the Python programming language. The six chapters each cover one aspect of supervised learning. This chapter introduces a subset of the Python machine learning toolkit, as well as some of the things that need to be considered when loading and using data sources. This data exploration process is further explored in Chapter 2, Exploratory Data Analysis and Visualization, as we introduce exploratory data analysis and visualization. Chapter 3, Regression Analysis, and Chapter 4, Classification, look at two subsets of machine learning problems, regression and classification analysis, and demonstrate these techniques through examples. Finally, Chapter 5, Ensemble Modeling, covers ensemble methods, which use multiple predictions from different models to boost overall performance, while Chapter 6, Model Evaluation, covers the extremely important concepts of validation and evaluation metrics. These metrics provide a means of estimating the true performance of a model.
A machine learning algorithm is commonly thought of as just the mathematical process (or algorithm) itself, such as a neural network, deep neural network, or random forest algorithm. However, this is only a component of the overall system. Firstly, we must define a problem that can be adequately solved using such techniques. Then, we must specify and procure a clean dataset composed of information that can be mapped from one number space (the inputs) to another (the outputs). Once the dataset has been designed and procured, the machine learning model can be specified and designed; for example, a single-layer neural network with 100 hidden nodes that uses a tanh activation function.
With the dataset and model well defined, the means of determining the exact values of the model's parameters can be specified. This is a repetitive optimization process that evaluates the output of the model against some existing data and is commonly referred to as training. Once training has been completed and you have your defined model, it is good practice to evaluate it against some reference data to provide a benchmark of overall performance.
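The training process described above, repeatedly adjusting a model's values against existing data, can be sketched with a toy example. The linear model, dataset, and learning rate below are invented purely for illustration; this is a minimal hand-rolled gradient descent, not a method from this book:

```python
import numpy as np

# Toy labeled dataset: inputs x with known outputs y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

# Model: y_hat = w * x + b. Training finds values for w and b by
# repeatedly comparing the model's output against the known outputs.
w, b = 0.0, 0.0
learning_rate = 0.05
for _ in range(2000):
    error = (w * x + b) - y                      # model output vs. known data
    w -= learning_rate * (2 * error * x).mean()  # adjust w to reduce the error
    b -= learning_rate * (2 * error).mean()      # adjust b to reduce the error

print(round(w, 2), round(b, 2))  # converges toward the true values 2 and 1
```

The final evaluation step would then score the fitted model against data held out from training.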
Considering this general description of a complete machine learning algorithm, the problem definition and data collection stages are often the most critical. What is the problem you are trying to solve? What outcome would you like to achieve? How are you going to achieve it? How you answer these questions will drive and define many of the subsequent decisions or model design choices. It is also in answering these questions that we will select which category of machine learning algorithms we will choose: supervised or unsupervised methods.
So, what exactly are supervised and unsupervised machine learning problems or methods? Supervised learning techniques center on mapping some set of information to another by providing the training process with the input information and the desired outputs, then checking its ability to provide the correct result. As an example, let's say you are the publisher of a magazine that reviews and ranks hairstyles from various time periods. Your readers frequently send you far more images of their favorite hairstyles for review than you can manually process. To save some time, you would like to automate the sorting of the hairstyles images you receive based on time periods, starting with hairstyles from the 1960s and 1980s:
Figure 1.1: Hairstyles images from different time periods
To create your hairstyle-sorting algorithm, you start by collecting a large sample of hairstyle images and manually labeling each one with its corresponding time period. In such a dataset (known as a labeled dataset), both the input data (the hairstyle images) and the desired output information (the time period) are known and recorded. This type of problem is a classic supervised learning problem; we are trying to develop an algorithm that takes a set of inputs and learns to return the answers that we have told it are correct.
Generally, if you are trying to automate or replicate an existing process, the problem is a supervised learning problem. Supervised learning techniques are both very useful and powerful, and you may have come across them or even helped create labeled datasets for them without realizing. As an example, a few years ago, Facebook introduced the ability to tag your friends in any image uploaded to the platform. To tag a friend, you would draw a square over your friend's face and then add the name of your friend to notify them of the image. Fast-forward to today and Facebook will automatically identify your friends in the image and tag them for you. This is yet another example of supervised learning. If you ever used the early tagging system and manually identified your friends in an image, you were in fact helping to create Facebook's labeled dataset. A user who uploaded an image of a person's face (the input data) and tagged the photo with the subject's name would then create the label for the dataset. As users continued to use this tagging service, a sufficiently large labeled dataset was created for the supervised learning problem. Now friend-tagging is completed automatically by Facebook, replacing the manual process with a supervised learning algorithm, as opposed to manual user input:
Figure 1.2: Tagging a friend on Facebook
One particularly timely and straightforward example of supervised learning is the training of self-driving cars. In this example, the inputs to the system include the target route as determined by the GPS system, as well as on-board instrumentation such as speed measurements, the brake position, and a camera or Light Detection and Ranging (LIDAR) unit for road obstacle detection. During training, the algorithm samples the control actions provided by the human driver, such as acceleration, steering angle, and brake position, and maps the sensor inputs against these actions; this pairing provides the labeled dataset. The data can then be used to train the driving/navigation systems within the self-driving car or in simulation exercises.
Image-based supervised problems, while popular, are not the only examples of supervised learning problems. Supervised learning is also commonly used in the automatic analysis of text to determine whether the opinion or tone of a message is positive, negative, or neutral. Such analysis is known as sentiment analysis and frequently involves creating and using a labeled dataset of a series of words or statements that are manually identified as either positive, neutral, or negative. Consider these sentences: I like that movie and I hate that movie. The first sentence is clearly positive, while the second is negative. We can then decompose the words in the sentences into positive, negative, or neutral categories (like is positive, hate is negative, and the shared words are neutral); see the following table:
Figure 1.3: Decomposition of the words
Using sentiment analysis, a supervised learning algorithm could be created, say, using the movie database site IMDb to analyze comments posted about movies to determine whether the movie is being positively or negatively reviewed by the audience. Supervised learning methods could have other applications, such as analyzing customer complaints, automating troubleshooting calls/chat sessions, or even medical applications such as analyzing images of moles to detect abnormalities ().
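A minimal sketch of the word-level decomposition described above, using a tiny hypothetical polarity lexicon (a real system would learn these weights from a labeled dataset rather than hard-coding them):

```python
# Hypothetical word-polarity lexicon; positive words score +1, negative -1,
# and any word not listed is treated as neutral (0).
lexicon = {'like': 1, 'love': 1, 'hate': -1, 'awful': -1}

def sentence_score(sentence):
    """Sum the polarity of each word in the sentence."""
    return sum(lexicon.get(word, 0) for word in sentence.lower().split())

print(sentence_score('I like that movie'))  # positive overall: 1
print(sentence_score('I hate that movie'))  # negative overall: -1
```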
This should give you a good understanding of the concept of supervised learning, as well as some examples of problems that can be solved using these techniques. While supervised learning involves training an algorithm to map the input information to corresponding known outputs, unsupervised learning methods, by contrast, do not utilize known outputs, either because such outputs are not available or are not even known. Rather than relying on a set of manually annotated labels, unsupervised learning methods model the supplied data through specific constraints or rules designed into the training process.
Clustering analysis is a common form of unsupervised learning where a dataset is to be divided into a specified number of different groups based on the clustering process being used. In the case of k-nearest neighbors clustering, each sample from the dataset is labeled or classified in accordance with the majority vote of the k-closest points to the sample. As there are no manually identified labels, the performance of unsupervised algorithms can vary greatly with the data being used, as well as the selected parameters of the model. For example, should we use the 5 closest or 10 closest points in the majority vote of the k-closest points? The lack of known and target outputs during training leads to unsupervised methods being commonly used in exploratory analysis or in scenarios where the ground truth targets are somewhat ambiguous and are better defined by the constraints of the learning method.
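The majority-vote rule described above can be sketched in a few lines; the sample points and labels here are invented for illustration:

```python
import numpy as np
from collections import Counter

def majority_vote_label(point, points, labels, k=3):
    """Assign a point the most common label among its k closest neighbors."""
    distances = np.linalg.norm(points - point, axis=1)  # Euclidean distances
    nearest = np.argsort(distances)[:k]                 # indices of k closest points
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

points = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
labels = ['a', 'a', 'a', 'b', 'b']

print(majority_vote_label(np.array([0.05, 0.05]), points, labels))  # 'a'
print(majority_vote_label(np.array([5.0, 5.0]), points, labels))    # 'b'
```

Changing k (the 5 versus 10 closest points mentioned above) can change the assignment, which is exactly the parameter sensitivity the text describes.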
We will not cover unsupervised learning in great detail in this book, but it is useful to summarize the main difference between the two methods. Supervised learning methods require ground truth labels or the answers for the input data, while unsupervised methods do not use such labels, and the final result is determined by the constraints applied during the training process.
So, why have we chosen the Python programming language for our investigation into supervised machine learning? There are a number of alternative languages available, including C++, R, and Julia. Even the Rust community is developing machine learning libraries for their up-and-coming language. There are a number of reasons why Python is the first-choice language for machine learning:
There is great demand for developers with Python expertise in both industry and academic research.
Python is currently one of the most popular programming languages, even reaching the number one spot in IEEE Spectrum magazine's survey of the top 10 programming languages ().
Python is an open source project, with the entire source code for the Python programming language being freely available under the GNU GPL Version 2 license. This licensing mechanism has allowed Python to be used, modified, and even extended in a number of other projects, including the Linux operating system, supporting NASA (), and a plethora of other libraries and projects that have provided additional functionality, choice, and flexibility to the Python programming language. In our opinion, this flexibility is one of the key components that has made Python so popular.
Python provides a common set of features that can be used to run a web server, a microservice on an embedded device, or to leverage the power of graphical processing units to perform precise calculations on large datasets.
Using Python and a handful of specific libraries (or packages, as they are known in Python), an entire machine learning product can be developed, starting with exploratory data analysis, model definition, and refinement, through to API construction and deployment. All of these steps can be completed within Python to build an end-to-end solution. This is the significant advantage Python has over some of its competitors, particularly within the data science and machine learning space. While R and Julia have the advantage of being specifically designed for numerical and statistical computing, models developed in these languages typically require translation into some other language before they can be deployed in a production setting.
We hope that, through this book, you will gain an understanding of the flexibility and power of the Python programming language and will start on the path of developing end-to-end supervised learning solutions in Python. So, let's get started.
One aspect of the data science development environment that distinguishes it from other Python projects is the use of IPython Jupyter notebooks (). Jupyter notebooks provide a means of creating and sharing interactive documents with live, executable code snippets, and plots, as well as the rendering of mathematical equations through the LaTeX () typesetting system. This section of the chapter will introduce you to Jupyter notebooks and some of their key features to ensure your development environment is correctly set up.
Throughout this book, we will make frequent reference to the documentation for each of the introduced tools/packages. The ability to effectively read and understand the documentation for each tool is extremely important. Many of the packages we will use contain so many features and implementation details that it is very difficult to memorize them all. The following documentation may come in handy for the upcoming section on Jupyter notebooks:
The Anaconda documentation can be found at.
The Anaconda user guide can be found at.
The Jupyter Notebook documentation can be found at.
In this exercise, we will launch our Jupyter notebook. Ensure you have correctly installed Anaconda with Python 3.7, as per the Preface:
There are two ways of launching a Jupyter notebook through Anaconda. The first method is to open Jupyter using the Anaconda Navigator application available in the Anaconda folder of the Windows Start menu. Click on the Launch button and your default internet browser will then launch at the default address and will start in a default folder path.
The second method is to launch Jupyter via the Anaconda prompt. To launch the Anaconda prompt, simply click on the Anaconda Prompt menu item, also in the Windows Start menu, and you should see a pop-up window similar to the following screenshot:
Figure 1.4: Anaconda prompt
Once in the Anaconda prompt, change to the desired directory using the cd (change directory) command. For example, to change into the Desktop directory for the Packt user, do the following:
C:\Users\Packt> cd C:\Users\Packt\Desktop
Once in the desired directory, launch a Jupyter notebook using the following command:
C:\Users\Packt> jupyter notebook
The notebook will launch with the working directory from the one you specified earlier. This then allows you to navigate and save your notebooks in the directory of your choice as opposed to the default, which can vary between systems, but is typically your home or My Computer directory. Irrespective of the method of launching Jupyter, a window similar to the following will open in your default browser. If there are existing files in the directory, you should also see them here:
Figure 1.5: Jupyter notebook launch window
The Hello World exercise is a rite of passage, so you certainly cannot be denied that experience! So, let's print Hello World in a Jupyter notebook in this exercise:
Start by creating a new Jupyter notebook by clicking on the New button and selecting Python 3. Jupyter allows you to run different versions of Python and other languages, such as R and Julia, all in the same interface. We can also create new folders or text files here too. But for now, we will start with a Python 3 notebook:
Figure 1.6: Creating a new notebook
This will launch a new Jupyter notebook in a new browser window. We will first spend some time looking over the various tools that are available in the notebook:
Figure 1.7: The new notebook
There are three main sections in each Jupyter notebook, as shown in the following screenshot: the title bar (1), the toolbar (2), and the body of the document (3). Let's look at each of these components in order:
Figure 1.8: Components of the notebook
The title bar simply displays the name of the current Jupyter notebook and allows the notebook to be renamed. Click on the Untitled text and a popup will appear allowing you to rename the notebook. Enter Hello World and click Rename:
Figure 1.9: Renaming the notebook
For the most part, the toolbar contains all the normal functionality that you would expect. You can open, save, and make copies of (or create new) Jupyter notebooks in the File menu. You can search, replace, copy, and cut content in the Edit menu and adjust the view of the document in the View menu. As we discuss the body of the document, we will also describe some of the other functionalities in more detail, such as the ones included in the Insert, Cell, and Kernel menus. One aspect of the toolbar that requires further examination is the far right-hand side: the outline of the circle to the right of Python 3.
Hover your mouse over the circle and you will see the Kernel Idle popup. This circle is an indicator to signify whether the Python kernel is currently processing; when processing, this circle indicator will be filled in. If you ever suspect that something is running or is not running, you can easily refer to this icon for more information. When the Python kernel is not running, you will see this:
Figure 1.10: Kernel idle
When the Python kernel is running, you will see this:
Figure 1.11: Kernel busy
This brings us to the body of the document, where the actual content of the notebook will be entered. Jupyter notebooks differ from standard Python scripts or modules, in that they are divided into separate executable cells. While Python scripts or modules will run the entirety of the script when executed, Jupyter notebooks can run all of the cells sequentially, or can also run them separately and in a different order if manually executed.
Double-click on the first cell and enter the following:
print('Hello World!')
Click on Run (or use the Ctrl + Enter keyboard shortcut):
Figure 1.12: Running a cell
Congratulations! You just completed Hello World in a Jupyter notebook.
In the previous exercise, notice how the print statement is executed under the cell. Now let's take it a little further. As mentioned earlier, Jupyter notebooks are composed of a number of separately executable cells; it is best to think of them as just blocks of code you have entered into the Python interpreter, and the code is not executed until you press the Ctrl + Enter keys. While the code is run at a different time, all of the variables and objects remain in the session within the Python kernel. Let's investigate this a little further:
Launch a new Jupyter notebook and then, in three separate cells, enter the code shown in the following screenshot:
Figure 1.13: Entering code into multiple cells
Click Restart & Run All.
Notice that there are three executable cells, and the order of execution is shown in square brackets; for example, In [1], In [2], and In [3]. Also note how the hello_world variable is declared (and thus executed) in the second cell and remains in memory, so it can be printed in the third cell. As we mentioned before, you can also run the cells out of order.
Click on the second cell, containing the declaration of hello_world, change the value to add a few more exclamation points, and run the cell again:
Figure 1.14: Changing the content of the second cell
Notice that the second cell is now the most recently executed cell (In [4]), and that the print statement after it has not been updated. To update the print statement, you would then need to execute the cell below it. Warning: be careful about your order of execution. If you are not careful, you can easily override values or declare variables in cells below their first use, as in notebooks, you no longer need to run the entire script at once. As such, it is good practice to regularly click Kernel | Restart & Run All. This will clear all variables from memory and run all cells from top to bottom in order. There is also the option to run all cells below or above a particular cell in the Cell menu:
Figure 1.15: Restarting the kernel
You can also move cells around using either the up/down arrows on the left of Run or through the Edit toolbar. Move the cell that prints the hello_world variable to above its declaration:
Figure 1.16: Moving cells
Click on Restart & Run All cells:
Figure 1.17: Variable not defined error
Notice the error reporting that the variable is not defined. This is because it is being used before its declaration. Also, notice that the cell after the error has not been executed as shown by the empty In [ ].
There are a number of additional features of Jupyter notebooks that make them very useful. In this exercise, we will examine some of these features:
Jupyter notebooks can execute commands directly within the Anaconda prompt by including an exclamation point prefix (!). Enter the code shown in the following screenshot and run the cell:
Figure 1.18: Running Anaconda commands
One of the best features of Jupyter notebooks is the ability to create live reports that contain executable code. Not only does this save time in preventing separate creation of reports and code, but it can also assist in communicating the exact nature of the analysis being completed. Through the use of Markdown and HTML, we can embed headings, sections, images, or even JavaScript for dynamic content.
To use Markdown in our notebook, we first need to change the cell type. First, click on the cell you want to change to Markdown, then click on the Code drop-down menu and select Markdown:
Figure 1.19: Changing the cell type to Markdown
Notice that In [ ] has disappeared and the color of the box lining the cell is no longer blue.
You can now enter valid Markdown syntax and HTML by double-clicking in the cell and then clicking Run to render the markdown. Enter the syntax shown in the following screenshot and run the cell to see the output:
Figure 1.20: Markdown syntax
The output will be as follows:
Figure 1.21: Markdown output
While the standard features that are included in Python are certainly feature-rich, the true power of Python lies in the additional libraries (also known as packages in Python), which, thanks to open source licensing, can be easily downloaded and installed through a few simple commands. In an Anaconda installation, it is even easier as many of the most common packages come pre-built within Anaconda. You can get a complete list of the pre-installed packages in the Anaconda environment by running the following command in a notebook cell:
!conda list
In this book, we will be using the following additional Python packages:
NumPy (pronounced Num Pie and available at): NumPy (short for numerical Python) is one of the core components of scientific computing in Python. NumPy provides the foundational data types from which a number of other data structures derive, including linear algebra, vectors and matrices, and key random number functionality.
SciPy (pronounced Sigh Pie and available at): SciPy, along with NumPy, is a core scientific computing package. SciPy provides a number of statistical tools, signal processing tools, and other functionality, such as Fourier transforms.
pandas (available at): pandas is a high-performance library for loading, cleaning, analyzing, and manipulating data structures.
Matplotlib (available at): Matplotlib is the core plotting and visualization library in the Python scientific stack.
Seaborn (available at): Seaborn is a plotting library built on top of Matplotlib, providing attractive color and line styles as well as a number of common plotting templates.
Scikit-learn (available at): Scikit-learn is a machine learning library providing efficient implementations of a wide range of supervised and unsupervised learning algorithms, along with utilities for model selection and evaluation.
To install any additional packages into your Anaconda environment, run the following command in a Jupyter notebook cell:
!conda install <package name>
As an example, if we wanted to install Seaborn, we'd run this:
!conda install seaborn
To use one of these packages in a notebook, all we need to do is import it:
import matplotlib
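Once imported, a package's functionality is available immediately. As a minimal illustration of the NumPy features described above (arrays, linear algebra, and random numbers), consider the following sketch; the values are arbitrary:

```python
import numpy as np  # np is the conventional alias for NumPy

a = np.array([[1.0, 2.0], [3.0, 4.0]])  # a 2x2 matrix
v = np.array([1.0, 1.0])                # a vector

product = a @ v   # matrix-vector product
transpose = a.T   # transpose of the matrix

rng = np.random.default_rng(0)  # seeded random generator for reproducibility
samples = rng.random(3)         # three pseudo-random numbers in [0, 1)

print(product)    # [3. 7.]
print(transpose)
print(samples)
```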
As mentioned before, pandas is a library for loading, cleaning, and analyzing a variety of different data structures. It is the flexibility of pandas, in addition to the sheer number of built-in features, that makes it such a powerful, popular, and useful Python package. It is also a great package to start with as, obviously, we cannot analyze any data if we do not first load it into the system. As pandas provides so much functionality, one very important skill in using the package is the ability to read and understand the documentation. Even after years of experience programming in Python and using pandas, we still refer to the documentation very frequently. The functionality within the API is so extensive that it is impossible to memorize all of the features and specifics of the implementation.
Note
The pandas documentation can be found at.
pandas has the ability to read and write a number of different file formats and data structures, including CSV, JSON, and HDF5 files, as well as SQL and Python Pickle formats. The pandas input/output documentation can be found at. We will continue to look into the pandas functionality through loading data via a CSV file. The dataset we will be using for this chapter is the Titanic: Machine Learning from Disaster dataset, available from or, which contains a roll of the guests on board the Titanic as well as their age, survival status, and number of siblings/parents. Before we get started with loading the data into Python, it is critical that we spend some time looking over the information provided for the dataset so that we can have a thorough understanding of what it contains. Download the dataset and place it in the directory you're working in.
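As a quick sketch of the read/write interface mentioned above, the round trip below writes a small invented frame to CSV text and reads it back:

```python
import pandas as pd
from io import StringIO

# A small invented frame, written to CSV text and read back with pandas.
df = pd.DataFrame({'name': ['Alice', 'Bob'], 'age': [29, 35]})
csv_text = df.to_csv(index=False)           # write: DataFrame -> CSV string
df_again = pd.read_csv(StringIO(csv_text))  # read: CSV string -> DataFrame

print(df_again.equals(df))  # True: the round trip preserves the data
```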
Looking at the description for the data, we can see that we have the following fields available:
Figure 1.22: Fields in the Titanic dataset
We are also provided with some additional contextual information:
pclass: This is a proxy for socio-economic status, where first class is upper, second class is middle, and third class is lower status.
age: This is a fractional value if less than 1; for example, 0.25 is 3 months. If the age is estimated, it is in the form of xx.5.
sibsp: A sibling is defined as a brother, sister, stepbrother, or stepsister, and a spouse is a husband or wife.
parch: A parent is a mother or father, a child is a daughter, son, stepdaughter, or stepson. Children that traveled only with a nanny did not travel with a parent. Thus, 0 was assigned for this field.
embarked: The point of embarkation is the location where the passenger boarded the ship.
Note that the information provided with the dataset does not give any context as to how the data was collected. The survival, pclass, and embarked fields are known as categorical variables as they are assigned to one of a fixed number of labels or categories to indicate some other information. For example, in embarked, the C label indicates that the passenger boarded the ship at Cherbourg, and the value of 1 in survival indicates they survived the sinking.
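Categorical variables like these can also be given an explicit category dtype in pandas, which stores each value as a small integer code plus a lookup table of labels. The following is a minimal sketch on a hypothetical toy column (the full dataset is loaded in the next exercise):

```python
import pandas as pd

# Toy frame mimicking the embarked field described above
toy = pd.DataFrame({'embarked': ['S', 'C', 'Q', 'S', 'C']})

# Convert the column to an explicit categorical dtype
toy['embarked'] = toy['embarked'].astype('category')

# categories holds the fixed set of labels; codes holds their integer form
categories = list(toy['embarked'].cat.categories)
codes = list(toy['embarked'].cat.codes)
```

This representation saves memory on large datasets and makes the fixed set of valid labels explicit.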
In this exercise, we will read our Titanic dataset into Python and perform a few basic summary operations on it:
Import the pandas package using shorthand notation, as shown in the following screenshot:
Figure 1.23: Importing the pandas package
Open the titanic.csv file by clicking on it in the Jupyter notebook home page:
Figure 1.24: Opening the CSV file
The file is a CSV file, which can be thought of as a table, where each line is a row in the table and each comma separates columns in the table. Thankfully, we don't need to work with these tables in raw text form and can load them using pandas:
Figure 1.25: Contents of the CSV file
In an executable Jupyter notebook cell, execute the following code to load the data from the file:
df = pd.read_csv('Titanic.csv')
The pandas DataFrame class provides a comprehensive set of attributes and methods that can be executed on its own contents, ranging from sorting, filtering, and grouping methods to descriptive statistics, as well as plotting and conversion.
Read the first five rows of data using the head() method of the DataFrame:
df.head()
Figure 1.26: Reading the first five rows
In this sample, we have a visual representation of the information in the DataFrame. We can see that the data is organized in a tabular, almost spreadsheet-like structure. The different types of data are organized by columns, while each sample is organized by rows. Each row is assigned to an index value and is shown as the numbers 0 to 4 in bold on the left-hand side of the DataFrame. Each column is assigned to a label or name, as shown in bold at the top of the DataFrame.
The idea of a DataFrame as a kind of spreadsheet is a reasonable analogy; as we will see in this chapter, we can sort, filter, and perform computations on the data just as you would in a spreadsheet program. While not covered in this chapter, it is interesting to note that DataFrames also contain pivot table functionality, just like a spreadsheet.
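As an illustration of that pivot-table functionality, the survival rate could be tabulated by class and sex in a single call. This is a sketch on a hypothetical toy frame that reuses the Titanic column names:

```python
import pandas as pd

# Toy rows using the same column names as the Titanic dataset
toy = pd.DataFrame({
    'Pclass': [1, 1, 3, 3],
    'Sex': ['female', 'male', 'female', 'male'],
    'Survived': [1, 0, 1, 0],
})

# Mean survival rate per (class, sex) cell, just like a spreadsheet pivot
pivot = toy.pivot_table(values='Survived', index='Pclass',
                        columns='Sex', aggfunc='mean')
```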
Now that we have loaded some data, let's use the selection and indexing methods of the DataFrame to access some data of interest:
Select individual columns in a similar way to a regular dictionary, by using the labels of the columns, as shown here:
df['Age']
Figure 1.27: Selecting the Age column
If there are no spaces in the column name, we can also use the dot operator. If there are spaces in the column names, we will need to use the bracket notation:
df.Age
Figure 1.28: Using the dot operator to select the Age column
Select multiple columns at once using bracket notation, as shown here:
df[['Name', 'Parch', 'Sex']]
Figure 1.29: Selecting multiple columns
Select the first row using iloc:
df.iloc[0]
Figure 1.30: Selecting the first row
Select the first three rows using iloc:
df.iloc[[0,1,2]]
Figure 1.31: Selecting the first three rows
We can also get a list of all of the available columns. Do this as shown here:
columns = df.columns  # Extract the list of columns
print(columns)
Figure 1.32: Getting all the columns
Use this list of columns and the standard Python slicing syntax to get columns 2, 3, and 4, and their corresponding values:
df[columns[1:4]] # Columns 2, 3, 4
Figure 1.33: Getting the second, third, and fourth columns
Use the len operator to get the number of rows in the DataFrame:
len(df)
Figure 1.34: Getting the number of rows
What if we wanted the value for the Fare column at row 2? There are a few different ways to do so. First, we'll try the row-centric methods. Do this as follows:
df.iloc[2]['Fare'] # Row centric
Figure 1.35: Getting a particular value using the normal row-centric method
Try using the dot operator for the column. Do this as follows:
df.iloc[2].Fare # Row centric
Figure 1.36: Getting a particular value using the row-centric dot operator
Try using the column-centric method. Do this as follows:
df['Fare'][2] # Column centric
Figure 1.37: Getting a particular value using the normal column-centric method
Try the column-centric method with the dot operator. Do this as follows:
df.Fare[2] # Column centric
Figure 1.38: Getting a particular value using the column-centric dot operator
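Alongside these four patterns, pandas also provides loc and at, which take the row and column labels in a single lookup (at being optimized for scalar access). A sketch on a hypothetical Fare column:

```python
import pandas as pd

toy = pd.DataFrame({'Fare': [7.25, 71.28, 7.92]})

# Label-based lookup: row label 2, column 'Fare'
fare_loc = toy.loc[2, 'Fare']

# Scalar-optimized equivalent
fare_at = toy.at[2, 'Fare']
```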
With the basics of indexing and selection under our belt, we can turn our attention to more advanced indexing and selection. In this exercise, we will look at a few important methods for performing advanced indexing and selecting data:
Create a list of the passengers' names and ages for those passengers under the age of 21, as shown here:
child_passengers = df[df.Age < 21][['Name', 'Age']]
child_passengers.head()
Figure 1.39: List of the passengers' names and ages for those passengers under the age of 21
Count how many child passengers there were, as shown here:
print(len(child_passengers))
Figure 1.40: Count of child passengers
Count how many passengers were between the ages of 21 and 30. Do not use Python's and logical operator for this step, but rather the ampersand symbol (&). Do this as follows:
young_adult_passengers = df.loc[(df.Age > 21) & (df.Age < 30)]
len(young_adult_passengers)
Figure 1.41: Count of passengers between the ages of 21 and 30
Count the passengers that were either first- or third-class ticket holders. Again, we will not use the Python logical or operator but rather the pipe symbol (|). Do this as follows:
df.loc[(df.Pclass == 3) | (df.Pclass == 1)]
Figure 1.42: Count of passengers that were either first- or third-class ticket holders
Count the passengers who were not holders of either first- or third-class tickets. Do not simply select the second class ticket holders, but rather use the ~ symbol for the not logical operator. Do this as follows:
df.loc[~((df.Pclass == 3) | (df.Pclass == 1))]
Figure 1.43: Count of passengers who were not holders of either first- or third-class tickets
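The same first-or-third-class filter can also be written with isin, which tests membership in a list of values and often reads more cleanly than chained | conditions. A sketch on a hypothetical Pclass column:

```python
import pandas as pd

toy = pd.DataFrame({'Pclass': [1, 2, 3, 3, 1, 2]})

# Membership test equivalent to (Pclass == 1) | (Pclass == 3)
first_or_third = toy.loc[toy.Pclass.isin([1, 3])]

# The negation, equivalent to using the ~ operator on the expression
second_only = toy.loc[~toy.Pclass.isin([1, 3])]
```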
We no longer need the Unnamed: 0 column, so delete it using the del operator:
del df['Unnamed: 0']
df.head()
Figure 1.44: The del operator
Now that we are confident with some pandas basics, as well as some more advanced indexing and selecting tools, let's look at some other DataFrame methods. For a complete list of all methods available in a DataFrame, we can refer to the class documentation.
Note
The pandas documentation is available at.
You should now know how many methods are available within a DataFrame. There are far too many to cover in detail in this chapter, so we will select a few that will give you a great start in supervised machine learning.
We have already seen the use of one method, head(), which provides the first five lines of the DataFrame. We can select more or less lines if we wish, by providing the number of lines as an argument, as shown here:
df.head(n=20)  # 20 lines
df.head(n=32)  # 32 lines
Another useful method is describe, which is a super-quick way of getting the descriptive statistics of the data within a DataFrame. We can see next that the sample size (count), mean, minimum, maximum, standard deviation, and 25th, 50th, and 75th percentiles are returned for all columns of numerical data in the DataFrame (note that text columns have been omitted):
df.describe()
Figure 1.45: The describe method
Note that only columns of numerical data have been included within the summary. This simple command provides us with a lot of useful information; looking at the values for count (which counts the number of valid samples), we can see that there are 1,046 valid samples in the Age category, but 1,308 in Fare, and only 891 in Survived. We can see that the youngest person was 0.17 years, the average age is 29.898, and the eldest 80. The minimum fare was £0, with £33.30 the average and £512.33 the most expensive. If we look at the Survived column, we have 891 valid samples, with a mean of 0.38, which means about 38% survived.
We can also get these values separately for each of the columns by calling the respective methods of the DataFrame, as shown here:
df.count()
Figure 1.46: The count method
But we have some columns that contain text data, such as Embarked, Ticket, Name, and Sex. So, what about these? How can we get some descriptive information for these columns? We can still use describe; we just need to pass it some more information. By default, describe will only include numerical columns and will compute the 25th, 50th, and 75th percentiles. But we can configure this to include text-based columns by passing the include = 'all' argument, as shown here:
df.describe(include='all')
Figure 1.47: The describe method with text-based columns
That's better: now we have much more information. Looking at the Cabin column, we can see that there are 295 entries, with 186 unique values. The most common values are C32, C25, and C27, and they occur 6 times (from the freq value). Similarly, if we look at the Embarked column, we see that there are 1,307 entries, 3 unique values, and that the most commonly occurring value is S with 914 entries.
Notice the occurrence of NaN values in our describe output table. NaN, or Not a Number, values are very important within DataFrames, as they represent missing or not available data. The ability of the pandas library to read from data sources that contain missing or incomplete information is both a blessing and a curse. Many other libraries would simply fail to import or read the data file in the event of missing information, while the fact that it can be read also means that the missing data must be handled appropriately.
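The behavior of NaN is worth seeing in isolation: it propagates silently through element-wise arithmetic, while most pandas reductions skip it by default. A minimal sketch:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])

# Detection: True where the value is missing
missing_mask = list(s.isna())

# Reductions skip NaN by default (skipna=True)...
mean_skipna = s.mean()

# ...but element-wise arithmetic propagates it
plus_one_has_nan = (s + 1).isna().any()
```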
When looking at the output of the describe method, you should notice that the Jupyter notebook renders it in the same way as the original DataFrame that we read in using read_csv. There is a very good reason for this, as the results returned by the describe method are themselves a pandas DataFrame and thus possess the same methods and characteristics as the data read in from the CSV file. This can be easily verified using Python's built-in type function:
Figure 1.48: Checking the type
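The check itself is a single line; the following sketch reproduces it on a hypothetical toy frame rather than the full dataset:

```python
import pandas as pd

toy = pd.DataFrame({'Age': [22.0, 38.0], 'Fare': [7.25, 71.28]})

# describe returns another DataFrame, with the statistics as its index
summary = toy.describe()
is_dataframe = isinstance(summary, pd.DataFrame)
```

Because the result is a DataFrame, we can index into it just like any other, for example summary.loc['count', 'Age'].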
Now that we have a summary of the dataset, let's dive in with a little more detail to get a better understanding of the available data.
Note
A comprehensive understanding of the available data is critical in any supervised learning problem. The source and type of the data, the means by which it is collected, and any errors potentially resulting from the collection process all have an effect on the performance of the final model.
Hopefully, by now, you are comfortable with using pandas to provide a high-level overview of the data. We will now spend some time looking into the data in greater detail.
We have already seen how we can index or select rows or columns from a DataFrame and use advanced indexing techniques to filter the available data based on specific criteria. Another handy method that allows for such selection is the groupby method, which provides a quick method for selecting groups of data at a time and provides additional functionality through the DataFrameGroupBy object:
Use the groupby method to group the data by the Embarked column. How many different values for Embarked are there? Let's see:
embarked_grouped = df.groupby('Embarked')
print(f'There are {len(embarked_grouped)} Embarked groups')
Figure 1.49: Grouping the data by the Embarked column
What does the groupby method actually do? Let's check. Display the output of embarked_grouped.groups:
embarked_grouped.groups
Figure 1.50: Output of embarked_grouped.groups
We can see here that the three groups are C, Q, and S, and that embarked_grouped.groups is actually a dictionary where the keys are the groups. The values are the rows or indexes of the entries that belong to that group.
Use the iloc method to inspect row 1 and confirm that it belongs to embarked group C:
df.iloc[1]
Figure 1.51: Inspecting row 1
As the groups are a dictionary, we can iterate through them and execute computations on the individual groups. Compute the mean age for each group, as shown here:
for name, group in embarked_grouped:
    print(name, group.Age.mean())
Figure 1.52: Computing the mean age for each group using iteration
Another option is to use the aggregate method, or agg for short, and provide it the function to apply across the columns. Use the agg method to determine the mean of each group:
embarked_grouped.agg(np.mean)
Figure 1.53: Using the agg method
So, how exactly does agg work and what type of functions can we pass it? Before we can answer these questions, we need to first consider the data type of each column in the DataFrame, as each column is passed through this function to produce the result we see here. Each DataFrame is comprised of a collection of columns of pandas series data, which in many ways operates just like a list. As such, any function that can take a list or a similar iterable and compute a single value as a result can be used with agg.
As an example, define a simple function that returns the first value in the column, then pass that function through to agg:
def first_val(x):
    return x.values[0]

embarked_grouped.agg(first_val)
Figure 1.54: Using the agg method with a function
One common and useful way of implementing agg is through the use of Lambda functions.
Lambda or anonymous functions (also known as inline functions in other languages) are small, single-expression functions that can be declared and used without the need for a formal function definition via use of the def keyword. Lambda functions are essentially provided for convenience and aren't intended to be used for extensive periods. The standard syntax for a Lambda function is as follows (always starting with the lambda keyword):
lambda <input values>: <computation for values to be returned>
In this exercise, we will create a Lambda function that returns the first value in a column and use it with agg:
Write the first_val function as a Lambda function, passed to agg:
embarked_grouped.agg(lambda x: x.values[0])
Figure 1.55: Using the agg method with a Lambda function
Obviously, we get the same result, but notice how much more convenient the Lambda function was to use, especially given the fact that it is only intended to be used briefly.
We can also pass multiple functions to agg via a list to apply the functions across the dataset. Pass the Lambda function as well as the NumPy mean and standard deviation functions, like this:
embarked_grouped.agg([lambda x: x.values[0], np.mean, np.std])
Figure 1.56: Using the agg method with multiple Lambda functions
What if we wanted to apply different functions to different columns in the DataFrame? Apply numpy.sum to the Fare column and the Lambda function to the Age column by passing agg a dictionary where the keys are the columns to apply the function to and the values are the functions themselves:
embarked_grouped.agg({
    'Fare': np.sum,
    'Age': lambda x: x.values[0]
})
Figure 1.57: Using the agg method with a dictionary of different columns
Finally, you can also execute the groupby method using more than one column. Provide the method with a list of the columns (Sex and Embarked) to groupby, like this:
age_embarked_grouped = df.groupby(['Sex', 'Embarked'])
age_embarked_grouped.groups
Figure 1.58: Using the groupby method with more than one column
Similar to when the groupings were computed by just the Embarked column, we can see here that a dictionary is returned where the keys are the combination of the Sex and Embarked columns returned as a tuple. The first key-value pair in the dictionary is a tuple, ('Male', 'S'), and the values correspond to the indices of rows with that specific combination. There will be a key-value pair for each combination of unique values in the Sex and Embarked columns.
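A quick way to see how many rows fall into each (Sex, Embarked) combination is size, and unstack pivots the tuple-keyed result into a readable table. A sketch on hypothetical toy data:

```python
import pandas as pd

toy = pd.DataFrame({
    'Sex': ['male', 'male', 'female', 'female', 'male'],
    'Embarked': ['S', 'C', 'S', 'S', 'S'],
})

# One count per (Sex, Embarked) combination, keyed by tuples
counts = toy.groupby(['Sex', 'Embarked']).size()

# Pivot the inner level of the tuple key into columns
table = counts.unstack('Embarked', fill_value=0)
```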
The quality of data used in any machine learning problem, supervised or unsupervised, is critical to the performance of the final model, and should be at the forefront when planning any machine learning project. As a simple rule of thumb, if you have clean data, in sufficient quantity, with a good correlation between the input data type and the desired output, then the specifics regarding the type and details of the selected supervised learning model become significantly less important to achieve a good result.
In reality, however, this is rarely the case. There are usually some issues regarding the quantity of available data, the quality or signal-to-noise ratio in the data, the correlation between the input and output, or some combination of all three factors. As such, we will use this last section of this chapter to consider some of the data quality problems that may occur and some mechanisms for addressing them. Previously, we mentioned that in any machine learning problem, having a thorough understanding of the dataset is critical if we are to construct a high-performing model. This is particularly the case when looking into data quality and attempting to address some of the issues present within the data. Without a comprehensive understanding of the dataset, additional noise or other unintended issues may be introduced during the data cleaning process, leading to a further degradation of performance.
Note
A detailed description of the Titanic dataset and the type of data included is contained in the Loading Data in pandas section. If you need a quick refresher, go back and review these details now.
As we discussed earlier, the ability of pandas to read data with missing values is both a blessing and a curse and arguably is the most common issue that needs to be managed before we can continue with developing our supervised learning model. The simplest, but not necessarily the most effective, method is to just remove or ignore those entries that are missing data. We can easily do this in pandas using the dropna method of the DataFrame:
complete_data = df.dropna()
There is one very significant consequence of simply dropping rows with missing data and that is we may be throwing away a lot of important information. This is highlighted very clearly in the Titanic dataset as a lot of rows contain missing data. If we were to simply ignore these rows, we would start with a sample size of 1,309 and end with a sample of 183 entries. Developing a reasonable supervised learning model with a little over 10% of the data would be very difficult indeed:
Figure 1.59: Total number of rows and total number of rows with NaN values
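The comparison behind the figure can be reproduced by measuring the length before and after dropna. A sketch on a hypothetical toy frame with missing values:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    'Age': [22.0, np.nan, 26.0, 35.0],
    'Cabin': [np.nan, np.nan, 'C85', np.nan],
})

rows_before = len(toy)

# dropna discards every row containing at least one NaN
rows_after = len(toy.dropna())
```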
So, with the exception of the early, explorative phase, it is rarely acceptable to simply discard all rows with invalid information. We can be a little more sophisticated about this though. Which rows are actually missing information? Is the missing information problem unique to certain columns or is it consistent throughout all columns of the dataset? We can use aggregate to help us here as well:
df.aggregate(lambda x: x.isna().sum())
Figure 1.60: Using agg with a Lambda function to identify rows with NaN values
Now, this is useful! We can see that the vast majority of missing information is in the Cabin column, some in Age, and a little more in Survived. This is one of the first times in the data cleaning process that we may need to make an educated judgement call.
What do we want to do with the Cabin column? There is so much missing information here that it, in fact, may not be possible to use it in any reasonable way. We could attempt to recover the information by looking at the names, ages, and number of parents/siblings and see whether we can match some families together to provide information, but there would be a lot of uncertainty in this process. We could also simplify the column by using the level of the cabin on the ship rather than the exact cabin number, which may then correlate better with name, age, and social status. This is unfortunate as there could be a good correlation between Cabin and Survived, as perhaps those passengers in the lower decks of the ship may have had a harder time evacuating. We could examine only the rows with valid Cabin values to see whether there is any predictive power in the Cabin entry; but, for now, we will simply disregard Cabin as a reasonable input (or feature).
We can see that the Embarked and Fare columns only have three missing samples between them. If we decided that we needed the Embarked and Fare columns for our model, it would be a reasonable argument to simply drop these rows. We can do this using our indexing techniques, where ~ represents the not operation, or flipping the result (that is, where df.Embarked is not NaN and df.Fare is not NaN):
df_valid = df.loc[(~df.Embarked.isna()) & (~df.Fare.isna())]
The missing age values are a little more interesting, as there are too many rows with missing age values to just discard them. But we also have a few more options here, as we can have a little more confidence in some plausible values to fill in. The simplest option would be to simply fill in the missing age values with the mean age for the dataset:
df_valid[['Age']] = df_valid[['Age']].fillna(df_valid.Age.mean())
This is OK, but there are probably better ways of filling in the data rather than just giving all 263 people the same value. Remember, we are trying to clean up the data with the goal of maximizing the predictive power of the input features and the survival rate. Giving everyone the same value, while simple, doesn't seem too reasonable. What if we were to look at the average ages of the members of each of the classes (Pclass)? This may give a better estimate, as the average age reduces from class 1 through 3:
Figure 1.61: Average ages of the members of each of the classes
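The per-class averages shown in the figure come from a single groupby call; this sketch assumes a frame with Pclass and Age columns:

```python
import pandas as pd

toy = pd.DataFrame({
    'Pclass': [1, 1, 2, 3, 3],
    'Age': [40.0, 50.0, 30.0, 20.0, 22.0],
})

# Mean age per ticket class; in the real data it falls from class 1 to 3
mean_age_by_class = toy.groupby('Pclass')['Age'].mean()
```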
What if we consider sex as well as ticket class (social status)? Do the average ages differ here too? Let's find out:
for name, grp in df_valid.groupby(['Pclass', 'Sex']):
    print('%i' % name[0], name[1], '%0.2f' % grp['Age'].mean())
Figure 1.62: Average ages of the members of each sex and class
We can see here that males in all ticket classes are typically older. This combination of sex and ticket class provides much more resolution than simply filling in all missing fields with the mean age. To do this, we will use the transform method, which applies a function to the contents of a series or DataFrame and returns another series or DataFrame with the transformed values. This is particularly powerful when combined with the groupby method:
mean_ages = df_valid.groupby(['Pclass', 'Sex'])['Age'].\
    transform(lambda x: x.fillna(x.mean()))
df_valid.loc[:, 'Age'] = mean_ages
There is a lot in these two lines of code, so let's break them down into components. Let's look at the first line:
mean_ages = df_valid.groupby(['Pclass', 'Sex'])['Age'].\
    transform(lambda x: x.fillna(x.mean()))
We are already familiar with df_valid.groupby(['Pclass', 'Sex'])['Age'], which groups the data by ticket class and sex and returns only the Age column. The lambda x: x.fillna(x.mean()) Lambda function takes the input pandas series, and fills the NaN values with the mean value of the series.
The second line assigns the filled values within mean_ages to the Age column. Note the use of the loc[:, 'Age'] indexing method, which indicates that all rows within the Age column are to be assigned the values contained within mean_ages:
df_valid.loc[:, 'Age'] = mean_ages
We have described a few different ways of filling in the missing values within the Age column, but by no means has this been an exhaustive discussion. There are many more methods that we could use to fill the missing data: we could apply random values within one standard deviation of the mean for the grouped data, we could also look at grouping the data by sex and the number of parents/children (Parch) or by the number of siblings, or by ticket class, sex, and the number of parents/children. What is most important about the decisions made during this process is the end result of the prediction accuracy. We may need to try different options, rerun our models and consider the effect on the accuracy of final predictions. This is an important aspect of the process of feature engineering, that is, selecting the features or components that provide the model with the most predictive power; you will find that, during this process, you will try a few different features, run the model, look at the end result and repeat, until you are happy with the performance.
The ultimate goal of this supervised learning problem is to predict the survival of passengers on the Titanic given the information we have available. So, that means that the Survived column provides our labels for training. What are we going to do if we are missing 418 of the labels? If this were a project where we had control over the collection of the data and access to its origins, we would obviously correct this by recollecting or asking for the labels to be clarified. With the Titanic dataset, we do not have this ability, so we must make another educated judgement call. We could try some unsupervised learning techniques to see whether there are some patterns in the survival information that we could use. However, we may have no choice but to simply ignore these rows. The task is to predict whether a person survived or perished, not whether they may have survived. By estimating the ground truth labels, we may introduce significant noise into the dataset, reducing our ability to accurately predict survival.
Missing data is not the only problem that may be present within a dataset. Class imbalance â that is, having more of one class or classes compared to another â can be a significant problem, particularly in the case of classification problems (we'll see more on classification in Chapter 4, Classification), where we are trying to predict which class (or classes) a sample is from. Looking at our Survived column, we can see that there are far more people who perished (Survived equals 0) than survived (Survived equals 1) in the dataset:
Figure 1.63: Number of people who perished versus survived
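Counts like those in the figure come from value_counts, which tallies each unique label in a column. A sketch on hypothetical toy data:

```python
import pandas as pd

toy = pd.DataFrame({'Survived': [0, 0, 0, 1, 1]})

# One count per class label; imbalance is visible at a glance
counts = toy.Survived.value_counts()
```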
If we don't take this class imbalance into account, the predictive power of our model could be significantly reduced as, during training, the model would simply need to guess that the person did not survive to be correct 61% (549 / (549 + 342)) of the time. If, in reality, the actual survival rate was, say, 50%, then when being applied to unseen data, our model would predict not survived too often.
There are a few options available for managing class imbalance, one of which, similar to the missing data scenario, is to randomly remove samples from the over-represented class until balance has been achieved. Again, this option is not ideal, or perhaps even appropriate, as it involves ignoring available data. A more constructive alternative is to oversample the under-represented class by randomly copying its samples to boost their number. While removing data can lead to accuracy issues due to discarding useful information, oversampling the under-represented class can lead to being unable to predict the label of unseen data, also known as overfitting (which we will cover in Chapter 5, Ensemble Modeling).
Adding some random noise to the input features for oversampled data may prevent some degree of overfitting, but this is highly dependent on the dataset itself. As with missing data, it is important to check the effect of any class imbalance corrections on the overall model performance. It is relatively straightforward to copy more data into a DataFrame using the append method, which works in a very similar fashion to lists. If we wanted to copy the first row to the end of the DataFrame, we would do this:
df_oversample = df.append(df.iloc[0])
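Note that in recent pandas releases (2.0 and later), DataFrame.append has been removed; pd.concat is the replacement. It also makes oversampling a whole under-represented class straightforward via sample(replace=True). A sketch on hypothetical toy data:

```python
import pandas as pd

df = pd.DataFrame({'Survived': [0, 0, 0, 1], 'Age': [30, 40, 50, 25]})

# Equivalent of appending the first row to the end
df_oversample = pd.concat([df, df.iloc[[0]]], ignore_index=True)

# Oversample the minority class up to the majority count
minority = df[df.Survived == 1]
extra = minority.sample(n=2, replace=True, random_state=0)
df_balanced = pd.concat([df, extra], ignore_index=True)
```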
The field of machine learning can be considered a branch of the larger field of statistics. As such, the principles of confidence and sample size can also be applied to understand the issues with a small dataset. Recall that if we were to take measurements from a data source with high variance, then the degree of uncertainty in the measurements would also be high and more samples would be required to achieve a specified confidence in the value of the mean. The same principles can be applied to machine learning datasets. Datasets with high variance in the features with the most predictive power generally require more samples for reasonable performance, as more confidence is also required.
There are a few techniques that can be used to compensate for a reduced sample size, such as transfer learning. However, these lie outside the scope of this book. Ultimately, though, there is only so much that can be done with a small dataset, and significant performance increases may only occur once the sample size is increased.
In this activity, we will test ourselves on the various pandas functions we have learned about in this chapter. We will use the same Titanic dataset for this.
The steps to be performed are as follows:
Open a new Jupyter notebook.
Use pandas to load the Titanic dataset and describe the summary data for all columns.
We don't need the Unnamed: 0 column. In Exercise 7: Advanced Indexing and Selection, we demonstrated how to remove the column using the del command. How else could we remove this column? Remove this column without using del.
Compute the mean, standard deviation, minimum, and maximum values for the columns of the DataFrame without using describe.
What about the 33rd, 66th, and 99th percentiles? How would we get these values using their individual methods? Use the quantile method to do this.
How many passengers were from each class? Find the answer using the groupby method.
How many passengers were from each class? Find the answer by using selecting/indexing methods to count the members of each class.
Confirm that the answers to Step 6 and Step 7 match.
Determine who the eldest passenger in third class was.
For a number of machine learning problems, it is very common to scale the numerical values between 0 and 1. Use the agg method with Lambda functions to scale the Fare and Age columns between 0 and 1.
There is one individual in the dataset without a listed Fare value, which can be found out as follows:
df_nan_fare = df.loc[(df.Fare.isna())]
df_nan_fare
The output will be as follows:
Figure 1.64: Individual without a listed Fare value
Replace the NaN value of this row in the main DataFrame with the mean Fare value for those corresponding with the same class and Embarked location using the groupby method.
In this chapter, we introduced the concept of supervised machine learning, along with a number of use cases, including the automation of manual tasks such as identifying hairstyles from the 1960s and 1980s. In this introduction, we encountered the concept of labeled datasets and the process of mapping one information set (the input data or features) to the corresponding labels.
We took a practical approach to the process of loading and cleaning data using Jupyter notebooks and the extremely powerful pandas library. Note that this chapter has only covered a small fraction of the functionality within pandas, and that an entire book could be dedicated to the library itself. It is recommended that you become familiar with reading the pandas documentation and continue to develop your pandas skills through practice.
The final section of this chapter covered a number of data quality issues that need to be considered to develop a high-performing supervised learning model, including missing data, class imbalance, and low sample sizes. We discussed a number of options for managing such issues and emphasized the importance of checking these mitigations against the performance of the model.
In the next chapter, we will extend upon the data cleaning process that we covered and will investigate the data exploration and visualization process. Data exploration is a critical aspect of any machine learning solution, as without a comprehensive knowledge of the dataset, it would be almost impossible to model the information provided. | https://www.packtpub.com/product/applied-supervised-learning-with-python/9781789954920 | CC-MAIN-2020-50 | refinedweb | 10,176 | 59.74 |
Copyright © 2002 W3C® (MIT, INRIA, Keio), All Rights Reserved. W3C liability, trademark, document use and software licensing rules apply.
This document describes the Usage Scenarios guiding the development of the Web Service Description specification.
This is the first W3C Working Draft of the Web Services Description Usage Scenarios document. It is a chartered deliverable of the Web Services Description Working Group (WG), which is part of the Web Services Activity. The Working Group has agreed to publish this document, although this document does not necessarily represent consensus within the Working Group. This document may change substantially due to coordination and consolidation efforts with Web Services Usage Scenarios work undertaken in the Web Services Architecture Working Group.

2.1 Messaging
2.1.1 UC0001 Fire-and-forget[WS]
2.1.1.1 Scenario Definition
2.1.1.2 Relates To
2.1.1.3 Scenario Description
2.1.2 UC0002 Oneway Message With Guaranteed Delivery[WS]
2.1.2.1 Scenario Definition
2.1.2.2 Relates To
2.1.2.3 Scenario Description
2.1.3 UC0006 Document Centric Computing[WS]
2.1.3.1 Scenario Definition
2.1.3.2 Relates To
2.1.3.3 Scenario Description
2.1.4 UC0015 Request-Response [JJM]
2.1.4.1 Scenario Definition
2.1.4.2 Editors' Comments
2.1.4.3 Relates To
2.1.4.4 Scenario Description
2.1.5 UC0025 Event notification [JJM]
2.1.5.1 Scenario Definition
2.1.5.2 Editors' Comments
2.1.5.3 Relates To
2.1.5.4 Scenario Description
2.1.6 UC0028 Sync/Async Operations [IS]
2.1.6.1 Scenario Definition
2.1.6.2 Relates To
2.1.6.3 Scenario Description
2.1.7 UC0030 Events [IS]
2.1.7.1 Scenario Definition
2.1.7.2 Relates To
2.1.7.3 Scenario Description
2.2 Specification
2.2.1 UC0003 Multiple Faults[WS]
2.2.1.1 Scenario Definition
2.2.1.2 Relates To
2.2.1.3 Scenario Description
2.2.2 UC0004 Service Level Attributes[WS]
2.2.2.1 Scenario Definition
2.2.2.2 Relates To
2.2.2.3 Scenario Description
2.2.3 UC0005 Operation Level Attributes[WS]
2.2.3.1 Scenario Definition
2.2.3.2 Relates To
2.2.3.3 Scenario Description
2.2.4 UC0029 Namespaces with data and interfaces [IS]
2.2.4.1 Scenario Definition
2.2.4.2 Relates To
2.2.4.3 Scenario Description
2.2.5 UC0031 Versioning [IS]
2.2.5.1 Scenario Definition
2.2.5.2 Relates To
2.2.5.3 Scenario Description
2.2.6 UC0032 Classification system for operations [JR]
2.2.6.1 Scenario Definition
2.2.6.2 Relates To
2.2.6.3 Scenario Description
2.2.7 UC0033 Header Specification [WV]
2.2.7.1 Scenario Definition
2.2.7.2 Relates To
2.2.7.3 Scenario Description
2.2.8 UC0034B Specifying streaming [YF]
2.2.8.1 Scenario Definition
2.2.8.2 Relates To
2.2.8.3 Editor's Comment
2.2.8.4 Scenario Description
2.2.9 UC0035 Extending PortType [JS]
2.2.9.1 Scenario Definition
2.2.9.2 Relates To
2.2.9.3 Scenario Description
2.3 Service Reference
2.3.1 UC0027 References [IS]
2.3.1.1 Scenario Definition
2.3.1.2 Relates To
2.3.1.3 Scenario Description
2.4 Meta data
2.4.1 UC0026 Service Metadata [IS]
2.4.1.1 Scenario Definition
2.4.1.2 Relates To
2.4.1.3 Scenario Description
2.5 Miscellaneous
2.5.1 UC0034A Obtaining WSDL from the web service itself [YF]
2.5.1.1 Scenario Definition
2.5.1.2 Relates To
2.5.1.3 Scenario Description
2.5.2 UC0036 Storage and Retrieval of WSDL in Registries and Repositories [AR]
2.5.2.1 Scenario Definition
2.5.2.2 Relates To
2.5.2.3 Scenario Description
A References
B Change Log (Non-Normative)
This document describes the use cases of the web services description language. The use cases are meant to capture what is important for a web service to describe itself. There may be several other important aspects of a web service but irrelevant to its operational and interaction descriptions.
We believe that the following viewpoints prove useful in describing the use cases for the web service description language.
View Point 1 [VP1]: The web service description defines a contract that the web service implements. The web service client exchanges messages based on this contract.
View Point 2 [VP2]: The description language is used by tools to generate proper stubs. These stubs ensure that the expected behavior is implemented for the client.
View Point 3 [VP3]: The web service description captures information that allows one to reason about them semantically.
All the use cases in this document pertain to one or more view-points as described above. Every use case in this document has a scenario definition, a scenario description, and a note on how it relates to one of the view-points outlined above. Sample code is based upon the Web Services Description Language 1.1 [WSDL 1.1].
Ability to describe a one way operation of a web service that has no guaranteed delivery semantics. The input message received as part of such an operation MAY be lost.
A metrics collection service exposes an operation to client applications to report their application usage metrics. These applications opt to report their aggregated usage metrics to such a web service, instead of reporting each individual metric. Consequently, the loss of a message is not critical, as the next update would provide an updated summary. The target web service exposes an interface to report those metrics. For the sake of efficiency and simplicity, the client applications are not interested in receiving any faults; they simply want to send a message and forget about it until the next time.
Ability to describe a one way operation of a web service that has guaranteed delivery semantics. The input message received as part of such an operation MUST NOT be lost.
A web service provides a messaging service. This web service.
Ability to describe an operation of a web service that MAY include message parts that are document attachments along with other regular messages.
A web service is an ebXML [ebXML].
Ability to describe an operation of a web service that responds with an output message or a fault based on at least one or more input messages received.
If UC0001 and UC0002 are accepted, the implications must be well understood and described.
Since the bindings would determine how the messages are actually sent, there may be a need to correlate the request with the response.
Ability to describe an operation of a web service that returns an output message.
Are the events as described in the scenario of different types? Does it make sense to have an associated semantics of guaranteed delivery?
Here is an example of operation definitions.
WS client would then get to use operations properly. Similar to this.
The underlying WS framework would then initiate proper SOAP [SOAP 1.2 Part 1] messaging sequence with acknowledgement and notification phases. SOAP protocol must support asynchronous messaging.
A WS provider can describe events generated by a service as follows:
And this way WS client may subscribe to events like this
And implement a proper handler
The underlying WS framework would take care of the event by either polling (sending a SOAP request) with a specified interval or registering a SOAP listener (endpoint) with the target WS (according to the event definition in WSDL).
We should also describe the SOAP protocol sequence (registration/acknowledgement/notification) for the events in accordance with asynchronous SOAP messaging.
These attributes are part of the meta data of the service. Although you could possibly model the meta data as operations, this meta data is much more cleanly modeled as attributes.
A service can have an OO model like this:
It is possible to represent this model in WSDL and associated XML Schema [XML Schema Part 1] by placing schema and interfaces in the proper XML namespaces. It must be required that namespaces are not lost between the service provider and the client; this should be part of WSDL compliance.
Here is a brief example:
A WS provider can describe versions of interfaces implemented by a service. Such as this.
Need to have a hint of how long it will take for the service to process the request.
My service invocation contains a routing header in which I specify the return path (the path I want the response to use to come back to me). I may want to provide a different routing path depending on whether I expect the response to come in one second or in two weeks. For example, for a very quick turnaround I might want to have the response sent directly to me via HTTP [IETF RFC 2616] post because I know I will have a listener available during the next 10 seconds, but if the processing is going to take days I'd rather have the reply go through another route, using an always-on intermediary that can store the message for me until I am ready to receive it. In order to be able to choose the most appropriate return path, I need to find in the WSDL an indication of how long the service plans to take to fulfill my request. Note: this use case is not about how to specify the use of a SOAP routing header. It is about how to provide in the WSDL information that allows someone to build the message in a smarter way (for example by optimizing the routing header) because s/he knows more about the expected completion time for the request.
Following is a sample of the routing header as specified according to SOAP binding of WSDL
It seems like this would require specifying header elements to indicate streaming.

At the data type level, the service may indicate that it will receive/send streaming as input/output.
Extend existing portTypes to make new ones by including and reusing features/behavior specified by the existing portTypes.
Vertical standards organizations like the UPnP Forum [UPnP].
To support passing references to web services as operation input or output.
A WS provider can define operations that return and/or take as a parameter a reference to another WS interface.
Here is an example of extended attribute definitions and inclusion.
The definition would look as follows:
A schema for this is as follows:
Then a WS client can use references to the interfaces as follows:
The underlying WS framework would support instantiation of a service based on reference (like most already instantiate based on an endpoint URL).
I believe Systinet does something similar, but unless it's mandated by the WSDL standard it is as good as a private app-specific extension.
Here is an example of extended attribute definitions and inclusion.
A WS client can interrogate the metadata attributes as follows:
Similarly for message descriptions.
This scenario requires web services to have a predefined method for obtaining WSDL from the web service itself.
The WSDL specification should define a notion of equivalence of definitions that would be used by registry and repository implementors.
WSDL documents will be registered in registries such as [UDDI] and stored in repositories. The operations of storage and retrieval must preserve the meaning of the WSDL.
The definitions in a WSDL document do not exactly match the entities stored in a UDDI registry. There is a Best Practices document [UDDI Best Practices] that specifies a mapping between WSDL and UDDI. When a service described by a WSDL document is registered in UDDI, some of the WSDL definitions are converted to UDDI entities. When a user discovers a service in a UDDI registry, processors will extract some entities from UDDI and convert them to WSDL definitions. The result of storing and retrieving WSDL information must preserve its meaning.
Similarly, WSDL documents may be stored in repositories that store them in a non-WSDL format, for example a relational database. When the documents are retrieved as WSDL, their meaning must be preserved.
ROS1, using ROS2 Bridge to talk to multiple robots
I want to talk to different machines over a network and would like to use ROS2 / DDS. Our robots are ROS1 and need to stay that way for now. To solve this I want to use ros1<-->ros1_bridge<-->ros1_bridge<-->ros1
Has anyone tried this to add ros2 / DDS between robots?
This is possible, but will require you to have both ROS1 and ROS2 on the robot, which can cause some problems, especially with sourcing. We have managed to implement the ros1_bridge on a robot running a ROS1 stack with a small ROS2 install next to it. All the ROS1 topics will become available in the ROS2 global data-space. So if you give all robots a 2-way ros1_bridge then they will indeed all see each other's topics (assuming you properly set the ROS_DOMAIN_ID values etc.). What will also be very important is namespaces. If you don't properly name topics or don't properly shield topics from being passed to the global dataspace, the robots will start publishing on each other's topics (you don't want robot 1 to publish a cmd_vel to robot 2 !!!!). Your "question" is not very descriptive, so maybe come back with a more properly described use-case.
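To make the setup concrete, the usual shape of such a configuration looks roughly like the following — distribution names, paths, and the domain ID are illustrative, not prescriptive:

```
# On each robot: make both installs visible, pick a shared DDS domain, start the bridge
source /opt/ros/melodic/setup.bash
source /opt/ros/dashing/setup.bash
export ROS_DOMAIN_ID=42        # same value on every robot that should see the others
ros2 run ros1_bridge dynamic_bridge --bridge-all-topics
# in practice, bridge a filtered/namespaced topic list rather than everything
```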
Thanks you answered my question.
I would probably suggest to include multimaster_fkie in such a setup (ie: one per robot), which should include infrastructure to properly deal with some of the "problems" that @MCornelis describes. Then consider the bridge as just that: a bridge or perhaps even a tunnel, and have it only bridge topics that have been configured to be forwarded by multimaster_fkie.
Have you tried this and does it have dds qos?
The message exchange over the bridge will use ROS 2 infrastructure, so yes, that part will have DDS QoS attached. Everything local to the robots (ie: the robot-local ROS graph) will of course not have DDS involved at all, so no QoS there (but that is probably also not needed).
I've not verified how this will interact with ROS 1 service clients blocking, waiting for a response and the bridge failing to deliver the message. | https://answers.ros.org/question/336098/ros1-using-ros2-bridge-to-talk-to-multiple-robots/ | CC-MAIN-2022-21 | refinedweb | 365 | 67.38 |
NAME
pipe, pipe2 - create pipe
SYNOPSIS
#include <unistd.h>

int pipe(int pipefd[2]);

#define _GNU_SOURCE             /* See feature_test_macros(7) */
#include <fcntl.h>              /* Obtain O_* constant definitions */
#include <unistd.h>

int pipe2(int pipefd[2], int flags);
DESCRIPTION
pipe() creates a pipe, a unidirectional data channel that can be used for interprocess communication. The array pipefd is used to return two file descriptors referring to the ends of the pipe: pipefd[0] refers to the read end of the pipe, and pipefd[1] refers to the write end. Data written to the write end of the pipe is buffered by the kernel until it is read from the read end of the pipe. If flags is 0, then pipe2() is the same as pipe().
RETURN VALUE
On success, zero is returned. On error, -1 is returned, errno is set appropriately, and pipefd is left unchanged.
ERRORS
EFAULT pipefd is not valid.
EINVAL (pipe2()) Invalid value in flags.
EMFILE The per-process limit on the number of open file descriptors has been reached.
ENFILE The system-wide limit on the total number of open files has been reached.
VERSIONS
pipe2() was added to Linux in version 2.6.27; glibc support is available starting with version 2.9.
CONFORMING TO
pipe(): POSIX.1-2001, POSIX.1-2008.
pipe2() is Linux-specific.
EXAMPLE
The example program creates a pipe, and then fork(2)s to create a child process. After the fork(2), each process closes the file descriptors that it doesn't need for the pipe. The parent then writes the string contained in the program's command-line argument to the pipe, and the child reads this string a byte at a time from the pipe and echoes it on standard output.
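A smaller, self-contained sketch of the same mechanics (single process, no fork(2); this is my own illustration rather than the man page's program):

```c
#include <string.h>
#include <unistd.h>

/* Write msg into a pipe and read it back into out; returns 1 on success. */
static int pipe_roundtrip(const char *msg, char *out, size_t len)
{
    int pipefd[2];                 /* [0] = read end, [1] = write end */

    if (pipe(pipefd) == -1)
        return 0;

    /* The kernel buffers the bytes until they are read back out. */
    ssize_t written = write(pipefd[1], msg, len);
    ssize_t got = read(pipefd[0], out, len);

    close(pipefd[0]);
    close(pipefd[1]);
    return written == (ssize_t)len && got == (ssize_t)len;
}
```

A pipe has finite capacity (see pipe(7)), so small writes like this complete immediately; larger writes can block until a reader drains the pipe.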
Because I'm an absolute beginner I have some really simple questions, but I can't help myself. Maybe they are too simple, because I can't find the answers in forums, books, or other internet pages. I don't understand the meaning of some parts of my Java code, because I only knew the "normal" Eclipse before, and Android is so different from that.
import android.app.Activity;
import android.app.AlertDialog;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.widget.TextView;
import android.widget.Button;
import android.widget.Toast;

public class Spritkostenactivity extends Activity implements View.OnClickListener {

    private TextView textcosts;
    private TextView textcostsperson;
    private Button buttonnew;
    private Button buttonstop;
Are these elements (Button and TextView) objects, variables, or attributes? What should I call them?
buttonnew = (Button) findViewById(R.id.buttonerneuteberechnung);
buttonnew.setOnClickListener(this);
buttonstop = (Button) findViewById(R.id.buttonschliessen);
buttonstop.setOnClickListener(this);
Now I put the elements from the XML file into the "buttons". If they are objects, why don't I have to write something like "Button buttonnew = new ..."? After that the buttons start a method, so don't they need to be objects? And what does the "this" mean? I read that it means the current object, but what does the current object mean here?
Double.parseDouble(liters)
My last question is about the method Double.parseDouble(). Can you tell me what the Double (in Double.parseDouble) stands for? Is it the class name that the method is called on?
I'm really sorry that there are several questions in one post, but I think this is basic knowledge, and it wouldn't be necessary to split it into several posts. I hope you can help me. Thank you!
> at91rm9200bsp.rar > romInit

#if defined(CPU_920T)

    /*
     * Set processor and MMU to known state as follows (we may have not
     * been entered from a reset). We must do this before setting the CPU
     * mode as we must set PROG32/DATA32.
     *
     * MMU Control Register layout.
     *
     * bit
     *  0 M 0 MMU disabled
     *  1 A 0 Address alignment fault disabled, initially
     *  2 C 0 Data cache disabled
     *  3 W 0 Write Buffer disabled
     *  4 P 1 PROG32
     *  5 D 1 DATA32
     *  6 L 1 Should Be One (Late abort on earlier CPUs)
     *  7 B ? Endianness (1 => big)
     *  8 S 0 System bit to zero } Modifies MMU protections, not really
     *  9 R 1 ROM bit to one     } relevant until MMU switched on later.
     * 10 F 0 Should Be Zero
     * 11 Z 0 Should Be Zero (Branch prediction control on 810)
     * 12 I 0 Instruction cache control
     */

    /* Setup MMU Control Register */

    MOV     r1, #MMU_INIT_VALUE             /* Defined in mmuArmLib.h */

#if defined(AT91RM9200_EARLY_I_CACHE_ENABLE)
    ORR     r1, r1, #MMUCR_I_ENABLE         /* conditionally enable Icache */
#endif

    MCR     CP_MMU, 0, r1, c1, c0, 0        /* Write to MMU CR */

    /*
     * If MMU was on before this, then we'd better hope it was set
     * up for flat translation or there will be problems. The next
     * 2/3 instructions will be fetched "translated" (number depends
     * on CPU).
     *
     * We would like to discard the contents of the Write-Buffer
     * altogether, but there is no facility to do this. Failing that,
     * we do not want any pending writes to happen at a later stage,
     * so drain the Write-Buffer, i.e. force any pending writes to
     * happen now.
     */

    MOV     r1, #0                          /* data SBZ */
    MCR     CP_MMU, 0, r1, c7, c10, 4       /* drain write-buffer */

    /* Flush (invalidate) both I and D caches */

    MCR     CP_MMU, 0, r1, c7, c7, 0        /* R1 = 0 from above, data SBZ */

    /*
     * Set Process ID Register to zero, this effectively disables
     * the process ID remapping feature.
     */

    MOV     r1, #0
    MCR     CP_MMU, 0, r1, c13, c0, 0

#endif /* defined(CPU_720T,740T,920T,940T,946ES) */

    /* disable interrupts in CPU and switch to SVC32 mode */

    MRS     r1, cpsr
    BIC     r1, r1, #MASK_MODE
    ORR     r1, r1, #MODE_SVC32 | I_BIT | F_BIT
    MSR     cpsr, r1

    /*
     * CPU INTERRUPTS DISABLED
     *
     * disable individual interrupts in the interrupt controller
     */

    LDR     r2, =AIC_BASE_ADDR              /* R2 -> interrupt controller */
    MVN     r1, #0                          /* &FFFFFFFF */
    STR     r1, [r2, #AIC_IDCR_OFFSET]      /* disable all IRQ sources */

    /* test whether writable RAM is already mapped at address 0 */

    LDR     r1, =0x55555555
    MOV     r2, #0
    STR     r1, [r2]
    LDR     r3, [r2]
    cmp     r1, r3
    bne     mem_need_remap
    LDR     r1, =0xAAAAAAAA
    STR     r1, [r2]
    LDR     r3, [r2]
    cmp     r1, r3
    beq     skip_mem_remap

mem_need_remap:
    MOV     r1, #1
    LDR     r2, =MC_BASE_ADDR
    STR     r1, [r2, #MC_RCR_OFFSET]

skip_mem_remap:
    LDR     sp, =ARM920T_ROMINIT_C_STACK_TOP
    MOV     lr, pc
    LDR     pc, L$_romCInit_ADDR
Vincent Lefevre <vinc...@vinc17.net> writes:

> That said, perhaps GMP might be improved to detect the ABI by a
> simple parsing of $CFLAGS when provided by the user (in case values
> like -m32 or -m64 are standard).
In Nettle, I try to detect the ABI used by the configured C compiler, with a configure test like

case "$host_cpu" in
  [x86_64 | amd64])
    AC_TRY_COMPILE([
#if defined(__x86_64__) || defined(__arch64__)
#error 64-bit x86
#endif
    ], [], [
      ABI=32
    ], [
      ABI=64
    ])
    ;;
  ...

(and the above misses x32). That's a bit verbose, but I think it's less brittle than trying to parse compiler flags. For one, how would we know which ABI the compiler uses by default? The result is used when selecting assembly code.

But then the way I recommend to configure ABI is still not to set CFLAGS, but to set CC. E.g.,

./configure CC='gcc -m32' CXX='g++ -m32'

Regards,
/Niels

--
Niels Möller. PGP-encrypted email is preferred. Keyid 368C6677.
Internet email is subject to wholesale government surveillance.

_______________________________________________
gmp-bugs mailing list
gmp-bugs@gmplib.org
A model .NET web service based on Domain Driven Design Part 3: the Domain
September 19, 2013 33 Comments
Introduction
In the previous post we laid the theoretical foundation for our domains. Now it’s finally time to see some code. In this post we’ll concentrate on the Domain layer and we’ll also start building the Infrastructure layer.
Infrastructure?
Often the word infrastructure is taken to mean the storage mechanism, like an SQL database, or the physical layers of a system, such as the servers. However, in this case we mean something different.
Infrastructure
The Entities we discussed in the previous post will all derive from an abstract EntityBase class. We could put it directly into the domain layer. However, think of this class as the base for all entities across all your DDD projects where you put all common functionality for your domains.
Create a new blank solution in VS and call it DDDSkeletonNET.Portal. Add a new C# class library called DDDSkeletonNET.Infrastructure.Common. Remove Class1 and add a new folder called Domain. In that folder add a new class called EntityBase:
public abstract class EntityBase<IdType> { public IdType Id { get; set; } public override bool Equals(object entity) { return entity != null && entity is EntityBase<IdType> && this == (EntityBase<IdType>)entity; } public override int GetHashCode() { return this.Id.GetHashCode(); } public static bool operator ==(EntityBase<IdType> entity1, EntityBase<IdType> entity2) { if ((object)entity1 == null && (object)entity2 == null) { return true; } if ((object)entity1 == null || (object)entity2 == null) { return false; } if (entity1.Id.ToString() == entity2.Id.ToString()) { return true; } return false; } public static bool operator !=(EntityBase<IdType> entity1, EntityBase<IdType> entity2) { return (!(entity1 == entity2)); } }
We only have one property at this point, the ID, whose type can be specified through the IdType type parameter. Often this will be an integer or maybe a GUID or even some auto-generated string. The rest of the code takes care of comparison issues so that you can compare two entities with the ‘==’ operator or the Equals method. Recall that entityA == entityB if an only if their IDs are identical, hence the comparison being based on the ID property.
Domains need to validate themselves when insertions or updates are executed so let’s add the following abstract method to EntityBase.cs:
protected abstract void Validate();
We can describe our business rules in many ways but the simplest format is a description in words. Add a class called BusinessRule to the Domain folder:
public class BusinessRule { private string _ruleDescription; public BusinessRule(string ruleDescription) { _ruleDescription = ruleDescription; } public String RuleDescription { get { return _ruleDescription; } } }
Coming back to EntityBase.cs we’ll store the list of broken business rules in a private variable:
private List<BusinessRule> _brokenRules = new List<BusinessRule>();
This list represents all the business rules that haven’t been adhered to during the object composition: the total price is incorrect, the customer name is empty, the person’s age is negative etc., so it’s all the things that make the state of the object invalid. We don’t want to save objects in an invalid state in the data storage, so we definitely need validation.
Implementing entities will be able to add to this list through the following method:
protected void AddBrokenRule(BusinessRule businessRule) { _brokenRules.Add(businessRule); }
External code will collect all broken rules by calling this method:
public IEnumerable<BusinessRule> GetBrokenRules() { _brokenRules.Clear(); Validate(); return _brokenRules; }
We first clear the list so that we don’t return any previously stored broken rules. They may have been fixed by then. We then run the Validate method which is implemented in the concrete domain classes. The domain will fill up the list of broken rules in that implementation. The list is then returned. We’ll see in a later post on the application service layer how this method can be used from the outside.
We’re now ready to implement the first domain object. Add a new C# class library called DDDSkeleton.Portal.Domain to the solution. Let’s make this easy for us and create the most basic Customer domain. Add a new folder called Customer and in it a class called Customer which will derive from EntityBase.cs. The Domain project will need to reference the Infrastructure project. Let’s say the Customer will have an id of type integer. At first the class will look as follows:
public class Customer : EntityBase<int> { protected override void Validate() { throw new NotImplementedException(); } }
We know from the domain expert that every Customer will have a name, so we add the following property:
public string Name { get; set; }
Our first business rule says that the customer name cannot be null or empty. We’ll store these rule descriptions in a separate file within the Customer folder:
public static class CustomerBusinessRule { public static readonly BusinessRule CustomerNameRequired = new BusinessRule("A customer must have a name."); }
We can now implement the Validate() method in the Customer domain:
protected override void Validate() { if (string.IsNullOrEmpty(Name)) { AddBrokenRule(CustomerBusinessRule.CustomerNameRequired); } }
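To sketch the mechanism end to end, here's an illustrative bit of calling code of my own — not the application service layer, which comes in a later post (it assumes a using System.Linq directive for Any()):

```csharp
Customer customer = new Customer() { Id = 1, Name = "" };

// Name is empty, so Validate() registers the CustomerNameRequired rule
foreach (BusinessRule rule in customer.GetBrokenRules())
{
    Console.WriteLine(rule.RuleDescription);   // "A customer must have a name."
}

// Fix the state and the broken-rules list comes back empty
customer.Name = "John";
bool isValid = !customer.GetBrokenRules().Any();   // true, at this stage of the walkthrough
```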
Let’s now see how value objects can be used in code. The domain expert says that every customer will have an address property. We decide that we don’t need to track Addresses the same way as Customers, i.e. we don’t need to set an ID on them. We’ll need a base class for value objects in the Domain folder of the Infrastructure layer:
public abstract class ValueObjectBase { private List<BusinessRule> _brokenRules = new List<BusinessRule>(); public ValueObjectBase() { } protected abstract void Validate(); public void ThrowExceptionIfInvalid() { _brokenRules.Clear(); Validate(); if (_brokenRules.Count() > 0) { StringBuilder issues = new StringBuilder(); foreach (BusinessRule businessRule in _brokenRules) { issues.AppendLine(businessRule.RuleDescription); } throw new ValueObjectIsInvalidException(issues.ToString()); } } protected void AddBrokenRule(BusinessRule businessRule) { _brokenRules.Add(businessRule); } }
…where ValueObjectIsInvalidException looks as follows:
public class ValueObjectIsInvalidException : Exception { public ValueObjectIsInvalidException(string message) : base(message) {} }
You’ll recognise the Validate and AddBrokenRule methods. Value objects can of course also have business rules that need to be enforced. In the Domain layer add a new folder called ValueObjects. Add a class called Address in that folder:
public class Address : ValueObjectBase { protected override void Validate() { throw new NotImplementedException(); } }
Add the following properties to the class:
public string AddressLine1 { get; set; } public string AddressLine2 { get; set; } public string City { get; set; } public string PostalCode { get; set; }
The domain expert says that every Address object must have a valid City property. We can follow the same structure we took above. Add the following class to the ValueObjects folder:
public static class ValueObjectBusinessRule { public static readonly BusinessRule CityInAddressRequired = new BusinessRule("An address must have a city."); }
We can now complete the Validate method of the Address object:
protected override void Validate() { if (string.IsNullOrEmpty(City)) { AddBrokenRule(ValueObjectBusinessRule.CityInAddressRequired); } }
Let’s add this new Address property to Customer:
public Address CustomerAddress { get; set; }
We’ll include the value object validation in the Customer validation:
protected override void Validate() { if (string.IsNullOrEmpty(Name)) { AddBrokenRule(CustomerBusinessRule.CustomerNameRequired); } CustomerAddress.ThrowExceptionIfInvalid(); }
As the Address value object is a property of the Customer entity it’s practical to include its validation within this Validate method. The Customer object doesn’t need to concern itself with the exact rules of an Address object. Customer effectively tells Address to go and validate itself.
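As another illustration of my own, an Address with a missing City now surfaces during the Customer's validation as an exception rather than a broken rule:

```csharp
Customer customer = new Customer()
{
    Id = 1,
    Name = "John",
    CustomerAddress = new Address() { AddressLine1 = "1 Main St", City = "" }
};

try
{
    // Validate() reaches CustomerAddress.ThrowExceptionIfInvalid()
    customer.GetBrokenRules();
}
catch (ValueObjectIsInvalidException ex)
{
    Console.WriteLine(ex.Message);   // "An address must have a city."
}
```

Note the design choice this exposes: entity rules are collected and reported, while value object rules throw — a difference you may want to unify, for example by letting ValueObjectBase expose its own GetBrokenRules().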
We’ll stop building our domain layer now. We could add several more domain objects but let’s keep things as simple as possible so that you will be able to view the whole system without having to track too many threads. We now have an example of an entity, a value object and a couple of basic validation rules. We have missed how aggregate roots play a role in code but we’ll come back to that in the next post. You probably want to see how the repository, service and UI layers can be wired together.
We’ll start building the repository layer in the next post.
View the list of posts on Architecture and Patterns here.
Thank you for sharing this series,really help me a lot.
You’re using a list of business rules to keep track of the failing business rules which means your entity can have an invalid state at a certain time.
Wouldn’t you prefer preventing an invalid state of an entity at all time and throwing exceptions when properties are set to values which result in an invalid entity?
Hello Stefaan,
The Validate() method will make sure that invalid objects cannot be persisted which I think is the main purpose of business rules. The customer object will be in an invalid state for a very short time if incomplete parameters are provided by the client – between the creation and validation. I don’t think it does any harm.
In case you need to pass around the object to other domains or domain services then it can be desirable not to let an invalid object fly around:
public Customer(string name)
{
if name is null then throw exception;
}
//Andras
Ok, but why would you use business rules instead of enforcing validity on your entities at all time. I don’t seem to find any pros.
Isn’t it the responsiblity of the entity to be persistable at all time. Is supose the Validate method has to be called from you repo’s before persisting thus leaving the possibility open where it wouldn’t be called?
Stefaan, yes I agree that domain objects should be persistable all the time, but .. practice and experience have shown that you organize your business logic flows in such way that you have some kind of ‘checkpoints’ where you validate (and persist) domain models. There you fire your validation. Also, you cannot always have full objects – for example you may fill your address ‘value’ objects in different steps in some workflow.
@Andras, nice series, congrats to you for studious work.
Andras, this series is what I have been looking for for a long time now! Thanks for bringing it!
I regard to the valueobject throwing an exception on validationerror I wonder why you chose this solution. Wouldn’t it be more natural to merge the businessrules of the root and all aggregates into one big list of validationerrors for the client to interpret?
Jan, I see what you mean.
Yes, it probably makes sense to present a list of the broken rules of the value objects as well that belong to the Entity. ValueObjectBase could also have a GetBrokenRules() method to return all broken rules just like the abstract EntityBase class.
//Andras
Andras, what is your take on handling additions, changes and deletions from a list of related aggregates below the root (e.g. a list of customer preferences); how should this work in regard to the repository? Is it maintaining an internal list of the changes which is used during the persistence act (if so then how?), or should one just wipe the existing list in the datastore and create the entities again?
Jan, check out the posts on SOA I mentioned in my previous reply to you. You’ll see an example there that sounds like the one you’re after in your question: handling a list of product purchases and product reservations. I took the “wipe out all” approach in the repository. It’s too cumbersome to check each product reservation line by line when there can be thousands of them, so they are first deleted and re-inserted.
//Andras
Andras,
you choose to implement the validation of the business rules with the Validate method shown below.
protected override void Validate()
{
if (string.IsNullOrEmpty(City))
{
AddBrokenRule(ValueObjectBusinessRule.CityInAddressRequired);
}
}
How would you distinguish the context (use case) in this validate methods.
For example:
A first use case forces the customer to have an address in the usa.
A second use case forces the customer to have AddressLine1 and AddressLine2 set.
A third use case force the customer to have a postal code in sweden.
…. and so on and so on.
There are hundreds of case dependent business rules formulated by the customers.
How would you solve this issue?
Would you use a parameterized Validate method like this
protected override void Validate(useCase)
{
if (useCase == useCase1)
{
AddBrokenRule(…);
} else if (useCase == useCase2)
{
AddBrokenRule(…);
} else if (useCase == useCase3)
{
AddBrokenRule(…);
}
}
or some other approach?
Hi Johannes,
Having a 100 if-else statement doesn’t sound like a good idea in general. It makes testing and code maintenance a nightmare. I think those use cases need to be turned into proper objects that implement an abstraction. Then each customer must provide an implementation of that rule. The validate method then can simply call on the implemented rule which will contain the use-case specific rules.
That way you can have a separate class for each use case that can be tested in isolation. You may want to read through the series on SOLID, especially the post on ‘L’ where a switch statement is replaced by objects that stand on their own.
//Andras
Hi Andras,
You stated:
“I think those use cases need to be turned into proper objects that implement an abstraction. Then each customer must provide an implementation of that rule.”
Please provide a concrete example because I have lots of ideas how to implement your suggestion but not all of them would be the intended solution.
A first idea can be
class CustomerUserCase1 : Customer
class CustomerUserCase2 : Customer
class CustomerUserCase3 : Customer
which would work but leads to 100 classes (no good idead)
A second idea can be
class CustomerCreation {
getCustomerForUserCase1() {
c = new Customer()
c.AddBrokenRules( all rules for use case 1)
return c
}
getCustomerForUserCase2() {
c = new Customer()
c.AddBrokenRules( all rules for use case 2)
return c
}
}
which would lead to 100 methods. AddBrokenRules must be public. Every objection creation of a customer must use CustomerCreation, and so on.
Thanks,
Johannes
Hi Johannes,
I didn’t mean it quite that way. The goal is to elevate your rules to proper objects and forget 100 if-else cases and methods. If you have complicated rules by country then I believe you’ll have to create as many classes to represent those rules. I don’t mean that there should be 100 different forms of Customer objects – FrenchCustomer, GermanCustomer etc. – that would be baaaaad. Instead, concentrate on the rules around the customer. That way you can test each rule independently and group them using the decorator pattern if you can group common rules together.
Let’s take an example similar to yours. I’ll keep it simple. Imagine your customers can order your products from 3 countries: England, France and Germany. Every customer has a first name, last name, email and an age. An overall rule is that every customer must have a first name regardless of the country. Furthermore:
A simplified scenario might consist of the following elements. All countries implement the following interface:
Here are the three concrete countries:
All country specific rules will derive from this base class:
The country specific rules are the following:
The country where the customer will be located is only known during runtime so the selection of the right country rule calls for a factory. Let’s take the simplest approach, i.e. a static factory:
So we maintain the list of implementing rules in the factory and just return the correct one based on the country code.
The customer must belong to a country. There are different ways to inject the country into the customer. Here I’ll take the simplest approach: constructor injection. The country specific rule will be selected using the factory. Here’s the Customer class:
Test the code with:
…which will yield 2 broken rules.
If you have 100 different countries then you’ll need 100 [something] anyway, where a 100 if-else statements and a 100 methods in some large class are out of the question. At least I would not go down that path. I remember a similar case, a java project, where income tax forms had to be validated and the rules were country specific. There was one class per country at the end as that was the most suitable OOP solution out there.
There are some other ways to inject the correct country into the customer, or organise the countries into a container and make them available by e.g. Countries.England – like Colors.Red in .NET graphics -, or using an abstract factory instead of a static one etc. The main point with my example is that those rules are recognised as “important” elements which should be organised into classes.
Hope this helps,
Andras
Hi Andras,
thanks for your detailed explanation. That was exactly what I was looking for.
Johannes
Great tutorial and great series. One question, quite naive probably, but why is EntityBase and ValueObjectBase classes located in the Infrastructure Project (the one holding cross-cutting concerns)? After all aren’t those classes relevant only to the domain project? Are they going to be used anywhere else?
It’s not a naive question at all. The motivation to put them there is that the infrastructure layer can hold any classes and interfaces that are used throughout all your applications. The infra layer is meant to be reused and not dedicated to a single solution. Ideally if a company has multiple applications then they will all follow the same structure with domains, repos, services etc. The exact implementations will of course differ but if a developer working on project A has to suddenly work on project B then a similar structure will make it easier to get going. A common rule for all domain layers can be that they implement the same IAggregateRoot interface and derive from the same EntityBase superclass. Hence the developer will immediately feel at home. You’ll see similar elements in the other posts of the DDD series: e.g. IRepository is also part of the infra layer. It is normally only used in the Repository layer. It still makes sense, however, to put it in the infra layer so that ALL repository layers across ALL projects of the company will use that interface. This gives a degree of uniformity to the projects.
//Andras
Hi Andras,
One of my business rules is check if the username is unique.. So I need make a database check, How can I do this on Domain Layer ?
Hi Israel,
You don’t do that in the domain layer directly. Not sure how long you’ve got with the DDD series but this is one way:
– extend IUserRepository in the Domain layer with a method that retrieves a User based on a user name
– implement IUserRepository in the concrete repository layer
– IUserService will then call upon IUserRepository to get a user with a user name
– if that’s null then the user doesn’t exist otherwise throw an exception
//Andras
sorry, I think I didn’t express myself well..
I know we need use the Repositories and Services, but I did not know how to do this.
But now is ok, I got it, Thanks for the answer and for the DDD series.
First of all, thank you for posting.
I have agreed with every word in the intro parts, until I got to part 3.
1. Regarding the Entity base class:
1.1. The Id must be readonly as a class member and as property without setter.
1.2. If you have some entities with Id generated after its creation, clearly you cannot use the same base for both.
1.3. It is the responsibility of the Identity to decide whether it equals to other or not. The entity must not do that, clearly, it must not decide that Identity could be converted and compared as string.
2. Validation – you have separated the data and the logic instead of keeping them together. The validation must happen the very moment a client tries to modify (call method, set property) an object. What if a client doesn’t call the validation or ignores its result?
In case you allow an object to be in invalid state (which btw could be persisted for later processing), again, you cannot use common Entity base class for both.
Common Entity base class could work in some cases, but it’s not the first thing to start with.
The model should evolve.
I’d like to say thank you so much for this great articles. As a junior developer, this is like a course for me.
Hi Burak, I’m glad you find it useful.
//Andras
Hi Andras,
Thanks for the great presentation on DDD.
Actually I know that domain should not be anemic which means we should try to have our business rules in domain. But when it comes to DDD is it true to assume that in two condition we need push logic’s to domain services, first when some business rules relate to other object and second when it needs DataBase operation for example check if username is duplicate.
Anyway thanks again for all articles in donetcodr.
Hi Behnam,
1. “some business rules relate to other object”:
Yes, that’s certainly an option. You can also create specialised domain objects if it makes sense for your business. You can check out the series on SOA starting here which shows how the Product domain can be incorporated into the ProductPurchase and ProductReservation objects in part 2.
In practice the rule is probably more flexible, i.e. you don’t always need to create a domain service just because a domain, like Product needs to check a property on another domain, like Customer.
2. “when it needs DataBase operation for example check if username is duplicate”
You can put the actual uniqueness check in the domain, e.g. through a method that accepts a list of customer names. The actual retrieval of the customer names must of course happen outside in some repository but the name validation itself can be put within the domain.
//Andras
Hi Andras,
Great, I will have a look at the SOA articles. Thanks for your help.
Hello, are the classes and logic to acquire the application settings and pass them into the application for consumption, part of the infrastructure layer?
Hi again,
The interface and its implementation(s) are part of the Infrastructure layer.
“pass them into the application for consumption”: do you mean how the settings are passed into the consuming classes? The most straightforward way is by constructor injection, i.e. you pass the settings into the consuming class through a caller.
//Andras
Pingback: Architecture and patterns | Michael's Excerpts
I am refactoring some of my code to strongly implement ddd, and found your articles. Thanks for them, great work.
I am wondering if is a good practice raise exceptions for simple validations. Wouldn’t be better to create some kind of “response” objects to handle these expected exceptions instead of using exceptions to control the flow of validation?
Validation with exceptions is actually quite common, especially when checking for null values in a constructor. However, I can’t tell you whether it’s a “good” practice or not. Another user commented about collecting all the validation exceptions into a single collection of validation errors so that we can inform the user in the UI about all of them at the same time. That sounds like a good alternative.
//Andras
Hi Andras, this is the best DDD tutorial I’ve found on the web, nice and easy to understand and follow.
One question though.
You said in the 2nd post of the series, that Entities should be simple POCO no business rules attached to them. My confusion is because I see that you are attaching an abstract Validation method to BaseEntitie, thus any Entity which will derive from BaseEntity will have to provide implementation for it. Shouldn’t the Validations be in a different part?
I don’t know maybe, use FluentValidation API and validate the Entity in another part let’s say a project called Domain.Entity.Validatiln ? so that the Entity would be as you stated as clean as possible?
Hi, this is what I stated in part 2: ” Entities should ideally remain very clean POCO objects without any trace of technology-specific code”. Domain specific logic, including validation should be contained within domain objects as much as possible, not anywhere else. Technology-specific code is different from business rules. Technology specific code can be persistence with EntityFramework and the like. //Andras
Hi Andras,
Appreciate you provide the such great post about DDD, I’ve learned a lot stuff from it.
I have a question, why should use ThrowExceptionIfInvalid(…) to verify the state of Address?
How about if I make the Validate(…) of Address as public and invoke in Validate(…) of Customer? | https://dotnetcodr.com/2013/09/19/a-model-net-web-service-based-on-domain-driven-design-part-3-the-domain/?replytocom=84617 | CC-MAIN-2020-24 | refinedweb | 4,106 | 54.63 |
:
6. If you use <xsl:attribute-set> elements, you will be happy to see this one:
These are autocompletions I'm currently aware of. There might be more - it's currently completely undocumented and I probably the first one writing about this feature. For example key names are collected too, but I haven't found where they are used. If you happen to discover another XSLT autocompletion, report it in comments section please.
And finally how to turn this awesomeness on:
Yes, regedit. Create String value called "XsltIntellisense" under "HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\XmlEditor" key. "True"/"False" are valid values.
If you are too lazy for editing registry manually, here is XsltIntellisense.reg file you can run (but rename it to .reg before).
If you don't want to mess with registry, wait till tomorrow. I'm going to release IronXSLT v0.3, which will turn XSLT intellisense on for you while installing.
Enjoy!
TrackBack URL:
Does this wok with custom XSD .
I have custom Schema., It would be nice if VS could sense the path
@Martin Kool
Thanks for the registry-hint
CACuzcatlan, I think you need VS 2008 SP1 for this feature.
I haven't been able to get this working. I'm using Visual Studio Team System 2008 on a Vista 64 machine. I added the key, restarted VS, even restarted the computer, but nothing works. Tried it with both XSL and XSLT extensions. Do I have to do anything after updating the registry values?
One more hidden XSLT feature VS 2008 SP1 has is XSLT Hierarchy.
It is controlled by setting XsltImportTree to True in HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0\XmlEditor.
It adds imports/includes in form of tree inside the 'Solution Explorer' window.
This is the only way to open 'Build-in Rules.xsl' autogenerated file.
One more topic also to look at is insert snippet and surround snippet for xslt
works so good. If you write about this, will help lots of xslt and xml developers.
This feature seems to work only only if you have no xsl:includes. If you have xsl:includes it will autocomplete only the templates that are in your includes and ignore any in your style sheet.
Hi,
Is there any difference using keys in xslt 2.
As for me the keys written work properly in xslt 1 but doesn't work in xslt2.
Let me know if there is any change.
Thanks,
It actually does work for the XSL extension, even for any extension that you choose as long as your contents is an xsl stylesheet.
For users that have Visual Studio Web Developer (the free express edition), this trick works too but with a different regkey:
Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Microsoft\VWDExpress\9.0\XmlEditor]
"XsltIntellisense"="True"
Except of the namespace related ones XMLSpy has those intellisense features too for XSLT. Even more the modes are filtered and you have all of the template names, parameters and modes available in a nice docking outline window also.
This only works if your file extension is XSLT and not XSL.
This page contains a single entry by Oleg Tkachenko published on May 3, 2008 8:38 AM.
blogs.asia was the previous entry in this blog.
IronXSLT v0.3 released is the next entry in this blog.
Find recent content on the main index or look in the archives to find all content. | http://www.tkachenko.com/blog/archives/000740.html | CC-MAIN-2017-34 | refinedweb | 571 | 66.84 |
/* Invoke tmpfile, but avoid some glitches., based on ideas from Paul Eggert. */ #ifdef HAVE_CONFIG_H # include <config.h> #endif #include "stdio-safer.h" #include <errno.h> #include <unistd.h> #include "unistd-safer.h" #include "binary-io.h" #ifndef STDERR_FILENO # define STDERR_FILENO 2 #endif /* Like tmpfile, but do not return stdin, stdout, or stderr. Remember that tmpfile can leave files behind if your program calls _exit, so this function should not be mixed with the close_stdout module. */ FILE * tmpfile_safer (void) { FILE *fp = tmpfile (); if (fp) { int fd = fileno (fp); if (0 <= fd && fd <= STDERR_FILENO) { int f = dup_safer (fd); if (f < 0) { int e = errno; fclose (fp); errno = e; return NULL; } /* Keep the temporary file in binary mode, on platforms where that matters. */ if (fclose (fp) != 0 || ! (fp = fdopen (f, O_BINARY ? "wb+" : "w+"))) { int e = errno; close (f); errno = e; return NULL; } } } return fp; } | http://opensource.apple.com/source/gm4/gm4-13/m4/lib/tmpfile-safer.c | CC-MAIN-2014-10 | refinedweb | 141 | 78.85 |
In this article I describe my two-months summer internship project at Quarkslab: obfuscating Java bytecode using the [Epona] Code Obfuscator. This article explains our approach, its advantages and limitations.
Introduction
Languages like Java, .Net or OCaml are typically compiled to platform independent bytecode before execution. The bytecode is then interpreted and/or compiled to target dependent machine code at runtime. The Java Bytecode is run by the Java Virtual Machine (JVM) and is generated from Java or other languages targeting it, like Scala or the more recent Kotlin language.
A typical Java program bytecode is not specifically optimized, this job is better left to the JVM. Because of this, many powerful free Java bytecode decompilation tools can be found online. This means that it is really easy to decompile any non obfuscated Java program. Lots of free and commercial tools also exist to obfuscate Java code, in order to make this decompilation process harder for reverse engineers.
[Epona] provides a C/C++ compiler with opt-in obfuscation features developed by Quarkslab, mainly targeting C and C++. One of our wishes is to use the Epona obfuscator on Java, allowing us to implement, maintain, debug and improve the obfuscation techniques without doubling the necessary work.
As code obfuscations in Epona are implemented as transformations over the [LLVM] Intermediate Representation [1], the goal of this internship was to try whether going from Java bytecode to LLVM IR back-and-forth was a viable solution or not, and to identify problems that could arise.
We will start this blog post by exploring existing solutions. We will then explain how we are going from Java bytecode to LLVM IR and back to Java bytecode. Finally, we will take a look at some optimized and obfuscated examples.
Existing solutions
Various projects exist around the idea of using both Java and LLVM, mainly falling into two categories:
- compiling Java to native code thanks to LLVM
- running LLVM IR within a JVM
In this section, we describe these projects and why they don't completely fit with our goal.
Java bytecode to LLVM IR
- VMKit:
- The LLVM Java frontend:
- JLang:
These solutions have been developed to run Java programs natively (without going back to Java bytecode), thus discarding valuable information like Java-specific try/catch blocks.
They are also incomplete, because running Java natively requires a whole translation of all the standard Java libraries. Note that we don't have such problem in our project, as we intend to transform back the LLVM IR to Java bytecode.
It is also interesting to notice that the [Zing] JVM uses LLVM via [Falcon] to compile and optimize the most frequently used pieces of code at runtime. The paper "Obfuscating Java Programs by Translating Selected Portions of Bytecode to Native Libraries" by Pizzolotto and Ceccato (2019) [2] is the one closest to what we try to achieve. Their goal is to transform the Java code to C code, while using JNI to support more complex behavior. This approach can be seen as some kind of [Cython] for Java. Note that the sources of the tool aren't available.
LLVM IR to Java bytecode
- LLJVM:
- Sulong:
- The Proteus Compile System:
The goal of these solutions is to run the LLVM bitcode within a JVM, emulating all the missing functionalities, like raw memory management. If we had reused these projects, we would have ended up with a virtual machine (the LLVM one) within a virtual machine (the JVM). We would also have had to take care of the standard Java libraries problem.
The conversion step
The Java bytecode is a high level bytecode, allowing the distribution of Java programs intended to be run on Java Virtual Machines (JVM). The bytecode cannot manipulate raw memory and is verified before being run to ensure its validity.
The LLVM IR is the Java bytecode counterpart for the LLVM compiler framework. It can be considered as a low level bytecode intended for binaries generation.
Mapping operations and data from Java bytecode to LLVM IR is not always direct, as there are different trade-offs that must be taken into account:
- A very detailed translation from Java bytecode to LLVM IR may give more information to the obfuscator, but it may be rather difficult to go back from the LLVM version to the Java bytecode;
- On the other hand, a very high level translation would yield a useless LLVM IR in terms of obfuscation, as its components are too abstract to be obfuscated by low level obfuscations.
The challenge is, thus, to find the right level of abstraction. In the following sections we describe how the mapping from Java Bytecode to LLVM IR was performed, beginning by the mapping of Java bytecode scalar and object types to LLVM IR types.
Scalar types
There are two types of numbers in the LLVM IR: integer and floating point numbers.
```
iN       // N being the number of bits of the integer
i1       // A boolean
i8       // An octet
i32      // A 32-bit integer
i1942652 // You guessed it
```
The type does not specify if the integer is signed or not, most of integer instructions exist in the signed and unsigned flavor.
```
half   // 16-bit floating-point value
float  // 32-bit floating-point value
double // 64-bit floating-point value
```
We will use the following type mapping:
The boolean type is stored using 8 bits, similarly to how the Java bytecode and the JVM work.
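A sketch of this scalar mapping, modeled on the JNI types (the `j_*` typedef names are ours, and the exact table is our reconstruction; only the bit widths follow the `jni.h` conventions mentioned below):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed Java scalar -> LLVM type mapping, reconstructed from the JNI types.
 * The typedef names are illustrative, not taken from the project. */
typedef int8_t   j_boolean; /* boolean -> i8 (stored on 8 bits, as stated)   */
typedef int8_t   j_byte;    /* byte    -> i8                                 */
typedef uint16_t j_char;    /* char    -> i16 (LLVM integers carry no sign)  */
typedef int16_t  j_short;   /* short   -> i16                                */
typedef int32_t  j_int;     /* int     -> i32                                */
typedef int64_t  j_long;    /* long    -> i64                                */
typedef float    j_float;   /* float   -> float                              */
typedef double   j_double;  /* double  -> double                             */
```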
Objects
We consider objects as opaque pointers as we don't know (and don't want to know) what's hidden inside (that is JVM-dependent).
Note that these mappings, for objects and scalars, are inspired by those found in the [jni.h] file, the file used to write native code using the Java Native Interface (JNI).
Runtime abstraction level
We have to find the right amount of abstraction for the translated LLVM bitcode. We consider the Java bytecode to have a high abstraction level and LLVM IR to have a low abstraction level, closer to machine code.
The closer we are to machine code, the farther we are from Java bytecode and the more work we will have to do for the Java to LLVM conversion. Moreover, more code could be modified by the optimizer and obfuscator, and we could end-up with an LLVM IR whose semantic would be very hard or impossible to convert back to Java bytecode without some emulation magic. As we previously stated, we want to obfuscate the Java bytecode and try as much as possible not to end up with an LLVM interpreter within the JVM.
Having a high abstraction level, while greatly simplifying the conversion, is useless if the optimizer and obfuscator can't understand what the code is doing. We might as well put the whole Java bytecode in metadata.
For example, LLVM and Java arrays are very different. A Java array behaves much like an object: it is accessed through a reference, and its destruction is handled by the garbage collector. It does not make sense to use LLVM arrays, so we added an abstraction. Arrays are now created and manipulated with methods taking the array pointer as an argument, much like the way they work in Java bytecode.
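To build an intuition for this abstraction, here is a purely illustrative C model of the helper functions. In the real pipeline these functions stay abstract and are converted back to array bytecode; only the names and the `10` element-type opcode come from the examples in this post, the bodies are our guess:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative-only model: in the real pipeline these calls are turned back
 * into NEWARRAY/IASTORE/IALOAD bytecode. The "_10" suffix is the element-type
 * opcode (10 = int in the JVM's NEWARRAY instruction). */
int64_t *Java_fixed_array_create_10(int32_t length) {
    /* A real JVM array also carries its length and a type tag. */
    return (int64_t *)calloc((size_t)length, sizeof(int32_t));
}

void Java_fixed_array_setIntCellData(int64_t *array, int32_t index, int32_t value) {
    ((int32_t *)array)[index] = value;
}

int32_t Java_fixed_array_getIntCellData(int64_t *array, int32_t index) {
    return ((int32_t *)array)[index];
}
```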
Calls
We are using a variety of abstract methods to convert Java bytecode instructions that can't be directly translated to LLVM IR, like calls to other Java methods. These calls are converted using a specific convention: all the necessary information is encoded in the called function name, and the call parameters are the same as their bytecode counterparts. This makes it easy to convert back to bytecode while still allowing the optimizer and obfuscator to do their job.
For example a bytecode
INVOKESPECIAL instruction will be translated to
call void @"Java_@invokespecial@java/lang/Object@<init>@()V"(i64* %1).
Here calling the constructor of the target object
java.lang.Object without any argument (
i64* %1 is the pointer referencing this object)
We provide here a simple Java example, its corresponding Java bytecode and the generated LLVM IR:
```java
AtomicInteger ai = new AtomicInteger(4);
Math.pow(ai.get(), 2);
```
```
NEW Ljava/util/concurrent/atomic/AtomicInteger;
DUP
BIPUSH 4
INVOKESPECIAL java/util/concurrent/atomic/AtomicInteger.<init>:(I)V
INVOKEVIRTUAL java/util/concurrent/atomic/AtomicInteger.get:()I
I2D
LDC 2D
INVOKESTATIC java/lang/Math.pow:(DD)D
D2I
IRETURN
```
```llvm
%1 = call i64* @"Java_@new@java/util/concurrent/atomic/AtomicInteger"()
call void @"Java_@invokespecial@java/util/concurrent/atomic/AtomicInteger@<init>@(I)V"(i64* %1, i32 4)
%2 = call i32 @"Java_@invokevirtual@java/util/concurrent/atomic/AtomicInteger@get@()I"(i64* %1)
%3 = call double @"Java_@invokestatic@java/lang/Math@pow@(DD)D"(i32 %2, i32 2)
```
The same technique is also used for representing things like `this` (an `ALOAD 0` in non-static Java methods), for instance when instantiating a superclass:

```llvm
%1 = call i64* @Java_fixed_this()
call void @"Java_@invokespecial@com/quarkslab/java2llvm/testfiles/TestFile@<init>@()V"(i64* %1)
```
Calls are also extensively used for Java array manipulation:
```llvm
%1 = call i64* @Java_fixed_array_create_10(i32 1)
call void @Java_fixed_array_setIntCellData(i64* %1, i32 0, i32 5)
call i32 @Java_fixed_array_getIntCellData(i64* %1, i32 0)
```
With this abstraction, converting back to Java bytecode is natural, as most information required for the translation is available in the function's name.
For example, the array type is specified in the array creation function (the opcode `10` stands for integer) without having to find the first value assignment to deduce the array type from.
The drawback of this approach is that we suppose that all the users of these abstract functions will be calls, and that they won't end up, for instance, in an array of function pointers, which is something an obfuscator could generate. This means that we need to make the obfuscator aware of the special semantics of these functions.
Representation of the JVM stack
Inspired by this paper about translating bytecode to native libraries [2], we came up with a very simple way to convert the Java stack to LLVM registers. We began by writing functions to push elements onto and pop elements from the stack. The stack is a simple array defined with a given length (given in the bytecode) and with an index to track where we are. We then call these special functions each time we need to interact with the stack. These functions are inlined in the final LLVM IR, and the optimizations (Scalar Replacement Of Aggregates being the most important) completely remove the stack array.
The same technique is used for store and load operations, without the need for the index to keep track of where we are on the stack.
Note: The functions are generated at build time from C, simply because it is easier to write and to understand than the LLVM IR.
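As an illustration, the push/pop helpers could look like the following C sketch. Only the names and signatures are taken from the generated IR shown in the `IADD` example below; the bodies are our assumption:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the stack helpers: the stack is a plain i64 array
 * plus an index tracking the top, as described above. */
void Java_pushInt(int64_t *stack, int64_t *stackIndex, int32_t value) {
    stack[*stackIndex] = (int64_t)value;
    (*stackIndex)++;
}

int32_t Java_popInt(int64_t *stack, int64_t *stackIndex) {
    (*stackIndex)--;
    return (int32_t)stack[*stackIndex];
}
```

Once such calls are inlined, SROA and the other scalar optimizations can promote the array cells to SSA registers, which is why no trace of the stack remains in the final IR.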
Java to LLVM
Converting from Java bytecode to LLVM IR is the easy part. Most of the bytecode instructions can be directly translated into their LLVM IR counterparts, and the stack machine is easy to emulate thanks to a simple array and index, as seen previously.
We are converting each class file as an individual LLVM module.
Because the JVM is a stack machine, there are a lot of stack-based instructions. For example, the `IADD` bytecode instruction pops two 32-bit integers from the stack, adds them together and pushes the result on the stack. This instruction is converted to:
```llvm
%1 = call i32 @Java_popInt(i64* %stackPointer, i64* %stackIndex)
%2 = call i32 @Java_popInt(i64* %stackPointer, i64* %stackIndex)
%3 = add i32 %1, %2
call void @Java_pushInt(i64* %stackPointer, i64* %stackIndex, i32 %3)
```
The `stackPointer` and `stackIndex` variables are values allocated at the beginning of the translated LLVM function. The maximum size of the stack is given in the original Java class file. Here is an example:
```llvm
%stack = alloca [2 x i64]
%stackIndex = alloca i64
store i64 0, i64* %stackIndex
%stackPointer = getelementptr inbounds [2 x i64], [2 x i64]* %stack, i32 0, i32 0
%locals = alloca [3 x i64]
%localsPointer = getelementptr inbounds [3 x i64], [3 x i64]* %locals, i32 0, i32 0
```
As stated above, we are emulating a stack that will be removed by later optimizations of the LLVM bitcode.
For example, after conversion the following method:
```java
public int test(int a1, int a2) {
    a1 = -a1;
    a1 = a1 << 1;
    a2 |= 5;
    a1 &= 15;
    a1 = ~a1 ^ 20;
    return a1 + a2;
}
```
gives out 105 LLVM instructions, which are then optimized to the following LLVM IR:
```llvm
define i32 @"test@(II)I@1"(i32, i32) local_unnamed_addr {
lb_434176574_-1874797944:
  %"7" = shl i32 %0, 1
  %"12" = sub i32 0, %"7"
  %"27" = or i32 %1, 5
  %"32" = and i32 %"12", 14
  %"40" = xor i32 %"32", -21
  %"56" = add nsw i32 %"40", %"27"
  ret i32 %"56"
}
```
Here you can see that the expression with a NOT followed by a XOR with 20 has been replaced by a XOR with -21, which is ~20 (on 32 bits, signed representation).
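We can double-check the rewrite the optimizer performed, namely that `~x ^ 20 == x ^ -21` on 32-bit two's complement integers, by transcribing both versions of the method into C and comparing them (the unsigned casts keep the left shift and negation well-defined in C; Java's arithmetic already wraps):

```c
#include <assert.h>
#include <stdint.h>

/* Semantics of the original Java method, transcribed statement by statement. */
static int32_t test_original(int32_t a1, int32_t a2) {
    a1 = -a1;
    a1 = (int32_t)((uint32_t)a1 << 1);
    a2 |= 5;
    a1 &= 15;
    a1 = ~a1 ^ 20;
    return a1 + a2;
}

/* Semantics of the optimized IR, transcribed instruction by instruction. */
static int32_t test_optimized(int32_t a1, int32_t a2) {
    int32_t v7  = (int32_t)((uint32_t)a1 << 1); /* shl i32 %0, 1            */
    int32_t v12 = (int32_t)(0u - (uint32_t)v7); /* sub i32 0, %7            */
    int32_t v27 = a2 | 5;                       /* or i32 %1, 5             */
    int32_t v32 = v12 & 14;                     /* and 14: low bit is 0 anyway */
    int32_t v40 = v32 ^ -21;                    /* xor -21, i.e. ~20        */
    return v40 + v27;                           /* add nsw                  */
}
```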
Control flow
The control flow is also easily converted and optimized:
```java
public int test(int a1, int a2) {
    return a1 > a2 ? a2 : a1;
}
```
Gives:
```llvm
define i32 @"test@(II)I@1"(i32, i32) local_unnamed_addr {
lb_529116035_608112264:
  %"3" = tail call i64* @Java_fixed_this()
  %"9" = icmp sgt i32 %0, %1
  %. = select i1 %"9", i32 %1, i32 %0
  ret i32 %.
}
```
We can see that the implicit if instruction and the two possible result basic blocks have been combined into one using a select instruction.
The following example contains a loop:
public int test(int a1, int a2) { int j = 0; for (int i = 0; i < a1; i++) { j += a2; } return j; }
In the generated LLVM IR, it is optimized to a simple multiplication with a select for negative values:
define i32 @"test@(II)I@1"(i32, i32) local_unnamed_addr { lb_1433867275_-465532616: %"3" = tail call i64* @Java_fixed_this() %"1121" = icmp sgt i32 %0, 0 %2 = mul i32 %1, %0 %spec.select = select i1 %"1121", i32 %2, i32 0 ret i32 %spec.select }
The next one:
public int test(int a1, int a2) { for (int i = 0; i < a2; i++) { System.out.println(); } return 0; }
gives out:
define i32 @"test@(II)I@1"(i32, i32) local_unnamed_addr { lb_start: %"17.reg2mem" = alloca i32 %"3" = tail call i64* @Java_fixed_this() %"1016" = icmp sgt i32 %1, 0 br i1 %"1016", label %lb_preloop, label %lb_return lb_preloop: store i32 0, i32* %"17.reg2mem" br label %lb_loop lb_loop: %locals.sroa.4.017.reload = load i32, i32* %"17.reg2mem" %"11" = tail call i64* @"Java_@getstatic@java/lang/System@out@Ljava/io/PrintStream;"() tail call void @"Java_@invokevirtual@java/io/PrintStream@println@()V"(i64* %"11") %"17" = add nuw nsw i32 %locals.sroa.4.017.reload, 1 store i32 %"17", i32* %"17.reg2mem" %exitcond = icmp eq i32 %"17", %1 br i1 %exitcond, label %lb_return, label %lb_loop lb_return: ret i32 0 }
This loop stays as a loop because of the opaque calls to external Java functions; it would have been unrolled if that were possible.
Our last control flow example contains a switch:
public int test(int a1, int a2) { switch (a1) { case 0: return 1; case 1: return 5; case 20: return 3; case 15: case 17: return 666; } return 0; }
resulting in the following IR:
define i32 @"test@(II)I@1"(i32, i32) local_unnamed_addr { lb_791885625_-801122440: %merge.reg2mem = alloca i32 %"3" = tail call i64* @Java_fixed_this() switch i32 %0, label %lb_2054881392_-801122440 [ i32 0, label %lb_791885625_-801122440.lb_1887400018_-801122440_crit_edge i32 1, label %lb_2001112025_-801122440 i32 15, label %lb_791885625_-801122440.lb_1288141870_-801122440_crit_edge i32 17, label %lb_791885625_-801122440.lb_1288141870_-801122440_crit_edge12 i32 20, label %lb_314265080_-801122440 ] ...
Fields
A field is a variable inside a class. It can be accessed by any method in the class and sometimes from outside the class.
We translate them as LLVM global variables. When translated back to Java bytecode, these globals will be converted back to fields.
This code:
private int test; public int test(int a1, int a2) { test = 9; test += a1; test *= a2; return test; } public int getTest() { return test; }
Gives out:
@"Java_@test@2@I" = external local_unnamed_addr global i32 define i32 @"test@(II)I@1"(i32, i32) local_unnamed_addr { lb_48612937_-532753336: %"16" = add i32 %0, 9 %"25" = mul i32 %"16", %1 store i32 %"25", i32* @"Java_@test@2@I", align 4 ; A store to the variable representing the test field ret i32 %"25" } define i32 @"getTest@()I@1"() local_unnamed_addr { lb_1618212626_-532735736: %"6" = load i32, i32* @"Java_@test@2@I", align 4 ; A load from the variable representing the test field ret i32 %"6" }
Exceptions
Exceptions, on the contrary, are difficult to translate to LLVM IR. There is an exception system in LLVM with throw, try/catch, cleanup pads, but it doesn't behave like the Java exception system and a lot of information that would need to be forwarded back to Java would have been lost. We need to keep in mind that LLVM is not intended to be converted to Java Bytecode.
We needed to implement our own exception system. This was done with functions representing try/catch blocks to be sure that the optimizations or obfuscations wouldn't mess them up. The exception type is encoded in metadata.
For example the following code:
public void test() { try { ...; } catch (RuntimeException e) { e.printStackTrace(); } }
is translated to the following IR, where we can see two added functions, one for the try block and one for the catch block:
define i32 @"test@()V@1"() { ... %"6" = call i1 @Java_fixed_exception_try_1433867275(i64* %localsPointer) br i1 %"6", label %lb_itsFine, label %lb_haltAndCatchFire lb_haltAndCatchFire: call void @Java_fixed_exception_catch_1433867275(i64* %localsPointer) br label %lb_itsFine lb_itsFine: ret void } define i1 @Java_fixed_exception_try_1433867275(i64*) { ; The function for the try part of the try/catch %9 = tail call i1 @Java_fixed_exception_result_1433867275() ret i1 %9 } define void @Java_fixed_exception_catch_1433867275(i64*) { : The function for the catch part of the try/catch %"13" = tail call i64* @Java_fixed_exception_push() tail call void @"Java_@invokevirtual@java/lang/RuntimeException@printStackTrace@()V"(i64* %"13") ret void }
So as long as the behavior of each of these functions is the same, at a function level, before and after the transformations, everything is fine. In particular, we prevent the inlining of these functions, so that we can easily convert this scheme back to Java bytecode.
This is the simplest example. This system works fine for simple exceptions, exceptions with multiple catch clauses, and exceptions in try or catch blocks. Problems arise when there is a jump out of the try/catch function, possibly into the middle of another one. This had to be accommodated with switches, and it made the exception conversion system quite complex.
On the other hand, translating exceptions back to Java bytecode is simple because with well defined names, finding the matching catch function for a try function is trivial.
LLVM to Java
Converting LLVM IR back to Java bytecode is more complex. Ideally, we should be able to convert back any code, but as previously stated, Java bytecode is stricter and many LLVM instructions are impossible to perform in Java without embedding an LLVM IR interpreter (as done by some LLVM to Java projects).
That being said, the LLVM IR code we generate from the Java bytecode may be further transformed by subsequent optimization or obfuscation steps. So, as we need to stick to a subset of what the LLVM IR can achieve, we may have to abort the compilation process if we end up on something we can't map back to Java.
Conversion of LLVM instructions
Here, we are going to speak about problems that can arise with specific LLVM instructions.
The first easy one is the phi instruction, classically used in an SSA form. We can easily use the already existing -reg2mem pass to transform these into memory loads from and stores to an alloca'd variable.
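For instance, a hand-written sketch (not actual compiler output) of what -reg2mem does to a phi node: the SSA merge is replaced by an alloca'd slot, a store in each predecessor and a load at the merge point:

```llvm
; before -reg2mem: the classic SSA merge
; merge:
;   %x = phi i32 [ %a, %then ], [ %b, %else ]

; after -reg2mem (sketch)
entry:
  %x.slot = alloca i32
  br i1 %cond, label %then, label %else
then:
  store i32 %a, i32* %x.slot
  br label %merge
else:
  store i32 %b, i32* %x.slot
  br label %merge
merge:
  %x = load i32, i32* %x.slot
```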
The second instruction is the [GEP] instruction. Its job is to compute pointer offsets to access array or structure elements. To deal with GEP and pointers in general, we needed a points-to analysis to convert GEP instructions, figuring out if we need to access an array, an object, a local value or anything else. We need to be able to resolve all the pointed values when we are converting to Java bytecode otherwise we would need emulation at runtime. It generally works if the program is not too heavily obfuscated.
For other instructions, like select or operations on unsigned integers, we can translate them into multiple Java bytecode instructions, but we easily end up with five times as many instructions to do the same thing. When such instructions are not present, we usually end up with smaller bytecode.
Another problem to solve is that we need to go from an SSA form back to a stack-based machine.
Back to a stack machine
The JVM is a stack machine, whereas LLVM is working with an infinite amount of registers. Thankfully, locals exist in Java bytecode.
Locals are the way of storing information inside a function when there are some operations the stack can't do, because a needed element is not at the right spot or available on the stack at all. Values can be loaded from locals (e.g.
ILOAD 3 loads a 32 bit integer from the local at index 3) and put on the stack or stored from the stack to a local (e.g.
DSTORE 4 stores a 64 bit floating point value in the local at index 4). The type of a local can't change during the execution of a method. There are about 65k locals available (the index is encoded on 16 bits) and 64-bit values like long or double take two consecutive slots.
Here is an example, describing the algorithm we use. For each instruction, LLVM easily gives us a list of the used values. The order of these values can be important, depending on whether the operator is commutative or not. So, we need the values on the stack to fit our needs for the next instruction.
For example, consider the following stack with the following values, D being at the top of the stack:
A B C D
We need the values C, D and E to be at the top of the stack for the next instruction. The algorithm finds that C and D are already at the top of the stack and that we only have to add E. E is added at the top of the stack and the next instruction can be executed.
The stack now looks like this, with R being the result of the previous instruction:
A B R
R is duplicated and stored for later use.
Sometimes, elements on the stack are not in the right order, for example:
A C D B
As the top of the stack is not useful to us and might be used in the future, we have to add our values, ending up with:
A C D B C D E //Before instruction A C D B R //After instruction
This algorithm can create large stacks of unused or unusable elements (because they are "covered" by others) so we perform a cleanup at the end of the conversion. We begin by removing useless store instructions whose values are never used. We then remove all stack values that are never accessed.
This algorithm is simple but does not always produce optimal results. The number of loads and stores could be reduced if the stack were managed better. There are descriptions of non-trivial algorithms for conversion and optimization that could be worth implementing [3] [4] [5] .
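To make the suffix-matching step concrete, here is a toy Java sketch (illustrative only, with invented names — not the actual converter code) that computes how many of the needed values are already in place at the top of the stack:

```java
import java.util.Arrays;
import java.util.List;

public class StackScheduler {

    // The stack is modeled as a list whose last element is the top.
    // "needed" lists the values the next instruction requires, from the
    // deepest one to the topmost one. Returns how many of them are
    // already present as the top of the stack.
    static int reusableSuffix(List<String> stack, List<String> needed) {
        int best = 0;
        int max = Math.min(stack.size(), needed.size());
        for (int k = 1; k <= max; k++) {
            List<String> top = stack.subList(stack.size() - k, stack.size());
            if (top.equals(needed.subList(0, k))) {
                best = k;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Stack A B C D (D on top); the next instruction needs C, D and E.
        List<String> stack = Arrays.asList("A", "B", "C", "D");
        List<String> needed = Arrays.asList("C", "D", "E");
        int k = reusableSuffix(stack, needed);
        System.out.println(k);                                // 2
        System.out.println(needed.subList(k, needed.size())); // [E] must be pushed
    }
}
```

With the second example from the text (stack A C D B), no suffix matches, so all three values have to be pushed again.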
Examples
Using only the LLVM optimizer
Here are some examples of Java bytecode converted to LLVM IR, optimized by the LLVM optimizer and translated back to Java bytecode:
0: iload_1 /* a1 */ 1: ineg 2: istore_1 /* a1 */ 3: iload_1 /* a1 */ 4: iconst_1 5: ishl 6: istore_1 /* a1 */ 7: iload_2 /* a2 */ 8: iconst_1 9: ishr 10: istore_2 /* a2 */ 11: iload_2 /* a2 */ 12: iconst_3 13: irem 14: istore_2 /* a2 */ 15: iload_2 /* a2 */ 16: iconst_5 17: ior 18: istore_2 /* a2 */ 19: iload_1 /* a1 */ 20: bipush 15 22: iand 23: istore_1 /* a1 */ 24: iload_1 /* a1 */ 25: iconst_m1 26: ixor 27: bipush 20 29: ixor 30: istore_1 /* a1 */ 31: iload_1 /* a1 */ 32: iconst_3 33: idiv 34: istore_1 /* a1 */ 35: iload_1 /* a1 */ 36: bipush 10 38: imul 39: istore_1 /* a1 */ 40: iload_1 /* a1 */ 41: iload_2 /* a2 */ 42: iadd 43: ireturn
And the bytecode after the LLVM optimizations:
0: iload_1 1: ldc 1 3: ishl 4: dup 5: istore_3 6: ldc 0 8: iload_3 9: isub 10: dup 11: istore 5 13: iload_2 14: ldc 1 16: ishr 17: ldc 3 19: irem 20: ldc 5 22: ior 23: iload 5 25: ldc 14 27: iand 28: ldc -21 30: ixor 31: ldc 3 33: idiv 34: ldc 10 36: imul 37: iadd 38: ireturn
As we can see there are fewer instructions, from 40 down to 27, but some might have noticed that we are always using the LDC instruction, taking up a lot of bytes. We would have even less bytecode if we had simply transformed those into ICONST_1 or BIPUSH instructions.
We can also observe the effects of optimizations in the following example featuring a large
switch instruction:
public int test(final int a1, final int a2) { switch (a1) { case 0: { return 1; } case 1: { return 5; } case 2: { return 3; } case 3: { return 2; } case 4: { return 24; } case 5: { return 55; } case 6: { return 99; } case 7: { return 11; } case 8: { return 31; } default: { return 0; } } }
It is converted to an array:
public int[] LLVM_swItch_table_test__II_I_1 = new int[] { 1, 5, 3, 2, 24, 55, 99, 11, 31 }; public int test(final int n, final int n2) { if ((n & 0xFFFFFFFFL) >= 9) { return; } return this.LLVM_swItch_table_test__II_I_1[n]; }
Or this loop:
int j = 0; for (int i = 0; i < a1; ++i) { j += a2; } return j;
Which is converted to a multiplication:
return a1 > 0 ? a1 * a2 : 0;
Applying some obfuscation passes with Epona
The original goal of our project is to obfuscate Java bytecode with the Epona compiler. Let's try this on some examples!
The original code is provided below. Keep in mind that the following examples are decompiled using [procyon], which might produce slightly inaccurate results:
a1 = -a1; a1 <<= 1; a2 >>= 1; a2 %= 3; a2 |= 0x5; a1 &= 0xF; a1 = (~a1 ^ 0x14); a1 /= 3; a1 *= 10; return a1 + a2;
The following examples showcase obfuscation passes applied independently of each other.
Mixed Boolean Arithmetic:
final int n3 = n & 0x2; final int n4 = n3 * (n | 0x2) + (n & 0xFFFFFFFD) * (n3 ^ 0x2); final int n6; final int n5 = (n6 = (n2 >> 1) % 3) & 0x5; final int n7 = (n4 - 1 | 0xE) - n4; final int n9; final int n8 = (n9 = ((-4 - (n7 << 1) & 0xFFFFFFD6) + (n7 + 22)) / 3) & 0xA; final int n10 = n8 * (n9 | 0xA) + (n9 & 0xFFFFFFF5) * (n8 ^ 0xA); final int n11; return (n10 | (n11 = -5 - n6 + n5)) - (n11 << 1) + (n10 & n11);
Opaque constants:
final int n3; final int n4; return ((n2 >> ((((n3 = ~n) & 0x8020080) | 0x1020D008) + ((n & 0x8020080) | 0x1090224) ^ 0x192BD2AD)) % (((n3 & 0x400000) | 0x6C124108) + ((n & 0x400000) | 0x92011075) ^ 0xFE53517E) | (((n3 & 0x20100006) | 0x6041050) + ((n & 0x20100006) | 0x8210908) ^ 0x2E35195B)) + (((((n3 & 0x4812100D) | 0x48910) + ((n & 0x4812100D) | 0x1406080) ^ 0x4956F993) & (((n4 & 0x8004D601) | 0x11002040) + ((n2 & 0x8004D601) | 0x4800008A) ^ 0xD904F6CB) - (n << ((((n4 = ~n2) & 0x11C00100) | 0x2400200A) + ((n2 & 0x11C00100) | 0x2209800) ^ 0x37E0B90B))) ^ (((n3 & 0xCD40000B) | 0x12010200) + ((n & 0xCD40000B) | 0x41C80) ^ 0x20BAE160)) / (((n4 & 0x28808280) | 0x5061100) + ((n2 & 0x28808280) | 0x1001401A) ^ 0x3D87D399) * (((n4 & 0x2000000) | 0x80C01146) + ((n2 & 0x2000000) | 0x38054820) ^ 0xBAC5596C);
We can also perform other obfuscation passes such as control flow graph flattening and combine them together. This is a simple flow before obfuscation:
This is an extract of the same function after control flow graph flattening, visualized by the best effort of the [JByteMod] tool:
Although the generated bytecode is valid, the decompiler is struggling to make sense of it!
Current limitations
The first issue is that the LLVM to Java bytecode conversion is not 100% complete. This means that we are able to convert back only a subset of LLVM instructions. Another issue is that jumps to basic blocks located in the middle of a try/catch are not supported (which can happen in Java when using a finally block). This means that if we cannot figure out a way of translating some LLVM IR back to Java bytecode, we abort the conversion process. It usually happens when our obfuscations are too strong and we need to tweak them. This also means that we need to make our obfuscations "Java-aware", which somewhat breaks the original design of having "one obfuscation for all languages".
Moreover, Java annotations are not translated at all. We could embed them in LLVM metadata on each function, field or class file.
Finally, we are using an "x86_64" target triple in the LLVM IR, which is the closest we could find for representing a Java virtual machine. This is obviously a hack and creating a custom "java_bytecode" target would be better, allowing us to tweak optimization and obfuscation transformations as needed.
Conclusion
Translating Java bytecode to LLVM back and forth can work on toy projects, but the concept still needs to be validated in real-life scenarios. The main issue is going back from LLVM without injecting a full LLVM interpreter within the generated bytecode. This implies that only a subset of the LLVM IR can be translated back to Java, and that we need to be careful about which optimizations and obfuscations can be applied.
Going further
There are nonetheless aspects that can be improved. The first one is the Java bytecode generation: it could be implemented with a better algorithm to reduce the output size, removing useless type conversions or making sure that we do not skip local slots that are no longer used after the bytecode cleanup.
Moreover, we could also create custom annotations, allowing developers to specify the methods they don't want to obfuscate, or want to obfuscate with heavier protection. They could be integrated as LLVM metadata that Epona could then process.
Last but not least, in order to fix the fact that we try to use the LLVM IR for something it hasn't been completely designed for, it might be interesting to experiment with [MLIR], with a custom "Java" dialect that could be used as an intermediate translation IR.
Acknowledgements
Thanks to Adrien Guinet for supervising me and helping me to write this blog post. Thanks to Béatrice Creusillet, Juan Manuel Martinez Caamaño, Matthieu Daumas and the Epona team for welcoming me, the help they provided for the project and the blog post reviews. Thanks to Quarkslab for this interesting internship opportunity. | https://blog.quarkslab.com/obfuscating-java-bytecode-with-llvm-and-epona.html | CC-MAIN-2021-39 | refinedweb | 5,029 | 54.46 |
0
In the following program I'm getting the warning -> unused variable ‘fn’. I'm following along with a book so I don't know why it gave me that portion of code if it's unusable. Also, I don't understand this line at all.
->
void(*fn)(int& a, int* b) = add;
#include <iostream> void add(int& a, int* b) { std::cout << "Total: " << (a + *b) << std::endl; }; int main() { int num = 100, sum = 500; int& rNum = num; int* ptr = &num; void(*fn)(int& a, int* b) = add; std::cout << "Reference: " << rNum << std::endl; std::cout << "Pointer: " << *ptr << std::endl; ptr = &sum; std::cout << "Pointer now: " << *ptr << std::endl; add(rNum, ptr); return 0; }
Specialized SoundRecorder which stores the captured audio data into a sound buffer.
#include <SoundBufferRecorder.hpp>
Specialized SoundRecorder which stores the captured audio data into a sound buffer.
sf::SoundBufferRecorder allows to access a recorded sound through a sf::SoundBuffer, so that it can be played, saved to a file, etc.
It has the same simple interface as its base class (start(), stop()) and adds a function to retrieve the recorded sound buffer (getBuffer()).
As usual, don't forget to call the isAvailable() function before using this class (see sf::SoundRecorder for more details about this).
Usage example:
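The example code did not survive extraction; the following is a minimal sketch based on the interface described on this page (it requires SFML's audio module and an actual capture device, so treat it as illustrative):

```cpp
#include <SFML/Audio.hpp>

int main()
{
    // Check that the system supports audio capture first
    if (!sf::SoundBufferRecorder::isAvailable())
        return 1;

    sf::SoundBufferRecorder recorder;

    recorder.start();   // start the capture
    // ... record for a while ...
    recorder.stop();    // stop the capture

    // The buffer is valid only after the capture has ended
    const sf::SoundBuffer& buffer = recorder.getBuffer();

    // It can then be played, saved to a file, etc.
    buffer.saveToFile("my_record.ogg");

    return 0;
}
```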
Definition at line 44 of file SoundBufferRecorder.hpp.
Get a list of the names of all available audio capture devices.
This function returns a vector of strings, containing the names of all available audio capture devices.
Get the sound buffer containing the captured audio data.
The sound buffer is valid only after the capture has ended. This function provides read-only access to the internal sound buffer, but it can be copied if you need to make any modification to it.
Implements sf::SoundRecorder.
Start capturing audio data.
Reimplemented from sf::SoundRecorder.
Stop capturing audio data.
Reimplemented from sf::SoundRecorder.
Rubyists life made easier with composition operators.
Eugene Komissarov
If you write Ruby code and have wandered into the FP world, you might have just started writing those little tiny methods inside your classes/modules. And it was awesome to write code like this:
class Import # Some code goes here... def find_record(row) [ Msa.find_or_initialize_by(region_name: row[:region_name], city: row[:city], state: row[:state], metro: row[:metro] ), row ] end # record is one of: # Object - when record was found # false - when record was not found def update_record(record, attributes) record.attributes = attributes record end # record is one of: # false # ZipRegion def validate_record(record) case record when false [:error, nil] else validate_record!(record) end end # record is ZipRegion object def validate_record!(record) if record.valid? [:ok, record] else error(record.id, record.errors.messages) [:error, record] end end def persist_record!(validation, record) case validation when :ok record.save when :error false end end end
Yeah, I know there is YARD, and the argument types are somewhat weird, but at the time of coding I was fascinated with Gregor Kiczales's HTDP courses (that was a ton of fun, I sincerely recommend them to every adventurous soul).
And next comes dreadful composition:
def process(row, index) return if header_row?(row) success(row[:region_name], persist_record!(*validate_record(update_record(*find_record(parse_row(row)))))) end
The pipeline is quite short but already hard to read. Luckily, in Ruby 2.6 we now have 2 composition operators: Proc#>> and its reverse sibling Proc#<<.
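In a nutshell (a standalone illustration, not code from the project): `f >> g` calls `f` first and feeds its result to `g`, while `f << g` is the reverse:

```ruby
double    = ->(x) { x * 2 }
increment = ->(x) { x + 1 }

# f >> g : apply f, then g
puts (double >> increment).call(3) # => 7

# f << g : apply g, then f
puts (double << increment).call(3) # => 8
```

The same operators work on `Method` objects, which is what the refactoring below relies on.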
And, with a bit of refactoring, the composition method becomes:
def process(row, index) return if header_row?(row) composition = method(:parse_row) >> method(:find_record) >> method(:update_record) >> method(:validate_record) >> method(:persist_record!) >> method(:success).curry[row[:region_name]] composition.call(row) end
Much nicer, don't you think? Ruby just became one step closer to the FP-friendly languages family, let's hope there'll be more!
PLANT PHYSIOLOGY , Vol 105, Issue 4 1335-1345, Copyright © 1994 by American Society of Plant Biologists
M. Bachmann, P. Matile and F. Keller
Institute of Plant Biology, University of Zurich, Zollikerstrasse 107, CH-8008 Zurich, Switzerland
Ajuga reptans is a frost-hardy, perennial labiate that is known for its high content of raffinose family oligosaccharides (RFO). Seasonal variations in soluble nonstructural carbohydrate levels in above-ground parts of Ajuga showed that the RFO were by far the most predominant components throughout the whole year. RFO were lowest in summer (75 mg/g fresh weight) and highest in fall/winter (200 mg/g fresh weight), whereas sucrose and starch were only minor components. Cold treatment (14 d at 10/3°C, day/night) of plants that were precultivated under warm conditions (25°C) lowered the temperature optimum of net photosynthesis from 16° to 8°C, decreased the maximum rate, and increased the total nonstructural carbohydrate content of leaves by a factor of about 10, mainly because of an increase of RFO. The degree of polymerization of the RFO increased sequentially up to at least 15. A novel, galactinol-independent galactosyltransferase enzyme was found, forming from two molecules of RFO the next higher and lower degree of polymerization of RFO. The enzyme had a pH optimum of 4.5 to 5.0 and may be responsible for RFO chain elongation. RFO were the main carbohydrates translocated in the phloem, with stachyose being by far the most dominant form. Studies of carbon balance during leaf development revealed a transition point between import and export at approximately 25% maximal leaf area. RFO synthesis could be detected even before the commencement of export, suggesting the existence of a nonphloem-linked RFO pool even in very young leaves. Taken together, it seems that Ajuga leaves contain two pools of RFO metabolism, a pronounced long-term storage pool in the mesophyll, possibly also involved in frost resistance, and a transport pool in the phloem.
In this tutorial, we will show you how to install a file system uploader plugin in Arduino IDE to upload files to ESP32 SPI flash (SPIFFS). By default, there is no support in Arduino IDE to upload files to ESP32. But we can install a plugin to add a file uploading feature in Arduino IDE.
We have a similar guide with ESP8266 NodeMCU LittleFS filesystem:.
Flat Structure
Unlike the file system in Linux, ESP32 SPIFFS does not support directories. Instead, it produces a flat structure. If SPIFFS is mounted under /spiffs, then creating a file with the path /spiffs/tmp/myfile.txt will create a file called /tmp/myfile.txt in SPIFFS, instead of myfile.txt in the directory /spiffs/tmp.
Applications
The use of SPIFFS with ESP32 becommes very handy in various applicatons such as follows:
- Saving mesh and network configuration settings in permanent memory without any SD card
- While using ESP32 as a Bluetooth mesh node, its provisioning and configuration data must be stored in permanent memory
- In the case of an ESP32 web server, serving HTML and CSS files from SPIFFS
- Saving look-up tables to perform queries and retrieve data.
Related Projects: ESP32 Web Server with SPIFFS (SPI Flash File System)
Installing File System Uploader Plugin in Arduino IDE
ESP32 SPIFFS allows us to write files to flash. We can write our own code to upload files to the file system. However, by doing so, we have to write the code ourselves and include it in our application. Additionally, it will increase the actual size of our application.
Hence, we can use a plugin available for Arduino IDE to upload files. This plugin allows users to upload files to SPIFFS directly from their computers. It will make the whole process easier and let us work with files easily.
Installing ESP32 Arduino IDE
Before starting with the file system uploader plugin installation process, you should make sure that you have the latest version of Arduino IDE installed on your system. Moreover, you should also install an ESP32 add-on in Arduino IDE. You can check this tutorial:
Download File Uploader Plugin
(1) First, download the file uploader plugin for Arduino IDE. Go to this link and click on the ESP32FS-1.0.zip folder. When you click on the zip folder, it will download it to your selected location.
(2) After that, extract the zip folder and you will find ESP32FS folder inside ESP32FS-1.0.zip.
(3) Copy the ESP32FS folder and paste it to the tools folder in your Arduino IDE installation folder. The tools folder is available inside the Arduino IDE installation folder. In our case, it is available at:
C:\Program Files (x86)\Arduino\tools
(4) You have successfully added the ESP32 file uploader plugin to Arduino IDE. You just need to restart your Arduino IDE.
In order to check if the ESP32 FileSystem plugin has been installed correctly, open your Arduino IDE. After that, go to tools. You will find the “ESP32 Sketch Data Upload” option there.
Upload File to ESP32 using the Filesystem Uploader
In this section, let’s see an example to upload an example file to ESP32 SPIFFS using the plugin we have just installed. Follow these step by step instructions:
- First, open Arduino IDE and open the sketch folder. To open the sketch folder, go to sketch>>Show Sketch Folder. It will open the default folder where your sketch is being saved.
- Inside the sketch folder, you will find a sample Arduino sketch and a data folder. In case your sketch folder does not have a data folder, you should create one, because the files we want to upload to ESP32 SPIFFS go inside the data folder.
- Create a sample text file with any name. But you should use the same name in the Arduino sketch while reading a file. For example, we have created a text file with the name of “example.”
- Copy this example.txt file inside data folder.
- Finally, we can upload the file saved in the data folder to the filesystem of ESP32. To upload a file, go to tools and select ESP32 from tools>boards, and after that press “Tools > ESP32 Sketch Data Upload“.
Note: Make sure to select the ESP32 board before uploading the file. Otherwise, you will get this error: SPIFFS Not Supported on avr.
As soon as you click the ESP32 Sketch Data Upload” option, Arduino IDE will start uploading SPIFFS files image to ESP32 as follows:
Note: If you see the “Connecting …….____……” message in your Arduino console, that means Arduino IDE is trying to connect with ESP32. On some ESP32 boards you need to press and hold the BOOT button when you see this message.
After some time, you will see the message “SPIFFS Image Uploaded.” That means the files were uploaded successfully to ESP32 SPIFFS.
In this example, we are using only one file. But you can save more than one file in the data folder inside the sketch folder. It will upload all files to the ESP32 filesystem.
Example Sketch to Read SPIFFS File
Let’s see an example sketch to read the example.txt file, which we just uploaded to the ESP32 filesystem and print its content on the Arduino serial monitor.
Copy the following code to the .ino file of your Arduino sketch and upload it to the ESP32.
#include "SPIFFS.h" void setup() { Serial.begin(115200); if(!SPIFFS.begin(true)){ Serial.println("An Error has occurred while mounting SPIFFS"); return; } File file = SPIFFS.open("/example.txt"); if(!file){ Serial.println("Failed to open file for reading"); return; } Serial.println("File Content:"); while(file.available()){ Serial.write(file.read()); } file.close(); } void loop() { }
After uploading a sketch, open the serial monitor in Arduino IDE by going to Tools > Serial Monitor. Also, set the baud rate to 115200. Finally, press the enable button on the ESP32 board:
As soon as you press the enable button, it will print the content of example.txt file on the serial monitor as follows:
That means you have successfully uploaded the file to the ESP32 filesystem and also able to read the file.
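For completeness, writing works through the same SPIFFS.h library: below is a minimal sketch (for the ESP32 board only, assuming the same file name as above) that creates or overwrites the file from code instead of using the uploader plugin:

```cpp
#include "SPIFFS.h"

void setup() {
  Serial.begin(115200);
  if (!SPIFFS.begin(true)) {
    Serial.println("An Error has occurred while mounting SPIFFS");
    return;
  }
  // FILE_WRITE creates the file or truncates an existing one
  File file = SPIFFS.open("/example.txt", FILE_WRITE);
  if (!file) {
    Serial.println("Failed to open file for writing");
    return;
  }
  file.println("Hello from SPIFFS!");
  file.close();
  Serial.println("File written");
}

void loop() {
}
```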
Summary
By using the file system uploader plugin, we can easily upload files to ESP32 SPI Flash. On top of that, we can also read, write, delete and close files inside our Arduino sketch using the SPIFFS.h library available in Arduino IDE. You can check the following project to learn to serve HTML, CSS files to a web client stored on the ESP32 file system:
More ESP32 tutorials and projects:
- Getting Epoch/Unix time with ESP8266 NodeMCU through NTP server using Arduino IDE
- ESP32 with MPU6050 Accelerometer, Gyroscope, and Temperature Sensor ( Arduino IDE)
- LM35 Temperature Sensor with ESP32 – Web Server
- ESP32 web server control relay and 220 volt lamp
- ADS1115 I2C external ADC with ESP32 in Arduino IDE
- ESP32 Web Server Control Servo motor with Arduino IDE | https://microcontrollerslab.com/install-esp32-filesystem-uploader-in-arduino-ide-spiffs/ | CC-MAIN-2021-39 | refinedweb | 1,157 | 64.3 |
Create a new Apache Royale library project in Visual Studio Code - BowlerHatLLC/vscode-as3mxml Wiki
Learn to set up a project in Visual Studio Code to create a SWC library for Apache Royale.
Development Setup
Install the ActionScript & MXML extension for Visual Studio Code.
Create a new directory for your project, and open it in Visual Studio Code.
To open a directory, select the File menu → Open... or click Open Folder button in the Explorer pane.
Choose an ActionScript SDK for your workspace.
Create a file named asconfig.json in the root directory of your project, and add the following content:
{ "type": "lib", "compilerOptions": { "targets": [ "SWF", "JSRoyale" ], "source-path": [ "src" ], "include-sources": [ "src" ], "output": "bin/MyLibrary.swc" } }
In addition to include-sources, you may also use the include-classes or include-namespaces compiler options to include code in your library. See Library Compiler Options for asconfig.json for details.
If you need to set the config field to "js", you must replace the "JSRoyale" target with "JS" instead. Similarly, when using the config value "node", replace the "JSRoyale" target with "JSNode".
Create a directory named src.
Inside src, create any classes that you want to include in the library SWC. | https://github-wiki-see.page/m/BowlerHatLLC/vscode-as3mxml/wiki/Create-a-new-Apache-Royale-library-project-in-Visual-Studio-Code | CC-MAIN-2022-05 | refinedweb | 196 | 59.19 |
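For example, a single minimal class is enough to produce a valid SWC (the class name here is just an illustration):

```actionscript
package
{
    public class MyLibraryClass
    {
        public function sayHello():String
        {
            return "Hello from MyLibrary!";
        }
    }
}
```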
0
I am almost done with a project and now I would like to code an installer for it. I know there are many free solutions to make installer packages online, but I would enjoy learning how to make one from the ground up for the expirience if nothing else. What I have so far is designed to work on a Windows machine.
#include <cstdlib>   // for system()
#include <iostream>
#include <fstream>
using namespace std;

int main()
{
    // Backslashes must be doubled inside C++ string literals;
    // "\p" is not a valid escape sequence.
    system("copy %~d0\\package\\file1.exe C:\\package");
    system("copy %~d0\\package\\readme.txt C:\\package");
    //add to Start Menu etc
    return 0;
}
Note that %~d0 is the batch-file modifier that expands to the drive of the running script; it only has meaning inside a .bat/.cmd file, so it will not be expanded when passed to system() from a compiled program.
This is a VERY simple installation program. How would I unzip a zipped file, back up a section of the registry, install the program (copy files and edit the registry), and check that it succeeded? I know this is a lot to ask but advice on any section of this program would be appreciated. I will add the GUI later on. | https://www.daniweb.com/programming/software-development/threads/134088/installer | CC-MAIN-2017-47 | refinedweb | 167 | 72.46 |
Usual disclaimer - I don’t know Haskell at all. What follows is my rambling experimentation with the Maybe type.
Haskell has a useful type called Maybe. F# and probably most other functional languages have something similar. Even C# has Nullable<T>.

Maybe is a parameterised or polymorphic type that represents that we possibly have a value of the type parameter. So Maybe Int means maybe we have an Int, maybe we don’t (ie we have Nothing).
Because Haskell doesn’t allow null values, we might use Maybe for a function that returns its input only if the input is an even number.
giveIfEvan :: Int -> Maybe Int
giveIfEvan n = if n `mod` 2 == 0
                 then Just n
                 else Nothing
Maybe is a monad, which, to me, means primarily that it provides a function to convert from a Maybe of some type to a Maybe of some other type ((>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b).
Imagine I want a function that, given a Maybe Int, adds 1 to the value (if there is a value). One terrible way to write this is:
addOne :: Maybe Int -> Maybe Int
addOne n = if isJust n
             then Just (fromJust n + 1)
             else Nothing
or with pattern matching:
addOne :: Maybe Int -> Maybe Int
addOne (Just a) = Just (a + 1)
addOne Nothing  = Nothing
Since Maybe is a monad we can use do notation:
addOne n = do
  v <- n
  return (v + 1)
The benefit here is that addOne handles the Nothing case without an explicit conditional.

If you don’t mind using a lambda you can use the de-sugared version of do, >>= (bind):
addOne n = n >>= \v -> return (v + 1)
Or instead of a lambda we could define an extra function:
increment :: Int -> Maybe Int
increment v = return (v + 1)

addOne :: Maybe Int -> Maybe Int
addOne n = n >>= increment
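As a side note not in the original post: Maybe is also a Functor, so the same function can be written with fmap, which applies an ordinary function to the value inside a Just and leaves Nothing untouched:

```haskell
-- Equivalent to the addOne variants above: fmap maps (+ 1) over
-- the value inside the Maybe, and Nothing stays Nothing.
addOne :: Maybe Int -> Maybe Int
addOne = fmap (+ 1)
```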
Here is a complete program. Try changing the 8 to an odd number to see the Nothing case.
import Data.Maybe

giveIfEvan :: Int -> Maybe Int
giveIfEvan n = if n `mod` 2 == 0
                 then Just n
                 else Nothing

addOne :: Maybe Int -> Maybe Int
addOne n = do
  v <- n
  return (v + 1)

main = print (addOne $ giveIfEvan 8)
Wolfowitz: Je Ne Regrette Rien
Watching the news from Iraq this week, I couldn't help but wonder: is anyone going to pay for these crimes?
How is it that innocent people can die by the hundreds and thousands thanks to policy decisions made by Americans, and nobody here is held accountable?
And, no, I do not think that losing Congress or the Presidency amounts to being held accountable for mass carnage.
The worst part is that these people do not even hold themselves accountable. LBJ and McNamara suffered pangs of, I don't know, guilt, shame, anger, regret. Something.
Not this gang.
Interviewed by The Australian while on a visit to Melbourne, Paul Wolfowitz makes clear that, for him, all the war's costs have been outweighed by the accomplishment.
Maybe it did. It just didn't change anything important, like the slaughter of innocent people.
Iraq is infinitely worse off today than it was before the war. We have, essentially, destroyed a country and, in the process lost 3700 young Americans.
But Wolfowitz, Cheney, Bush, Rumsfeld, Perle, Kristol, Feith and the Members of Congress who voted for the war just sail on.
Like Edith Piaf, they have no regrets.
As for the rest of us, we see documentaries about the war. We get angry. And, in my case, I write posts. Big deal.
But few of us really do a goddam thing.
I couldn't agree more with your sentiments here. And while I think that it is our responsibility as Americans to police ourselves and punish these guys and others here like them, I sadly don't see it happening.
One thing that I would love to see is a united world response. If we here do not arrest and put on trial these kinds of criminals then the world should label us a rogue nation or worse and smack travel bans on us and our officials. Basically tell us "hey we don't like you and you're not welcome." And arrest anyone of them that's tied to these horrible crimes if they set foot outside our borders. I know...it's a fantasy land proposition for a plethora of reasons. First and foremost is economics. Tourism and trade would take huge hits and as we all know, money is what makes the sun rise and the world turn. But I'd love to see the world get fed up with our outrageous and at times bloodthirsty behavior and really knock us on our asses for it. After all, the only way American citizens really get their panties in a bunch is if they personally get the screws turned on them. But to be honest, more and more it seems that even then they may not even react. It's just too much of a bother...
August 17, 2007 1:29 PM | Reply | Permalink
They suffer pangs of "Why don't people realize how great we are?"
Seriously, they have convinced themselves that they are heroes, that those who die are pawns and that those who oppose are misguided fools who will one day see the light and apologize.
W. very seriously believes that we'll one day call him a great president and he probably thinks that he'll be around to hear us all apologize.
thosethingswesay.blogspot.com
August 17, 2007 1:40 PM | Reply | Permalink
It is in the nature of imperialism that the aggressor is not punished for her crimes. Look at Belgium. They invaded the Congo, exploited her rubber and in the space of 20 years (from 1890 to 1910) the native population was reduced from 20 million to about 10 million. Now that almost counts as genocide.
Yet in 1914 little Belgium was being treated as the poor abused victim of the hated Germans in the American press.
Definitely not fair, but those are the rules of the world.
August 17, 2007 1:49 PM | Reply | Permalink
There is something shocking about this post coming as it does just after Al Qaeda slaughtered something like 500 human beings! You are not calling for a doubling of the effort to deal with them, but calling for those who launched a war, against the lawful tyranny of the former government from a 20% minority that tried to hold 80%+ in bondage, to be held to account for liberating those people!
Who could doubt the extreme right wing nature of pseudo leftists ranting on about this revolution! Of course the war was illegal (laws are established by revolutions not the other way round!)
LBJ etc tried to prevent democracy in Vietnam and were soundly defeated yet you compare the liberation of the Iraqi masses with the former attempt to prevent the liberation of the Vietnamese peoples. Sheesh!
If anyone ought to be held responsible it ought to be those that have run US policy since WW2 almost totally rotten to the core.
The people of the ME deserve solidarity and unity in the struggle against oppression. All former US policies of promoting tyranny deserve to be swept into the dustbin of history and the reactionaries passing themselves as 'progressives' with them.
The liberation of the entire region is now on the agenda, starting with the establishment of a Palestinian state and the ending of 40years of a failed war for greater Israel and yet you are not demanding greater effort against the reactionaries but shaking your head and regretting that the monsters have been let loose.
Ah yes tyranny nice and safe hey: don't rock the boat or we could all end like so many slaves that have tried to rebel before us!
I don't think so!
Everywhere the old world is under challenge get used to it.
Patrick
August 17, 2007 2:10 PM | Reply | Permalink
The US aggression in Vietnam thirty-five years ago wasn't stopped by peace marches or letters to the editor, and in spite of its unpopularity it certainly wasn't ended by the Congress. It only stopped when the troops refused to undertake further offensive action against the citizens of Vietnam. These refusals were punctuated by insubordination, mutinies and fragging.
The government is smarter now. With a volunteer military it is handing out large sums of money for re-enlistments and enlistment referrals, as well as lowering enlistment standards, and is forcing soldiers into longer enlistments. Still, there are troops who understand the wrongness of the war. They need to be supported. IVAW--Iraq Veterans Against the War--is a good place to start.
August 17, 2007 2:16 PM | Reply | Permalink
I'm sorry Patrick. That wasn't Al Quaeda. That was the Kurds. Your allies, Patrick. They were just engaging in a little urban renewal, opening up more land for good old Sunni Kurds.
August 17, 2007 2:18 PM | Reply | Permalink
Val you have no evidence for your slur!
Patrick
August 17, 2007 2:22 PM | Reply | Permalink
I appreciate your outrage. It was well expressed. On a positive note, I was heartened by an article in the paper today on administration peace initiatives. It argued or at least claimed that, compared to 2000, we've the advantage now of involvement of Blair in helping to build a Palestianian future and of the Saudis and other Arab nations. It offers hope that, for all the spiral of violence that bolsters political extremism, we may be able to get past the Bush failure of leadership for years here. And a Democratic president is coming soon.
John
August 17, 2007 2:26 PM | Reply | Permalink
Veterans for Peace is also a great organization that needs support, and is also a good place to start.
~~~~~~~~~~~
Quidquid latine dictum sit, altum videtur.
Come visit PROJECT: Lucidity
Where everybody knows your name...
unless you use a pseudonym
August 17, 2007 2:28 PM | Reply | Permalink
More like Condi Rice is coming!
Patrick
August 17, 2007 2:29 PM | Reply | Permalink
Let's see now, Patrick claims al-Qaeda did it, but Professor Juan Cole thinks maybe it was done by Sunni Arabs fighting Kurds.
Ah, it's a tough choice, but I'll go with Juan Cole on that one.
The "liberation of the Iraqi masses" into an Islamic state allied with Iran? Is this the "noble cause"? Come on Patrick, you can do better than that. The US is fulfilling OBL's dream agenda: (1) We're out of holy Saudi Arabia (2) We overthrew the secular Saddam and (3) Instituted a new religious state in Iraq. Okay, it's Shia and not Sunni like OBL, but we gave him most of what he wanted--and thanks to continued congressional funding we're in it for the long haul. Pity that.
And now "liberation of the entire region"? Hold on tight.
August 17, 2007 2:33 PM | Reply | Permalink
There are over 150,000 contractors over there, too, doing jobs that were done by the Army a generation ago. Their casualty numbers aren't included in government war death stats.
August 17, 2007 2:46 PM | Reply | Permalink
I wonder...if someone would have shot Hitler or Goering or Gerbels dead in the years running up to the World war would the Germans have considered them heroes or traitors?
If someone were to assassinate Wolf-a-witch, or Rumsfeld, Feith, Perle etc. would they be heroes for bringing justice to these mass murderers(especially since Cheney has admitted knowing what would happen in Iraq on video before we invaded...and did it anyway) should they be considered heroes or traitors?
Here they aren't even considered criminals but in a world court they would be condemned. We have become so brainwashed by the "America can do no wrong" patriotism that we consider lying us into a war and raping another country political bad judgment.
I have to remind myself that Justice serves at the pleasure of the president.
August 17, 2007 2:50 PM | Reply | Permalink
But they are heroes who have saved the world. They have not avoided committing crimes but have hidden those crimes in some amorphous war-times powers excuse. They are covered by the all-powerful invisible cloak of GWOT. They are really untouchable. The only way to hold them accountable is for the Democrats in the House to gather their courage and impeach.
August 17, 2007 2:59 PM | Reply | Permalink
I remember hearing a piece on NPR quite a few years ago about Iraq. It was put together when Saddam Hussein was still in power. They talked about how the kids in Baghdad were pretty much like the kids anywhere in the world: cruising the city in their cars, listening to western music and hanging out with their friends. And I remember particularly one interview they did with a group of teenagers they met on the street. These were all apparently happy kids who talked about how they loved Britney Spears and Madonna and whatever else was popular at the time. And I remember one girl--I'll never forget her happy, innocent voice--saying, "We love America! We love America! Why you hate us?"
It was while I was listening to that piece that I realized I was being lied to about Iraq. By my own government. Just like we're being lied to about Iran now. I've often wondered if that girl made it through our Shock and Awe attack and the subsequent violence in her country.
Saddam was a horrible, violent, ruthless leader. But George W. Bush is infinitely worse.
August 17, 2007 3:10 PM | Reply | Permalink
First note what Valdron said, then consider just how silly this notion of separating Al Qaeda from other Sunnis that are prepared to conduct such suicide bombings is; Al Qaeda is not targeting Sunnis. These bombers are all equivalent! As with Japanese Militarism, Italian Fascism and German Nazism it does not much matter. The real question is what side are you on?
‘Sooner or later, my guess is that the Sunni Arabs will wage a major war with the Kurds over the oil fields of Kirkuk.’ Why ought they? If oil revenue sharing is proportional, what would they hope to gain but a disproportionate share? How would anyone support a disproportionate share for one ethnic group? South Africa and Israel are the former models but are well out of fashion in the modern era.
Valdron correctly recognized that I support the Kurdish peoples and the Iraqi government against such aggression. Do you?
BTW: Iraq is not an Islamic state allied with Iran, but rather a threat to the Iranian theocracy as democratic elections are looked on with envy by the Iranian people who will overthrow the theocrats. Remember how the police states of eastern Europe infected one another with regional effect. The ME is ripe for revolution.
Patrick
August 17, 2007 3:18 PM | Reply | Permalink
If you CLICK on the link provided by AJ you will see this post by AJ is 'timely' as the article is dated 8/18 Australia time.
He wasn't writing a column on al Qaeda, he was writing about those that got the US into IRAQ not having any regrets regardless of the carnage that is happening and with no end in sight. Maybe tomorrow he'll write about al Qaeda and the 500 dead.
I'm not even going to address that piece of nonsense, except to ask one question;
"Hey Iraqi, how is that liberation working out for you?"
He's also not demanding an end to world hunger or disease.
"There is something shocking about" your post in that you ignore MJ's point along with perhaps 100,000 Iraqi dead, close to 4,000 dead troops, another 20,000 wounded, many maimed for life, $10 billion per month down a rat hole, much of it corrupted, a military stretched to the limits, a record number of suicides among our troops....and finally; Not one piece of evidence has been uncovered that justified Bush's reasons for invading Iraq. No WMD, no al Qaeda connection, and certainly its been proven that Saddam was no threat to the US.
Would those 500 human beings you care about be dead today if Bush didn't invade Iraq?
I'm shocked, shocked I tell you that you don't post about these points.
August 17, 2007 3:18 PM | Reply | Permalink
W couldn't care less about his legacy. His only concern is to continue the reign of the oligarchs. Middle East oil and projecting our power there is the prop he needs to hold up the current hierarchy.
August 17, 2007 3:28 PM | Reply | Permalink
Val,
it used to be Bill Clinton that did all the evil, now its al Qaeda.
Did you notice how the Bush gang and our military rarely, if ever, mention the "insurgents" now, no more talk of civil war, its all al Qaeda now with little snippets about Iran thrown in.
The "Democracies" in Afghanistan and Iraq are frikkin jokes, Potemkin Villages.
"Hey Iraqi, did you vote in the last election?"
'Yes, I did."
"Show me your purple finger"
"I can't, my hand was blown off by a car bomb."
Sad.
August 17, 2007 3:31 PM | Reply | Permalink
"We have become so brainwashed by the "America can do no wrong" patriotism jingoism..."
FTFY
August 17, 2007 3:38 PM | Reply | Permalink
Hey, we Americans will do whatever it takes, just so long as American Idol isn't on the tube that night. Seriously, no, we won't do anything at all, and the primary reason is that the right wing in this country is so vicious and sociopathic that we are afraid of them. Never forget that about 30% of Americans believe we did the right thing by invading Iraq and killing hundreds of thousands of Iraqis. To them the only good Moslem is a dead one. Great country we have here.
Hoppy in Sacramento
August 17, 2007 3:38 PM | Reply | Permalink
Saddam was supported by about 20% of Iraq. Bush is supported by about 30% of America. See how much better Bush is than Saddam? On any other basis, you are correct.
Hoppy in Sacramento
August 17, 2007 3:46 PM | Reply | Permalink
Neocons believe and live by the belief that superior people have the right - and the duty - to rule inferior people.
Neocon philosophy echoes the 'apology' delivered by Robespierre to justify the Reign of Terror, "Out of pity and love of humanity, you must be inhuman."
If you want to elicit fits of hysteria among neocons, just suggest to them that their belief system bears an uncanny resemblance to that of their declared enemy, communism, - "The end justifies the means."
August 17, 2007 3:46 PM | Reply | Permalink
I think the effort in WW2 was worth it and that the effort in Vietnam (when the US was the enemy) was also worth it even though the Vietnamese peoples suffered in their millions because 'the dust wil not vanish where the broom does not reach'.
But you say ‘and finally; Not one piece of evidence has been uncovered that justified Bush's reasons for invading Iraq. No WMD, no al Qaeda connection, and certainly its been proven that Saddam was no threat to the US.’ All true.
Your problem is that you are blind to the real reason the U.S. ruling elite decided to go to war.
Bush is no lefty because he is waging this war in the interests of his class, and it is only incidentally in the interests of the oppressed peoples’ in the Middle East. He has become a progressive-right-winger because history thrust greatness upon him.
He asked the big questions that had to be asked, after 9/11. What more can they do to us? ...Well, Mr. President they will, if not stopped, eventually get hold of a nuke and destroy Washington or some other city.
What strategy must we adopt to defeat them? Mr. President we must set down policies to turn every country in the world into a modern (bourgeois) democracy. If all countries look, and smell like Sweden and France, we will have won. The world needs sewerage systems for the smell, and industrialization for the sewerage systems; it needs education for the industrialization, and it needs basic bourgeois political freedoms to permit the education… We must stop doing what we have been doing for the whole post WW2 period.
We must reverse all our old policies.
These mosquitoes are attacking us because we caused a swamp in the Middle East which breeds them! We must drain that swamp, and then there will be no more mosquitoes. Mr. President there is no other way of winning this war…. (At least that is what I would tell him if I was in the war cabinet)
If you want to think more about the type of issues that I am bringing up then go to
and have a look at the Draining the swamp thread in the forum, or democracy was intended for Iraq
Your ruling class lies to you and has initiated this war for strategic reasons. The US ruling elite sat down after 9/11 and accepted that the former policies of propping up every tyrant and reactionary against the peoples struggle for democracy (while lying about supporting democracy) had resulted in the disaster.
All the former policies have been reversed and democratic revolution is now being supported.
Static thinking is your problem.
The Iraqi people fighting for their liberation against a heavily armed Baathist regime with tanks and artillery helicopter gun ships intact command and control etc would require many more casualties from the side I support. Then at the end of that struggle the enemy could still resort to the type of bombing we see now!!
Patrick
August 17, 2007 3:51 PM | Reply | Permalink
Interesting point hoppy - the whole fear of the right thing.
A thought just flashed through my "it's Friday and I can't wait until 7pm and my first Guinness" mind that is a little disturbing but I still find myself smiling about it. And as a childhood conservative that grew up and found a conscience I think it could be considered ironic.
In my little fantasy I'm the "dictator for the day" here in America. I get to rule on everything. And you know what I do? Why I treat all of the rightwing lunkheads out there that hate anything that's not white, Christian and rich with a drawl and I do unto them everything they bellow about doing unto others. I round them all up. I toss them all in cells and do some alternative forms of questioning. I ban their religion and label it a fascist and terrorist organization and shut that thing down too. And by the time I wake up the next day I find that we all miraculously are getting along much better and not hating or trying to kill or control the entire freaking planet. And I smile a little because it's at that moment I realize just how much fun it could be to scare the bejeezus outta the right using their own devices. And they'd be scared of me too because as a former member of the right...they'd be fully aware of just what I'd be capable of! >:D
August 17, 2007 4:04 PM | Reply | Permalink
Spot on. I read a great theory a number of years ago about why the US had succeeded in forming a representative democracy and the French had gone the way of the reign of terror. The author linked the French Revolution, Leninism, Nazism as coming out of romantic utopianism. The utopian end justifies the means, and the means then terribly corrupts the utopian end.
The US was more fortunate coming out of the more rational Enlightenment. Gore speaks to that in his "Assault on Reason".
Neocons are corrupt utopians.
August 17, 2007 4:09 PM | Reply | Permalink
Patrick,
The US military doesn't consider it "silly" to separate the Sunnis from al-Qaeda. The US is now arming Sunnis (former American-killers) in Anbar Province to fight al-Qaeda, who have been killing Sunnis.
Who are you to suggest that the Sunnis shouldn't fight the Kurds? Are you an Arabic-reading expert? What are your qualifications? The Sunnis once ran the country, they have now left the "government", and in a civil war power comes out of the barrel of a gun. (It's US policy also.) Why do think that the Sunnis should trust the Shi'ites? Saudi Arabia certainly doesn't, and so they are supporting the Sunnis too.
Finally, you've got the Iraq/Iran relationship all wrong. Maliki and the Iranian mullahs are as tight as a drum. Iran has been aiding Iraq for at least two years. recent news:// The Iraqi Prime Minister Nouri al-Maliki has called for the expansion of ties with Iran, requesting Tehran's cooperation in rebuilding Iraq. In a meeting with Iran's Vice-President Parviz Davoudi on Wednesday, al-Maliki expressed hope that Iranian companies would welcome investments on the fields of energy, industry and commerce.//And Iran just gave Maliki a new Airbus 300 aircraft.
August 17, 2007 4:10 PM | Reply | Permalink
I have been reading the ravings of Hitchens for a few years now, but this is the first time I have encountered anyone repeating his rants.
Is that you Chris, hiding in there?
August 17, 2007 4:11 PM | Reply | Permalink
I got more evidence for the Kurds than you got for Al Quaeda, Patrick ol boy.
And it's a well established fact that the Kurds are engaged in good ole ethnic cleansing.
They gots to, you see. They need that oil, and that means all those Turkmen, Assyrians, Sunni Arabs and Yeziday, who might otherwise contest that claim for oil have to be removed, all permanent like, or they'll never have their Greater Kurdistan.
That's how the game is played.
August 17, 2007 4:15 PM | Reply | Permalink
"Worth it" to who? To the millions of Vietnamese who suffered? The 58,000 Americans who died and the untold numbers of wounded? To who?
What was the outcome of our war in Vietnam?
What was "worth it"? Worth what? What was gained?
'a broken broom sweeps not clean'.
August 17, 2007 4:30 PM | Reply | Permalink
The US under Bush is trying to 'drain the swamps' of the Middle East - that is, undermine and overthrow the dictators it has propped up for 60 years - and we should be supporting them, and demanding they do more and go faster.
Sure, Patrick, that's why we're giving all that aid to Saudi Arabia, the Gulf emirates and Egypt, and why we continue to support Jordan and Kuwait.
Horsepucky.
August 17, 2007 4:31 PM | Reply | Permalink
You're pre empting the Historians. :-)
August 17, 2007 4:32 PM | Reply | Permalink
I'm afraid I have to disagree with Cole on this one. Look at how the players stack up in this region.
The Sunni Arabs in the region are playing defense, they're being attacked and ethnically cleansed by the Kurds who are making a play for control of strategic and oil producing borderland and mixed areas. The Kurds have American support and pretty much a free hand to do as they like. The Sunni Arabs simply don't have the same strategic advantages.
Instead, the biggest and best card that the Sunnis are able to play is to ally with minority and marginal groups in the area in an anti-Kurdish front. These groups, the Turkmen, Assyrians, Yeziday are all extremely vulnerable, they're small minorities, but they are very well established with long histories in the region. They have nowhere to go. To survive, they need to ally with one of the countries major factions. The Kurds are out to displace them. The Shiites are too remote. That leaves the Sunnis.
Juan Cole is correct that someone is clearing the deck for a Sunni/Kurdish showdown. But it makes no sense for the Sunni to start toasting their natural allies. It makes a lot of sense for the Kurds to clean house.
Sorry, Juan Cole is an invaluable resource and he's definitely the expert. But he called this one wrong.
Oh, and whatever it is, it's not Al Quaeda. Al Quaeda claims its victims, it issues press releases, makes announcements, and goes out of the way to take the credit.
No one is taking credit for this. This is another finger pointing at the Kurds. They can't afford to take credit overtly, it would get them in dutch with their big dog.
August 17, 2007 4:32 PM | Reply | Permalink
I'm sorry Patrick, but that's just self evidently dumb.
Because the Kurds want an independent state, or as close to independent as they can get. They need those oil fields, they're vital to Kurdish aspirations. And the Kurds aren't interested in sharing. Everyone knows this.
Glad to hear it. How many American lives will you spend to support the Kurds, because their plans involve Americans dying on their behalf in large numbers for the Greater Glory of United Kurdistan.
Actually, Iraq's ruling political parties, SCIRI and Dawa are closely allied with Iran. Iran gave these parties political asylum during the Saddam era, funded them, and helped arm their militias. Maliki and other Iraqi government figures frequently visit Iran. Basically, once America leaves, Iran will inherit Iraq. Thank Bush for that.
August 17, 2007 4:42 PM | Reply | Permalink
The representative democracy was so successful that it enslaved nearly half of the population of the south and plunged the country into a civil war that killed more than 10 times what the French terror did.
You call that being more fortunate ??
August 17, 2007 4:43 PM | Reply | Permalink
The Germans liked Belgium so much they invaded them twice.
August 17, 2007 4:53 PM | Reply | Permalink
And before them the French, the British, the Dutch, the Danes, the Swedes, etc. etc. Basically, there was a time when they wouldn't let you into Europe if you didn't invade Belgium. It was like part of the membership requirements.
August 17, 2007 4:58 PM | Reply | Permalink
Patrick:
To sound like Goebbels does not help your argument. They are attacking us because we are occupying them.
Now, a linguistic question: If you call those people mosquitoes, what does that make you? A cockroach?
August 17, 2007 5:02 PM | Reply | Permalink
I'm not sure Vietnam was worth it to America. But I'm not sure what you are saying about Vietnam.
As for the real reason Bush wanted to go to war, that's simple. Oil. Strategic control of Iraq's oil reserves, which would mean strategic control of the entire Persian Gulf, intimidation of Iran, breaking the back of OPEC, and using domination of the planet's key resource to ensure America's economic and political hegemony indefinitely.
None of this crap about liberating people, bringing forth the revolution, or any of that. Free people have a disconcerting habit of acting in their own interests, rather than how you want them to act. Dictators are always much safer.
As for whether Democracy was ever intended for Iraq, that's a dubious proposition at best, considering the maneuvering to put Ahmad Chalabi in power.
August 17, 2007 5:04 PM | Reply | Permalink
Everywhere the old world is under challenge get used to it.
A few hundred, or a few thousand, random murders does not a new world order make.
We've had serial killers with three-figure body counts. Somehow, our Republic still managed to survive.
About three thousand people died during the 9/11 terrorist attacks. During that same year, over 50,000 people in the United States died in car crashes.
The terrorists simply are not a substantial threat to our way of life. We need to treat this as an international law enforcement issue against a handful of rogue murderers, not declare war against the entire Islamic world.
August 17, 2007 5:32 PM | Reply | Permalink
Patrick
Just a point about "draining swamps" - it is highly recommended that you have some people involved that know a thing or two about say the ecosystem in the swamp and something about the other life forms that live in the swamp that may be harmed if you just go a-draining things. Oh yeah and it might also be helpful if you knew exactly where you were "draining" all that swamp to because that guy living next door might not take too kindly to finding all that swamp in his backyard tomorrow...
You see the leaders in our country and the vast majority of all of those rabid pundits and think-tank dregs operate on little more then opinion and prejudice. I'm reminded of that Jon Stewart segment - They don't know Dick. They've been disastrously wrong on so many things so very often that I'm surprised we don't simply do the exact opposite of whatever they say. I'd be willing to bet things would improve all over the world dramatically in one week if that's what we started doing.
August 17, 2007 5:54 PM | Reply | Permalink
Patrick, up until now I have occasionally troll-rated you because of your mindless quotings of neo-con talking points.
I am beginning to rethink some things; I realize that you may not be an actual troll, but instead that you are one sick puppy. I won't go into all of your rantings, but the Vietnam bullshit just stands out.
According to you, we LOST Vietnam because of lack of support for the powers that be, etc, right? So, that means we lost. We went in (theoretically) to keep it from going Communist. Well, we LOST, but lo and behold, Vietnam isn't communist; we do business with them on a daily basis.
So, if our 58,000 young men and women, and the untold millions of Vietnamese hadn't died, how would things be different? (Since we LOST, and they trade happily with us, how would things be different if we had WON?) But don't lose that thought of the 58,000 and untold millions who died -- who of them might have been a real leader? Who of them might have made this world a better place? (If the coward chickenhawks Bush and Cheney had stepped up to the plate and NOT survived Vietnam, I am CONVINCED the world would be a better place, but that is another argument entirely.)
Hell, China isn't even communist any more. Why? Because they caught on to the idea of capitalism. Yea US! We taught China so well that they are poisoning our pets and children in the name of capitalism!
You paranoid sociopaths have to get over the idea that WE know best! You paranoid sociopaths have to get over the idea that the end justifies the means! You paranoid sociopaths have to get over the idea that the love of, and the act of making money is the root of all happiness!
And, Patrick, you really need to check yourself in to a major mental health facility! Trust me, I am a nurse, and I know pathology when I see it.
Jan
August 17, 2007 6:09 PM | Reply | Permalink
Jan, according to me the Vietnamese people won (and I supported them!). The US ruling class had its ass kicked soundly. The lives lost were because of US aggression on the part of 'nice' liberals like Kennedy.
Now take a deep breath and stop jumping to the conclusion that I would believe that "the act of making money is the root of all happiness."
Patrick
August 17, 2007 6:10 PM | Reply | Permalink
There is one important thing we can all do. It seems very small and insignificant, but if you don't do it, you shouldn't be offering your 2 cents worth on these blogs because it isn't worth that much. Write to your congressman and both senators and tell them how you feel about it. Blog readers can't vote on anything that isn't on a referendum.
August 17, 2007 6:29 PM | Reply | Permalink
killing "wogs" is the oldest sport in the history of amerika.
it is the amerikan birthright, so to speak.
and it started at the outset of the round-eye, big nose invasion of this hemisphere.
and none of the homicidal gangsters have ever paid any penalty. why? because amerikans like to kill those that they think to be untermenschen.
what is novel in this latest episode of mass murdering of "wogs" is that those who were victims[aka "wogs"] of mass murdering in europe 65 years ago are now the proponents of mass murdering[genocide].
in a very real sense, the amerikan mass murdering of middle eastern "wogs" has been instigated by those who you might have thought would be opposed to "genocide"....the religious victims of the thousand year reich - jews.
when i think of israel, i think of amerika from 1865-today. amerika's motto: "the only good indian is a dead indian".
in israel today, the motto is: "the only good palestinian[non-jew] is a dead palestinian."
before your very eyes, a pogrom is being pursued.
the israeli objective, financed and armed by the united states of amerika, is to eliminate all non-jews from the middle east.
a startlingly similar objective to ah's.
August 17, 2007 7:05 PM | Reply | Permalink
Feed The Snake Its Own Tail
The lame and crippled democracy of Iraq is not even as noble as the Mob with the guillotine was at the dawn of the French Revolution, and this is now four plus years out. Yet this is the best reason this revisionist can offer for his miserable failure as Deputy Secretary of Defense; and it is most assuredly a revised vision of the Iraq War by Wolfowitz. In a May 2003 interview with Sam Tanenhaus from Vanity Fair, Wolfowitz expressly ruled out freeing the Iraqi people as a justification for bleeding American Soldiers in Iraq. Trouble is that:
There Were No WMDs
There Were No WMD Production Facilities
There Were No Ties Between Saddam and al Qaeda
The Threat Was Not Imminent
There Was No Gathering Storm
What else is left but to claim that this bloodbath of ethnic purges, which far surpasses even the most bloated estimates of Hussein's killing fields; that this pillaging of the American treasury, which in comparison gives the Oil for Food Scandal an aura of a two-bit grifter's scheme, is a worthy outcome for his dream's obscene manifestation in reality.
Feed the snake his own tail. Make him choke on his own words. What could be more appropriate to this end than a dish served off of the DoD's own servers?
The north of Iraq was not under Saddam's control, but was instead under Kurdish control, and protected by US/UK/French over-flights. "this guy Zarqawi whom Powell spoke about in his UN presentation" was a one-legged Baghdad Hospital guest at Saddam's behest, who had lost his leg in Afghanistan fighting against American forces, btw. This was before he became Iraq's Qaeda One. Is there anything at all in this analysis that was correct?
August 17, 2007 7:25 PM | Reply | Permalink
There was no waffling back then, they went for the mussels in Brussels.
August 17, 2007 8:24 PM | Reply | Permalink
Trust me, I don't know what I'm talking about on this subject which is why I deferred to Juan Cole.
I take your point on the Yezidi, but there's this from wikipedia [I know]:
It is alleged by some[attribution needed] that during the regime of Saddam Hussein, Yazidis were considered to be Arabs and maneuvered to oppose the Kurds, in order to tilt the ethnic balance in northern Iraq,[citation needed] but this cannot be entirely substantiated. It is known, however, that the Yazidi's unique identity, despite being culturally Kurdish, was in fact used by the Baathist regime to isolate one from the other. However, both groups fought against Baathist troops, often in joint Peshmerga units. Since the 2003 occupation of Iraq, the Kurds want the Yazidi to be recognized as ethnic Kurds to increase their numbers and influence.
August 17, 2007 8:41 PM | Reply | Permalink
I generally defer to Cole myself. But in this case we have an unattributed massive bombing right in the middle of a zone of Kurdish expansion? Who is kidding who?
The Yazidi are part of an ongoing regional contest.
August 17, 2007 8:49 PM | Reply | Permalink
Not in any sense useful--except perhaps as a kind of personal therapy. This kind of screed convinces no one, and adds nothing to the discourse. America with the k, been there, done that: how sixties. It didn't do any good back then, either--saved no lives, preserved no integrity, stopped no atrocities. Maybe then, too, it served as a kind of personal therapy.
"Wogs," which has a specific corrupt meaning but an English one, brought in for who knows what reason.
MJ offers an elegiac piece, an apology for those complicit in destroying Iraq from one who was not, and this kind of thing cheapens and debases it, and diverts attention from the atrocity which is Iraq to no purpose. How many Iraqis are saved by the Israel-Palestine analogy? There will be plenty of times to take this conversation down that road when MJ chooses to begin it in that direction. It will be microseconds now before persons here will be tossing Nazi and Fascist at each other. Thanks for nothing.
aMike
August 17, 2007 8:49 PM | Reply | Permalink
We don't mind: we're used to it. Happens all the time, and we're far less sensitive than the economists are. :-)
aMike
August 17, 2007 8:52 PM | Reply | Permalink
amike, I happen to believe that we need a shot of "therapy" now and then. A wake-up call, with lots of caffeine. Hot, straight and black, no cream or sugar please, and hold the fancy rhetoric.
One can make the case that the Iraqis, the poor shot-at Iraqis, are our Palestinians. Albert Champion makes that argument. We are, as he indicates, doing the bidding of the Israelis in the Middle East, and that includes a lot of killing.
Fellow bloggers, I support Albert Champion. Bring on the 0's and 1's. I look forward to them.
August 17, 2007 9:27 PM | Reply | Permalink
Much as I agree with the general sentiment and believe the war to be illegal, and the inner administration circle involved directly in its prosecution to be criminals, I would like to leap to the defence of Edith Piaf.
The whole point about the song is the recognition of her own frailties, her successes and failures, good and bad, loves lost. She blames no one but herself, and would live her life the same (maybe?).
"Je repars à zéro..." and, the final line, "Aujourd'hui, ça commence avec toi!" affirming her appreciation of and attachment to her fans.
She rises far above the GWB's and the non-apologetic neo-cons of this world who are too cowardly to openly declare their real agenda.
[If accents don't print: "Je repars a zero..." and "Aujourd'hui, ca commence avec toi!"]
August 17, 2007 9:42 PM | Reply | Permalink
Somebody at the DOD must have left that interview up to show what the dumb, ignorant, self-righteous, flaming a**hole sounded like in his drivel from 2003.
Understatement of the 2003 interview by Wolfie:
...there might be some inter-communal violence if he (Saddam) were removed.
August 17, 2007 10:46 PM | Reply | Permalink
Patrickm, how many active duty military tours have you or your fellow geo-political analysts at lastsuperpower spent 'draining' the 'mosquito swamp' in the Middle East? Bush and Wolfowitz aren't draining a swamp but creating a super sized one.
In the fifth year, are the mosquitoes diminishing or proliferating, has the draining process become plugged up with dead bodies? At half a trillion and counting, all paid for by the USA, how long do smart guys like you think we can afford to keep playing plumber in what used to be Iraq?
August 17, 2007 10:47 PM | Reply | Permalink
I wonder if it has struck anyone else just how handy al Qaeda and their atrocities have been to the fortunes of Bush Inc.'s gang of Neo-Cons.
I’m sure it's just an innocent symbiotic relationship that finds al Qaeda doing just what Bush Inc’s policy makers need an enemy to do…and so on cue…
August 17, 2007 11:02 PM | Reply | Permalink
Hell, some of them were once Communists and/or the children of Communists, of the Trotsky sort mind you. I'd say they were Trotsky's revenge on the world's people.
August 17, 2007 11:15 PM | Reply | Permalink
Ouch...that one made my eyes sting! LOL
August 18, 2007 12:39 AM | Reply | Permalink
Thanks for clarifying this. Being related to a fabulously successful economist I have evidence to the contrary. Aside from Krugman and a few here (and there) I would have used the term "sensitive" to describe an economist's acute ability to squeeze a buck.
The aforementioned explained that while historians are mainly "alpha" economists are primarily "numeric", boasting that they use the whole keyboard to advance their position.
When this person can drop a million on a flat in town as a convenience for late night business, it's hard to argue.
August 18, 2007 3:07 AM | Reply | Permalink
Oh yeah? Well, well, he did have Weapons of Mass Destruction related aspirations.
-George Bush
(smirk)
August 18, 2007 5:05 AM | Reply | Permalink
For me, the worst of the war criminals is the guy General Franks called "the stupidest fuck on earth," Doug Feith.
And where is he today? Teaching at Georgetown University.
This is a metaphor for the decline of America. Kissinger wanted to go back to Harvard after leaving the State Dept, but couldn't because Harvard said no (the university feared riots). Kissinger, of course, was a big Harvard prof before going into government and, despite a later career as a war criminal, was a distinguished academic. Yet HU said "no."
Now I know Georgetown aint Harvard. But it's a Jesuit school. Jesuit! And it welcomes not just a war criminal but a fool with no credentials. And it does so knowing that in these days, the students will just go along.
The Feith appointment at Georgetown is the single most disturbing move of any of the Iraq war criminals. Georgetown! Society of Jesus! War criminal Douglas Feith! St. Ignatius of Loyola must be spinning.
August 18, 2007 5:05 AM | Reply | Permalink
aMike is fined $10.00 for forcing me to go to the dictionary to look up the word "elegiac," and find this:
Etymology: Late Latin elegiacus, from Greek elegeiakos, from elegeion
1 a : of, relating to, or consisting of two dactylic hexameter lines the second of which lacks the arsis in the third and sixth feet.
(*&&^%$#@
:-)
August 18, 2007 5:16 AM | Reply | Permalink
Hector
My wife and I went last evening to see Charles Ferguson's extraordinary film, "No End in Sight". I'd add just one friendly amendment to jophusa's (in my view) importantly correct suggestion: When you write to your representative and senators, urge them to see "No End in Sight". This week. And to pay attention, and to watch it through to the film's last words, from a US Marine who strikes me as an extraordinary human being.
August 18, 2007 5:24 AM | Reply | Permalink
"The liberation of the entire region is now on the agenda"
Oh, Jesus Christ. Finish junior high school first and then we'll talk. America is not somehow inherently on the good side simply by virtue of being America. We're the good guys when we act like the good guys. When we launch unprovoked "Shock and Awe" terrorist attacks on other nations and make alliances with guys like Musharraf, then we're the bad guys. Right now, in America, the biggest battle is the battle to capture and bring to justice the people who subverted our constitution and our democracy in order to launch an evil, ill-advised and illegal preemptive military action against a sovereign nation that had not attacked us or threatened to attack us (except in the criminally-paranoid minds of our neocon friends in power).
You must have some neighbor kids somewhere who need to be "liberated" from their mean mother or something, don't you? Have at it, hero. See you when you get out of jail.
August 18, 2007 6:52 AM | Reply | Permalink
Yeah - makes me think of all those swamps that were drained in Florida from 1920-1970. As a result of which vast areas of the state are turning into deserts and in many cases collapsing into sinkholes. Requiring a huge program to refill as many of the swamps as possible.
sPh
August 18, 2007 7:45 AM | Reply | Permalink
Howard will have a 1500 word essay on this word posted by this afternoon ;-)
sPh
August 18, 2007 7:47 AM | Reply | Permalink
The worst part is that these people do not even hold themselves accountable. LBJ and McNamara suffered pangs of, I don't know, guilt, shame, anger, regret. Something.
What does Madeleine Albright feel for the sanctions regime that made this invasion, if not inevitable, much more likely?
August 18, 2007 8:11 AM | Reply | Permalink
When you can write the script you do have the possibility of a symbiotic relationship that flowers. Imagine how effective a boogie man under the bed can be if used properly. It isn't like you have to worry about the boogie man actually coming out and refusing your script.
You might want to give a little thought to the close business relationship between the Bushes and the bin Ladens, and to the way Osama had such an easy time avoiding capture or killing by US forces.
Hoppy in Sacramento
August 18, 2007 8:46 AM | Reply | Permalink
"inevitable, much more likely?"
Who believes the invasion of Iraq was inevitable or much more likely? Certainly not Cheney who was pontificating against overthrowing Saddam, not only in 1992, but as late as 2000.
What made the invasion of Iraq inevitable?
August 18, 2007 8:50 AM | Reply | Permalink
sphealey,
HAHAHAHHAHHAHAHA
That's our Howard! :-)
August 18, 2007 8:53 AM | Reply | Permalink
The Bush enabler, John Yoo, is teaching Law at Berkeley.
How the f**k did that happen?
Bush: "John, I need a legal opinion allowing me to create concentration camps."
Yoo: "Comin' right up Chief."
August 18, 2007 8:54 AM | Reply | Permalink
Ward Churchill was fired for his irresponsible comments regarding 9/11 and "little Eichmanns." (Officially, he was fired for academic misconduct, but no one can actually be naive enough to believe this cover story.)
Why shouldn't John Yoo be fired for his equally loathsome (and far more influential) defense of the Divine Right of Presidents and his encomiums to torture?
August 18, 2007 8:59 AM | Reply | Permalink
The appointment of George W. Bush to the Presidency?
Let's go in the wayback machine:
"Fuck Saddam, we're taking him out!"
or even further back:
"To be successful, you've got to be a war President."
August 18, 2007 9:07 AM | Reply | Permalink
a fancy way of saying that 6 people kick you in the arsis but #3 and #6 miss their target.
August 18, 2007 9:26 AM | Reply | Permalink
As distinct from the understanding of the sense of humor of computers, recognized by really good computer scientists and engineers: "If you ever encounter a computer with feet, never bend over near it."
--
Howard
*equal opportunity offense to both extremes*
"Those who cannot remember the past are condemned to repeat it" [George Santayana]
August 18, 2007 9:49 AM | Reply | Permalink
The mind of Wingnutus Limbaughtomi: nothing bad is the fault of The Decider because we voted for him, ergo, blame the Iraq fiasco on Madeleine Albright and her boss, Bill Clinton.
August 18, 2007 10:05 AM | Reply | Permalink
When I saw the title I didn't know whether MJ knew the reference to Piaf's song until I read further.
I thought it an insult to link the too-tender-hearted Piaf in any way with the ones Kurt Vonnegut accurately described as psychopaths.
Glad to see you stepped up to defend her -- and much better than I could have.
August 18, 2007 11:29 AM | Reply | Permalink
I wouldn't rule out some historical parallel. After failing in a coup in still-French Algeria, the Foreign Legion's 1st Parachute Regiment sang it, on their way to barracks where their unit was to be dissolved in disgrace.
--
Howard
*equal opportunity offense to both extremes*
"Those who cannot remember the past are condemned to repeat it" [George Santayana]
August 18, 2007 12:23 PM | Reply | Permalink
There's also the little problem that all the things I think of right away (specifics edited) are all quite felonious.
I've been trying to stimulate y'all to the hard work of building a long-term civil movement of resistance, yet getting far fewer takers than total mis-understanders on the blog comment wires.
So it's not fear of the right-wing sociopaths in themselves, it's more the fear of the severe legal consequences, the disruption to the family and stability I have invested in the reality of my private life, and the fact that my daily habits are so pacifist and oriented towards fulfilling my obligations to my three jobs, at age 56, that to actually take steps in the directions of my war-like thoughts would completely overturn most of my carefully-constructed infrastructure of mental stability.
So although I know most everything necessary to be the leader of the underground army, in reality it cannot happen.
Yet I imagine a young person with little to lose ... Is America so stable that we are not five to seven years from some severe social strife? (And I would definitely predict at least localized conflict if civilized systems fail to keep delivering food to the cities, anytime in the next generation or so.) Are we really that much more stable than, say, Yugoslavia in 1985 or Northern Ireland in 1965?
August 18, 2007 1:00 PM | Reply | Permalink
Rating: Hilariously funny, yet more than a little dismissive of the complexity of the great wheel of all human history ...
August 18, 2007 1:04 PM | Reply | Permalink
Every time a neocon gets athlete's foot he blames Bill Clinton.
August 18, 2007 1:10 PM | Reply | Permalink
An example from the early dawn of Dubya Dim's Administration, before their miserable failure to fulfill the honourably sworn duty of defending America. Back in the idyllic times Mr. Bush was best known as an abattoir for allegories. A Press Conference from February 22, 2001:
Even then, it was a black and white world for GW: Have it my way,
or
through the bomb bay...
August 18, 2007 2:02 PM | Reply | Permalink
Much too simplistic an analysis, one that doesn't even begin to pierce the complexities, but is a vehicle enabling agitprop fantasies.
August 18, 2007 3:09 PM | Reply | Permalink
I'll bet they didn't sing it as well as Edith.
August 18, 2007 4:18 PM | Reply | Permalink
Probably not -- although I have to say that our military, if nothing else, has lots of great musicians. I met a colleague in his second career as a programmer, after he retired as arranger for the Army band.
--
Howard
*equal opportunity offense to both extremes*
"Those who cannot remember the past are condemned to repeat it" [George Santayana]
August 18, 2007 4:39 PM | Reply | Permalink
He was aptly given the appellation of Torture Yoo.
How many body guards does a guy like that need to protect him? We are scanning grandmas from Nebraska at the airports, and war criminals walk amongst us with no fear of the imagined multitudes of evil doers who supposedly threaten not just our air transport, but the republic itself.
August 18, 2007 7:17 PM | Reply | Permalink
As I read thru these posts again I noticed in your closing paragraph that you are making a classic western mistake that has played no small part in the ongoing tension between the middle east and the west. You make reference to European examples of police states. This desire in the west to use our history and our perspectives to measure those in the middle east is flawed. They are a very different people with their own unique history & perspective. Western comparisons will therefore often prove to be very wrong & in fact can increase the tension by rightfully being seen as arrogant & ignorant.
August 18, 2007 8:30 PM | Reply | Permalink
No 0's for you, no 0's for Champion either, not from me. Especially no 0's for you...you're civil and clear in your defense of him. But my point remains pretty much what it was. We have all the room in the world for catharsis on this website...including personal blogs. I just don't believe it helps much to take everything written, no matter about what, into invective which doesn't relate to the primary post.
In this instance I suspect I was prodded by the fact that here, for once, MJ was writing about something other than the Israel and Palestine conflict. I rather liked that, and I'd like to see him broaden his focus. Every time he writes on his most frequently touched subject, however, the thread quickly becomes a mud in the sandbox party, and we pretty much all know who starts playing. . .they rarely, if ever, follow the writings of any other regulars here. Had A.C. stuck to the big-nose stuff and the Amerika with a k stuff, I probably wouldn't have done more than shrug and move on. But I really wanted people to stick with reflecting on Iraq, and chose my post as my tool to try to have that happen.
I've tried to be as clear in my explanation as you were in yours. Thanks for yours. I appreciate the civility of it.
aMike
August 18, 2007 8:32 PM | Reply | Permalink
Back in the good old days we might be able to count on a straight-talking Marine to set straight a louse like this, but instead at Wolfy's retirement two years after the Vanity Fair interview we had General Peter Pace, soon to become the first Marine Chairman of the JCS.
The Pentagon transcript reads: ."
It's not easy to use the word "courage" three times in one sentence after already using it once, but Pace had about run out of superlatives for the architect of Operation Iraqi Fiasco. Now it's time for Perfect Pete to say good-bye. I'm sure he will be equally praised and get a nice medal.
August 18, 2007 8:44 PM | Reply | Permalink
Juan Cole (Jan."
August 18, 2007 8:58 PM | Reply | Permalink
One always tries to raise the tone. I think this time, from C to C#, was about it. I like the less technical... see Reference.com
Somehow I thought that of course all the posters here possessed reflective minds, and that Piaf's singing represents "emotions recollected in tranquility". MJ's post represented his version of tranquil...about three shades below 5.1 on the Richter Scale. <grinmode></grinmode>
(can I have the fine reduced to $6.99 with time off for good behavior?)
aMike
I am resisting with all my might punning on knowing one's arsis from one's feet. (I guess I failed, huh?)
August 18, 2007 9:30 PM | Reply | Permalink
It is not just the Yazidi, Valdron, and I know what you do in the Great White North, because once you spoke of roots, ideals and goals in this namespace when I was watching. You probably have insight I am incapable of, which is the reason for this link drop.
The links lead to data produced by Iraq minorities that are getting their butts kicked from Baghdad up to the North on the Nineveh Plains. How's your old testament fact retention? Try Jeremiah and Jonah; at least as I recall, which isn't what I desire and think it should be, especially when reaching back this far into the past, but it still feels solid.
The Assyrians and the Turkmen are being butchered in what not long ago was their little corner of Iraq. It's along the fault line upon which both Kerkuk and Mosul sit.
August 19, 2007 2:39 AM | Reply | Permalink
This string reflects more of my recent sentiments than any I've seen in recent times. I've been left a bit blue by watching a discussion among Bill Maher, PJ O'Rourke and Ben Affleck on the relevance of the netroots and the potential to bring about change, as well as a lot of the rightwing dreck that is out there, and the most cogent, poignant whine going in my head comes from something a friend sent me the link to. Google or search for George Carlin education on YouTube if anyone wonders what kind of statement can actually make it into the public awareness. But there is still the question about what one can do to take it down short of the kind of post-apocalyptic burrowing into a non-existent underground that would take one completely off the economic page all of us are forced to sign for our daily keep. It would be easier if I were alone, but others depend on the cog continuing to have the right number of teeth on it, with the right amount of space between, and turning at the same rpm as all the rest.
August 19, 2007 2:53 AM | Reply | Permalink
When Mgmax runs out of diversions to blame Clinton for Bush f**k ups, he'll start blaming FDR or Woodrow Wilson.
August 19, 2007 6:21 AM | Reply | Permalink
JohnW1141; by omission I conclude that you believe the effort of the Soviets and the other allies such as the U.S. in WW2 was worth it; if so then I conclude that at least we have a base from where we can potentially build a unity in a struggle by progressives and even conservatives against the sort of base reaction that was on display, as much on 9/11 as it was the other day in the mass terror bombings in Iraq’s north.
Valdron suspects the Kurdish peoples even though it is clear that the whole series of bombs were a coordinated attack and that Kurds were a major target. I suspect Al Qaeda (and it is not the case that an Al Qaeda cell always claims credit when it is them).
Al Qaeda is not a top to bottom fully functioning command and control organization. Nevertheless I have to concede that it may be just another variant of Sunni supremacist grouping that may even pass themselves off as nationalist that may be responsible. It matters not one fig. Those that did this are anti-proletarian and are the enemy of all progressives, they are the enemy of all modernity.
Yet that is not what the note that started this thread is all about! It is about holding the revolutionaries to account for liberating the Iraqi masses!
The U.S. is not always the enemy of the people of the world. Even Karl Marx once remarked during the civil war that the workers of the world were behind the stars and stripes. Was the civil war worth it?
We may have a wish that it were not required, but the overthrow of the slave system was the result, and it has produced, 150 years later and after much more struggle and inevitable suffering, a society where a black woman, Condi Rice, is described as the most powerful woman ever, and who could realistically be the next President. That is a worthy social achievement.
If we support the overthrow of oppressors then we can agree that there are quite a few oppressive regimes that the modern era is still dealing with: the Taliban, the Baath, Mugabe, the situation in Sudan and Somalia, and on and on.
Fortunately most countries will manage to overthrow their own ruling tyrants (like the Philippines did with the U.S. backed Marcos; or Indonesia from the U.S. backed Suharto; or Iran from the U.S. backed Shah; or Chile from the U.S. backed Pinochet etc) but sometimes it will be better for outside forces to begin the process. Outside forces can only ever begin the process because ultimately it remains up to the masses that live there to solidify the progress.
The U.S. post WW2 has mostly been the problem. Even when opposing the revisionist Soviet invasion of Afghanistan the U.S. did so from the stupid position of backing Al Qaeda etc!
However this U.S. ruling elite has grasped the historical lesson and changed course.
Yet people on this thread think themselves somehow progressive, but have no plan to fight the reactionary bombers who are trying with all their efforts to start a region-wide sectarian war. There is not a progressive bone in this failure to render assistance against such racist scum.
The Iraqi masses deserve support, as do the Palestinian people as Rice brings about a Palestinian state. All the tyrannies of the ME need to be undermined and the revolution sought by the peoples of the entire region supported.
But this thread just whines about how terrible it is to call it a swamp with mosquitoes (i.e., to use Noam Chomsky's term). Reflect on the Chinese revolution to grasp the time scale that we may be dealing with. You can bet your bottom dollar that it won't take as long to produce Iraq's equivalent of Condoleezza Rice.
Consider the forty years since the Zionists launched their now obviously failed war for greater Israel.
Whatever you do, stop whining and build unity with right-wingers against fascists. Look to the broad united front of WW2 for inspiration.
The enemy terrorize for a very good reason, and no progressive ought to play into their hands by undermining in any way the fight against such racist scum.
Whatever the history of how this war was started, it is, since the election process, irrelevant. The war is now clearly about defeating the most reactionary elements on earth.
As with the enemy of WW2 their viciousness will not save them and the masses will eventually prevail.
Patrick
August 19, 2007 8:59 AM | Reply | Permalink
Roots, ideals and goals? Well, I try not to come within a country mile of those. Perhaps you were thinking of someone better than me.
Thanks for the links.
August 19, 2007 9:25 AM | Reply | Permalink
Remember way back in 1999 when "W" was the Uniter. This Administration has divided us on every issue put before us. As this great post points out, the War is the worst issue of all. It is time for the people of this country to join together and simply put an end to it. Some still support the President and his rhetoric for War. Others believe that the War was wrong from the get go. But, now, all of us need to look at the results of the War as it is today. Somewhere close to a million people dead, or more. A country is reduced to desert rubble. Life will not be the same for this country for at least a generation. How many more must die for an argument? Your ideology, your politics, your personal beliefs are insignificant in the face of the reality of all the death, human damage and destruction. Stopping it all now is the only sane solution. A different path must be taken.
August 19, 2007 10:26 AM
"But few of us really do a goddam thing."Which is why John Conyers needs to start impeachment proceedings against Cheney, Bush, and Gonzo.Tom
August 19, 2007 3:31 PM
Oh dear God in heaven. You have the gall to talk about others having a problem with their way of thinking after typing stuff like this.
Tom
August 19, 2007 3:35 PM
The worst part is that these people do not even hold themselves accountable. LBJ and McNamara suffered pangs of, I don't know, guilt, shame, anger, regret. Something.
In Retrospect, Robert S. McNamara, 1996.
McNamara's 11 Lessons from Vietnam: [UNLEARNED Lessons, apparently].
August 19, 2007 3:56 PM
PseudoCyAnts,
I am glad you mentioned the Assyrians and Turkmen. You can also add Chaldeans and Armenians. Most Americans know little about the ethnic composition of Iraq. There are over a million Christians in Iraq. Many of them supported us. Many of them are refugees in Syria, Jordan, Turkey. We owe these people something. We should at least give them visas, since we destroyed their lives.
August 19, 2007 6:27 PM
Shame is a social phenomenon--we have to induce it through public hearings and (hopefully) impeachment.
August 20, 2007 6:25 AM
I have very little understanding of these minorities, and went looking for links a week or two ago, because it relates to a different issue I've been tracking and trying to understand: the Kurds and Turkey. I intentionally listed only non American based sites that had English translations in an effort to proffer immediate relevant information. I did not come across any Chaldean sites that fit this criteria, nor did I find any information that mentioned Armenians in Iraq.
Also, I did not post this for any religious reason; I claim no faith. Furthermore, I directly challenge the rectitude of anyone's Christian faith if they only care about this because Iraqi Christians are caught up in the middle of it. This is about atrocity and genocide; about the lies being fed to an acquiescent American people; about the destruction of an immeasurable amount of history from the very dawn of western civilisation that we may at this very moment be witnessing; about the responsibilities and blood debt that is now an inherent part of contemporary American citizenry. Each and every one of us IS RESPONSIBLE for the deeds of our unchained leviathan, because if we are not a government by the people, then America has ceased to be. NO ONE can claim absolution simply because they dissented. The dissent was anemic and wanting. ALL are blameworthy.
August 20, 2007 7:20 AM
None of the people responsible for initiating the crimes in Iraq will ever personally pay any price for what has happened and none of them will be held accountable in a court of law for the crimes they are responsible for.
Very sadly, however, the American people for the next generation or more will pay for these crimes because all of them were committed in their name. The majority of Muslims globally will hate and loathe America. The majority of people around the globe who opposed the idiotic and immoral invasion of Iraq will disdain America and may never trust us again. God only knows how high a price our children will have to pay for this folly, but it will certainly be high. The only mystery is in how many ways will they pay? Will it be in blood, money, a bankrupt and disabled economy? It's uncertain how future Americans will pay for these crimes, but it is certain that pay we will.
And more's the pity since it clearly didn't have to be that way. People will look back years from now and speculate as to why we did nothing when we knew what our government was doing was wrong. They will wonder why the same people who, as young men and women, spoke out against war, were too complacent to act on their beliefs as they got older. They will wonder why we sat comfortably in our homes and gave no thought at all to the terrible consequences this massive criminal enterprise held for our children and grandchildren.
I think this is a clear case of the "sins of the fathers..."
August 20, 2007 9:20 PM
"They will wonder why the same people who, as young men and women, spoke out against war, were too complacent to act on their beliefs as they got older"Sorry, plenty of us anti-Vietnam war protestors have been out busting our butts trying to stop this insanity since before it started. So qualify what you are saying. Stop using "we". It is accurate to say not enough people protested but there are plenty of us worldwide who have tried. Look at the numbers from the worldwide 2/15/2003 protest. It was the largest coordinated protest ever.Tom
August 21, 2007 5:09 AM
This is irresponsibly naive, and by naive, I am not referring to an endearing quality, but instead to an Arrogant Three Monkey World View.
"... it is clear that the whole series of bombs were a coordinated attack and that Kurds were a major target"
Do you honestly believe that all Kurds are bros with each other? The Yezidis practise a religion that strict Iraqi Muslims believe is a form of devil worship. There was recent infighting between Kurdish Yezidis and Kurdish Muslims.
As I posted earlier in this thread, this is all happening along the fault-line that is the yet to coalesce delineating border of the Kurdish controlled Northern Iraq. To most Americans, this is a fight between just three factions, The Sunni, The Shia, and The Kurds. In reality, all three of those factions are composed of smaller sub-factions, and there are many who are not part of the big three, who are caught up in the meat grinder, and have been previously ground up.
Then there is the PKK terrorist group, and anyone who claims they are not terrorist is either a bald faced liar, or an imbecile. Do you know what their dominant religion is? It's Revolutionary Marxism, and they've proved themselves devoutly faithful adherents to this ideology in the past, as they slaughtered Kurds in Southern Turkey who dared to dissent against the PKK. In veneration of Pol Pot, the PKK also has had a tendency to kill teachers.
The Former OSP planner and CPA Pentagon 'Political Officer', NeoConniving AEI fellow, Michael Rubin told Turkish Press representatives in Washington DC last year that Barzani was selling arms to the PKK.
Rubin exudes a horrific Ledeenesque odor. Why hasn't he been challenged by Conservatives in America for fomenting bitter dissent in our long-term ally, Turkey?
I am not attempting to whitewash Turkey's own past acts, and it does have its own share of blame for its previous mistreatment of Turkish Kurds, but the recent election showed that Turkey was willing to admit Kurdish representatives into parliament. Only time will tell if they choose to use this opening wisely, for the good of the people they represent, or if they attempt to politicise their new pedestal, mouthing Marxist rhetoric, which is certainly NOT representative of the predominantly Muslim Kurds who live in Southern Turkey.
The Kurds have greatly benefited in the past cycles of faux Iraqi Democratic process, touted so highly by the Bush Administration. The devastation of Fallujah disenfranchised almost the whole population in a major city for one election, which aided the Kurds. The Tel Afer offensive disenfranchised a significant number of Turkmen in a later election, to the benefit of the Kurds. There have been credible claims of voting improprieties in Kerkuk and Mosul, which again, aided the Kurds in the election process. It stretches credulity well past any likely probability in reality to claim that these were all just random events, which happened to break in the Kurds favor.
Was this enough for you? I've got plenty more, and you ain't no Pinball Wizard, so pull off your eyeshades, and take off your earmuffs, but please, go ahead and use the cork...
August 21, 2007 7:20 AM
Makes one wonder. . . Is Ahnold, a.k.a. Conan the Republican, eligible for President of the European Commission? Brussels might devour that hunk of manly man.
aMike
August 21, 2007 8:22 AM
Tom,
agreed.
The Dems seem to be playing a delaying game, scared sh*tless of doing anything 'controversial' that may cost them the White House or Congressional seats in 08.
Leahy's deadline for obeying the subpoena came and went, Leahy gave them another deadline.
August 21, 2007 8:50 AM
I am sure there are persons within the DoD, who would be pleased to see Wolfowitz get what he should have coming. This is not the reason that this document still remains on their servers though.
Government produced public web product is generally considered to fall immediately into the public domain. This was a public post by a high-ranking pentagon official political appointee. If someone started to play memory hole with these posts, it would be noted, and the response would be strident. There would be no place to hide, as some of the most vocal opposition would be from within government bureaucracy itself, NARA and The Library of Congress.
Others would notice and get into the act. Most obvious of these would be The Internet Archive, The Federation of American Scientists' Secrecy News blog, authored by Steven Aftergood, and The EFF. Then Congress would be likely to get into the game for political reasons.
An attempt to hide data like this by deleting it would be counter-productive in that it would draw more attention to the document deleted than just leaving it with the rest of the documents. Safety in numbers.
August 21, 2007 9:13 AM
This is really good to know. Thanks for this information. Is there a specialized search engine (similar to THOMAS) in Congress where one can track down this information? It would be a pretty useful thing to have, MHO. I know that individual agency websites are searchable, but wouldn't it be nice to be able to catch them all in one fell swoop, getting the materials alone without commentary on them as one gets using generic search engines...even ones with "advanced" search capabilities.
aMike
August 21, 2007 9:31 AM
This may also be something of an oversimplification. It assumes that Vietnamese Society (and, for that matter, our own) are static, embedded in amber sorts of things. If practitioners of history like myself know anything, it is that no depiction of any group remains viable for very long. We compound this by having shorter and shorter historical attention spans.
A Borg is a Borg is a Borg, I guess. But the same isn't true about the Vietnamese people any more than it is true about us. A case in point: Last semester I had two undergraduate Vietnamese students in my class. A generation and a half ago, persons of my generation and their parents/grandparents were engaged in bitter combat. I would never have predicted I'd be teaching young Vietnamese men in 2006 when I began my career in 1972. Prowl around Vietabroader, and you'll see what I mean. I should add that students from my institution are studying there, as well.
The current number of young Vietnamese men and women studying here is in excess of 4,500, and it is growing by double digits.
(I don't know how many Borgs are studying here...could be more, could be less) <grin></grin>
aMike
August 21, 2007 9:55 AM
The old saying: 'truth is stranger than fiction' is relevant here. As another example of feeding the snake its own tail, I offer a paragraph from Mr. Bush's State of The Union Address, January 20, 2004:
"Weapons of mass destruction-related program activities" is meaningless, because of its breadth.
This was the first time in a long while that an American Conservative had stated such strident concern for the UN's image. At least since before the Reagan Administration. In fact, most Conservatives' complaints about the UN are centered around the concept that its proclamations are naught but "empty threats". Also of note here is the hard revisionist spin towards justifying the War Upon Iraq as being a war of "liberation". As I pointed out previously, this was refuted by Wolfowitz in June, 2003. An investigation into what Mr. Bush had said prior to the 2004 SOTU Address is illuminative. On the eve of the Iraq Invasion, March 17, 2003, Mr. Bush gave a televised address from the White House.
The whole address consisted of 1824 words. In it 'liberation' was mentioned once, word #973. Liberty was spoken three times: word #1686, #1722, and #1747. In all four of these instances, the context was not regarding the causes justifying the invasion of Iraq.
In his remarks, Mr. Bush stated:
By June 9, 2003 Mr. Bush was not stating it in unequivocal terms, but he was still saying that Iraq had WMDs when it was invaded earlier that year. This was also before the emergence of Al Zarqawi as a primary foe, and Bush was all too happy to lean on him for the War's justification, and had begun to push Iraqi freedom as a justifiable cause for war:
At his appearance on NBC's Meet the Press, February 8, 2004, Mr. Bush had completely flip-flopped on the surety of intelligence:
Saddam "would have if he could have" is not a righteous justification for war, even by NeoConnivers' standards; hence the revisionary claim of Iraqi Liberation.
August 21, 2007 10:38 AM
I learned to have affection for Vietnamese in my tour as a conscript chopper doc, and whenever I'm up around San Jose, I usually visit a business or two which are owned and predominantly frequented by Vietnamese/Americans. On the other hand, the North was much different from the South. It is impossible for me to accurately express the alien-like feelings I'd got when realising just how cheaply they valued an individual's life in the scheme of things. More often than not, American GIs preferred to go waist deep into a fetid rice paddy than to step on a human carcass, even one that while living, had been an enemy. South Vietnamese felt much the same way towards their own or towards Allies, but not towards North Vietnamese. The NVA, and the Viet Cong, even more so, were willing to waste human life at levels unspeakably foul for simple tactical advantages.
I also believe that much of this attitude is/was related to a fundamentalist variant of Maoist communism that reached down to its blackened core to spawn the Khmer Rouge. There is nothing of value to be had from the boot of a totalitarian, be it left or right sided. People do not believe me when I tell them, but I am walkabout in the Dreamtime, and I drink from the very fount that nourishes its force, believing that All Humans are born equally in possession of liberty at birth, endowed by that which they perceive as the Creative with natural rights, and that three of these rights are life, liberty and a freedom to pursue their own happiness; and that whenever a State's actions become antithetical to these ends, it is the right, as well as the duty of the people to muzzle their leviathan, using whatever force is necessary to achieve this end. Now couple that with "against all enemies foreign or domestic". To say that I am presently prone to slip into a melancholic state of mind, would be to greatly understate the truth.
August 21, 2007 11:17 AM
Eloquently said.
However, given the venal inadequacy of the Diems as leaders, or Ky, it was inevitable that the stronger North would prevail. And given that the North was the one that shed colonial fetters more thoroughly, they would be the more vigorous and unified. And given that Ho Chi Minh didn't get anywhere at and after Versailles, it was inevitable that he would look elsewhere than western states for allies in the search for independence. Communism was the viable alternative to unhelpful democracy. The example of Russia bootstrapping itself into modern industrial capability was persuasive, I guess.
A sci-fi story I read had a future Earth with some pretty vicious predator animals around. The explanation in the story was that if a predator goes extinct, the niche gets filled with something more effective. A reason to be more flexible about indigenous politics in evolving societies.
August 21, 2007 11:46 AM
The Government sites are not set up in a manner that allows you to easily search them all in one swoop. Clusty has a specialised government search, which I haven't really taken out for performance testing though. You may find it to be usable.
There are methods by which you can directly access content stored by the Internet Archive. If you use Firefox as your default browser, it is very easy. If you use IE7, then you need to click through warnings to implement it, but it still works. I'll try to get something posted within a namespace I exert personal control over soon that can show you how to implement the function in your browser, but no promises. I have an open email channel accessible through my member profile stored here at TPM Cafe. If you desire the data soonest, it's probably the best method to achieve it.
If you already are familiar with the concept called 'bookmarklets' for Firefox, and 'favelets' for IE, then it is a breeze. Let me know, because if you are aware of it, I may be able to just put it in a post. I use it often. It is an excellent research tool that cannot be properly compared to THOMAS, which I also use often, as well as the GPO Access site, which has search methods that are arcane.
As for accessing Aftergood's Secrecy blog, I use the provided Atom news feed, and my present preferred Newsreader Program to access it. Just being kept up to date on the newest CRS Reports that FAS is serving makes it a worthwhile read.
August 21, 2007 11:46 AM | Reply | Permalink
moat
August 21, 2007 3:20 PM
I couldn't agree more with you, but when I say "we" I mean everyone and certainly not myself or we who have protested the war. I have attended every protest that has been held where I live (and some I have traveled to get to) including those leading up to the illegal, immoral invasion of Iraq, but simply because some of us took part in the largest one-time protest ever doesn't mean a whole lot. I have been there on cold days and hot days, dry and wet days when many others have chosen to stay home. The point of protest is that it must be sustained over time. One of the reasons (not the only reason) that the protests against the war have been so effectively marginalized despite the very large numbers that have been involved in a couple of them is because they were not sustained. If large protests continued, that would be very different than what we have seen occur since just prior to the invasion.
While millions of us have done something to publicly protest the war, there are many millions more who have done nothing. If we (collectively) were willing to do what is necessary to put real pressure on the politicians to end the war we would be doing a great deal more protesting on a far more frequent schedule.
Many, particularly blogophiles like to say protests don't mean anything or don't work, but they are wrong IMHO. History shows quite clearly that public protests are very, very effective and not just in the 1960's. I believe that is just a convenient out for those who would rather note their opposition on a blog or sign an e-mail circulated petition.
When protests are widespread and sustained they put pressure on elected and appointed officials in a very unique and effective way. A way, I might add, that all the blogging and letter writing and e-mail sending in the world cannot do. But that's just my opinion.
I guess what I'm trying to say is that we need to have more world wide coordinated protests, but especially in the US. Frankly, I am surprised at how willing so many of us are to simply accept the situation while registering our disapproval but are unwilling to really go beyond that point.
August 21, 2007 3:28 PM
We are pleased to announce the April updates of HDInsight Tools for IntelliJ & Eclipse. This is a quality milestone: we focused primarily on refactoring components and fixing bugs. We also added Azure Data Lake Store support and Eclipse local emulator support in this release. The HDInsight Tools for IntelliJ & Eclipse serve the open source community and are of interest to HDInsight Spark developers. The tools run smoothly on Linux, Mac, and Windows.
Summary of key updates
Azure Data Lake Store support
The HDInsight Visual Studio, Eclipse, and IntelliJ plugins now support Azure Data Lake Store (ADLS). Users can now view ADLS entities in the service explorer, use ADLS namespaces/paths while authoring, and submit Hive/Spark jobs that read from and write to ADLS on an HDInsight cluster.
To use Azure Data Lake Store, users first need to create an Azure HDInsight cluster with Data Lake Store as storage. Follow the instructions to Create an HDInsight cluster with Data Lake Store using the Azure Portal.
As shown below, ADLS entities can be viewed in the service explorer.
- By clicking “Explorer” above, users can explore data stored in ADLS, as shown below:
- Users can read/write ADLS data in their Hive/Spark jobs, as shown below.
- If Data Lake Store is the primary storage for the cluster, use adl:///. This is the root of the cluster storage in Azure Data Lake. This may translate to a path of /clusters/CLUSTERNAME in the Data Lake Store account.
- If Data Lake Store is additional storage for the cluster, use adl://DATALAKEACCOUNT.azuredatalakestore.net/. The URI specifies the Data Lake Store account the data is written to and data is written starting at the root of the Data Lake Store.
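To make the two URI conventions above concrete, here is a small illustrative helper (plain Ruby, not part of the tooling; the account name in the example is hypothetical) that builds the adl:// path for each case:

```ruby
# Builds an ADLS URI following the two conventions described above.
# With no account, the path is rooted at the cluster's primary storage
# (adl:///); with an account, the Data Lake Store account is spelled out.
def adls_uri(path, account: nil)
  clean = path.sub(%r{\A/}, '') # drop any leading slash
  if account.nil?
    "adl:///#{clean}"
  else
    "adl://#{account}.azuredatalakestore.net/#{clean}"
  end
end

puts adls_uri('example/data/sample.log')
# adl:///example/data/sample.log
puts adls_uri('/example/data/sample.log', account: 'mydatalake')
# adl://mydatalake.azuredatalakestore.net/example/data/sample.log
```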
Learn how to Use HDInsight Spark cluster to analyze data in Data Lake Store.
Learn how to Use Azure Data Lake Store with Apache Storm with HDInsight.
Local emulator for Eclipse plugin
The local emulator was previously supported in the IntelliJ plugin. It is now also supported in the Eclipse plugin, with functionality and user experience similar to the IntelliJ local emulator.
Get more details about local emulator support.
Quality improvement
The major improvements are code refactoring and telemetry enhancements. More than forty bugs around job authoring, submission, and the job view were fixed to improve the quality of the tools in this release.
Installation
If you have HDInsight Tools for Visual Studio/Eclipse/IntelliJ installed already, the new bits can be updated directly in the IDE. Otherwise, please refer to the pages below to download the latest bits or distribute the information to customers:
Upcoming releases
The following features are planned for upcoming release:
- Debuggability: Remote debugging support for Spark application
- Monitoring: Improve Spark application view, job view and job graph
- Usability: Improve installation experience; Integrate into IntelliJ run menu
- Enable Mooncake support
Feedback
We look forward to your comments and feedback. If there is any feature request, customer ask, or suggestion, please do email us at hdivstool@microsoft.com. For bug submission, please submit using the template provided. | https://azure.microsoft.com/da-dk/blog/hdinsight-tools-for-intellij-eclipse-april-updates/ | CC-MAIN-2019-04 | refinedweb | 499 | 53.21 |
Implementing Heroku Review Apps with Rails
Heroku’s Review Apps are an important part of shipping code at Opendoor. Gone are the days of “works on my machine” — instead, code reviewers, PMs, and anyone looking at your work gets a working link to a server running your code. Review apps also let us test infrastructure tweaks (like build tooling) that can be harder to iterate in local environments.
Before you rush off to enable them for your teams, we found that a few subtle implementation details can make or break adoption of review apps. They come with their own tradeoffs that one organization might see differently than we have — we’ve done our best to document them below.
At the end of the day, we chose to think of review apps as “staging, but running your code,” which not only makes implementation simple but helps engineers have an intuition for the limitations of review apps. Technically this means that at runtime Rails.env.staging? is true for review apps, and all review apps share the same database and services as our normal staging server.
Rails Environment Configuration
You might already have a central staging server, which will lead you to wonder “Should ‘review’ be its own Rails environment, distinct from staging?” If you choose to have a distinct environment, mutually exclusive with staging, there are a few downstream consequences.
Thinking of review apps as a different environment adds another runtime permutation of your code that not everyone will remember as they add new features or during code review. This is why we went with having our review apps “quack” like staging, for which we already had well-established practices.
For example, your code might also have runtime logic that forks based on Rails.env.staging?. Depending on the size of your codebase when first enabling Review Apps, you might have to audit a lot of code to figure out where to inject new instances of Rails.env.review?.
Additionally, all of your YAML configuration files (think secrets.yml, database.yml) would now need a distinct section for review. You could use YAML-style inheritance to share configuration and avoid duplication between environments, but we found that the YAMLs became increasingly hard to grok once we went beyond the usual development, staging, and production environments.
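For illustration, had we added a separate review environment, the YAML-style inheritance in a file like database.yml would have looked something like this (database names are hypothetical), with each extra environment compounding the sprawl:

```yaml
# Hypothetical database.yml using YAML anchors for shared settings.
default: &default
  adapter: postgresql
  pool: 5

development:
  <<: *default
  database: myapp_development

staging:
  <<: *default
  database: myapp_staging

review:
  <<: *default
  # In our setup, review apps would still share the staging database.
  database: myapp_staging
```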
In the very few cases we did need to fork the logic at runtime differently between staging and review apps, we used code like this:
class DeployInfoService
  # Review apps get Heroku app names like "myapp-pr-123", so the
  # "-pr-" marker distinguishes them from the regular staging app.
  def self.review_app?
    ENV['HEROKU_APP_NAME'].to_s.include?('-pr-')
  end
end
Heroku Configuration
Again, you might have a staging environment set up as a Heroku app. If you want to think of staging and review apps as distinct entities, it would mean creating a new app for your Heroku pipeline from scratch (or cloning it from your staging app). Much like the Rails-level environment and code configuration, there's a strong chance your Heroku settings for staging and review would drift apart.
The other Heroku-level consideration for setting up review apps is how you decide to spin up plugins and services for each app. Do you spin up a new database for every PR, or do you share one? What about services like Elasticsearch or Redis?
Dollar cost is one consideration — it can be expensive to run many more instances of these services all the time. And then there's the time it takes to maintain whatever mechanism you use to load data into these services — do you copy it from staging or some other persistent source? How long in minutes would that process take? Or would you write some scripts that generate data?
Ultimately we chose to point our review apps to a single set of shared persistent data sources. They all look at the same primary databases, which are available instantly upon review app creation — no need to wait for data to copy to your app. We have yet to encounter a case where this has been an issue (i.e. someone deleting another person’s staging data unexpectedly) — additionally, many of the data sources are rehydrated on a regular basis, so in the event of an issue it wouldn’t be a permanent problem. We do spin up a new Redis instance for each review app, primarily so that our asynchronous jobs don’t collide.
Your team might make different tradeoffs — it might be easier for you to maintain seed data scripts, or the cost mechanisms might work in your favor. YMMV.
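For context, Heroku review apps are typically configured through an app.json manifest at the repo root; a rough sketch of the per-app Redis plus shared-staging-database approach might look like this (the app name and env var description are hypothetical, and details of the manifest vary by setup):

```json
{
  "name": "myapp",
  "addons": ["heroku-redis"],
  "env": {
    "DATABASE_URL": {
      "description": "Shared staging database, carried over from the pipeline's staging app",
      "required": true
    }
  }
}
```

Each review app would then provision its own Redis add-on while pointing at the shared staging Postgres.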
Domain Complexities
Each review app is its own Heroku app, so they all end up with their own domains like my-review-app-*.herokuapp.com. Depending on your product, this might be a no-op or have some frustrating side-effects.
If you use OAuth to authenticate to external services (Google, Facebook, etc), you’ll probably need to think of an alternative login path for review apps. Most OAuth services enforce a whitelist of allowed domains and do not support wildcards.
In our case we came up with a workaround for the one feature that requires external authentication, but a more scalable solution might be to have a proxy service between your review apps and the OAuth domain. Using the OAuth state parameter to store your originating review app URL and route accordingly, your proxy service could transparently pass OAuth information back and forth.
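A minimal sketch of the state-parameter idea, assuming the originating review-app URL fits in the OAuth state (a real implementation should also sign the state and validate the decoded host against an allowlist):

```ruby
require 'json'
require 'base64'
require 'securerandom'

# Pack the originating review-app callback URL (plus a nonce) into
# the OAuth state parameter before redirecting to the whitelisted
# proxy domain.
def encode_state(return_url)
  payload = { 'return_url' => return_url, 'nonce' => SecureRandom.hex(8) }
  Base64.urlsafe_encode64(JSON.generate(payload))
end

# On the proxy's OAuth callback, unpack the state to learn which
# review app the response should be forwarded to.
def decode_state(state)
  JSON.parse(Base64.urlsafe_decode64(state))
end

state = encode_state('https://myapp-pr-123.herokuapp.com/auth/callback')
puts decode_state(state)['return_url']
# https://myapp-pr-123.herokuapp.com/auth/callback
```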
Another domain gotcha is around browser cross-origin policies. You might have resources like fonts or iframes that are configured only to be retrievable by your staging or production domains, but your new cluster of review apps presents a complication. In the case of static resources, S3’s policy syntax does allow for wildcards and makes it relatively simple to support review apps.
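As an illustration of the wildcard approach, an S3 CORS configuration covering both a staging domain and the review-app domains might look roughly like this (the domain names are hypothetical; S3 allows a single * wildcard per allowed origin):

```json
[
  {
    "AllowedOrigins": [
      "https://staging.example.com",
      "https://myapp-pr-*.herokuapp.com"
    ],
    "AllowedMethods": ["GET", "HEAD"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```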
Those are the two classes of domain problems we encountered. It may be hard to predict the failures that will happen due to domain settings, so you might have to do some exploratory debugging the first time you setup a review app.
Worth It?
Despite some of the above subtleties, we think review apps are absolutely a worthwhile investment and have paid the initial setup cost back in spades. It took a week or two to iron out the kinks and get feedback from teams on what features weren’t working as expected, but since then they have been self-driving.
We’re looking for engineers of all backgrounds: it doesn’t matter what languages you work with now, we’re sure you’ll ramp up fast.
Find out more about Opendoor jobs on StackShare or on our careers site. | https://medium.com/opendoor-labs/implementing-heroku-review-apps-with-rails-ecc655fd3fd | CC-MAIN-2021-17 | refinedweb | 1,095 | 58.01 |
Sencha Touch + MVC?
After downloading and opening up the Kiva app, I see the move to the MVC pattern going on in Touch now. I think that is a great move but my question is, is this a viable option NOW that we should move towards if we are starting a new dev project...or is this too bleeding edge in terms of support, framework changes, documentation etc.? For instance, I am looking at the Ext.dispatch to see what the "instance" property is all about, but cannot find anything about Ext.dispatch in the API docs..unless I am missing something.
Again, would like to make this move, but is it here to stay and how long before API docs are updated to accommodate these new features? I know the team is busy though...
Thanks
I'm curious about the same thing. The tweet demo is the only place I have found anything about the MVC structure.
I have made the move and it's pretty cool. Not fully integrating models into the picture yet, but it's very clean and it's also very cool to have a client-side and server-side both implementing MVC type patterns. The ability to "dispatch" to central processing controllers and pass parameter objects is worth it alone, but I am sure that I am just scratching the surface. I would recommend it...
It will be great to have a blog/article on best practices while using MVC on client layer. I didn't know of Ext.regController until I looked into Kiva code.
Either a Sencha developer or someone who has used it should do a brief post on implementing MVC based design.
We're working on getting more information up on this but still catching up after the conference earlier this week. Expect more information within the next week on how to use this stuff.
Ext JS Senior Software Architect
I've been playing around with the MVC platform and it looks very promising.
I am eagerly looking forward to some examples, as I feel like I have only just touched the surface of this cool new feature and I would really like to learn more about it. I have built a Sencha Touch app using my own implementation of the MVC pattern but it's somewhat limited, so I would gladly port to a more sophisticated MVC platform.
I've been play around with Sencha Touch recently, and I made a Contacts Manager demo using MVC pattern.
I hope my demo helps:
Thanks for posting this. I'm dying for a very simple Sencha Touch MVC example. I'll look at yours and hopefully gain something.
I've got a grasp on using and positioning various panels, like in most of the example apps. But when I opened up the Kiva and Twitter apps, I was utterly lost.
Ext.Application instance will create the namespace itself:
Similar Threads
Using Sencha Touch with ASP .Net MVCBy atulbahl in forum Sencha Touch 1.x: DiscussionReplies: 8Last Post: 24 Mar 2011, 12:33 PM
Sencha Touch on iPhone v1 / iPod touch v1 ?By palnap in forum Sencha Touch 1.x: DiscussionReplies: 4Last Post: 28 Oct 2010, 5:30 PM | https://www.sencha.com/forum/showthread.php?115688-Sencha-Touch-MVC | CC-MAIN-2015-27 | refinedweb | 540 | 73.88 |
J.
Table of Contents)
Jekyll Benefits
Simple
Jekyll is really flexible. You can build templates and write content in markdown, textile, liquid, or even just plain HTML/CSS. It really doesn't matter and is up to your preference because Jekyll will intelligently build your site based on all your files.
Static
The entire website gets compiled into a static website. This means you can host it in almost any server environment with nearly zero overhead. You can also host it for free on Github Pages, or host it on a file storage service like Amazon S3. Finally, since it's static, if you put any sort of CDN with HTML caching (CloudFlare, CloudFront, etc.) in front of it, you'll sleep well at night knowing you can very cheaply handle an almost unlimited amount of traffic without downtime.
Blog-aware
Jekyll has all the benefits of a CMS, you just need to know how to use it. It's completely blog-aware with permalinks, categories, pages, posts, custom layouts, and even custom content types called Collections. On top of this, there's themes, plugins, and all sorts of extras.
System Requirements
Now that we have done a very healthy intro into Jekyll, let's get started!
Jekyll is a command-line executable built with Ruby and has a few commands we need to run from time to time. If you're not a Ruby developer, there's a few things we need to do to setup our environment for this kind of development.
We'll be doing this tutorial for Mac users, but it's very similar to Windows (Window users see this resource first) and Linux users.
The first thing you want to do is make sure Xcode Command Line Tools is installed.
Run this command from your terminal to start the download prompt:
xcode-select --install
The next thing you need to do is install Ruby. Your system might already have this, but we'll be getting the latest version. First install Homebrew:
ruby -e "$(curl -fsSL)"
If you already have Homebrew install, first update it with:
brew update
Next, install Ruby by following these steps:.2 rbenv global 2.2.2 ruby -v
If you're using oh my zsh, make sure you add this to the bottom of your
~/.zshrc file or you still won't be using the most current version of Ruby we just installed:
export PATH="$HOME/.rbenv/bin:$PATH" eval "$(rbenv init -)"
At this point, you should have the latest version of Ruby installed. You'll also notice RubyGems (Ruby's package manager) was installed.
You can see that with:
gem -v
You can update RubyGems with:
sudo gem update --system
For whatever reason if RubyGem's isn't installed, follow the manual install instructions here.
Finally, it's a good idea to install Bundler even though we won't be using it for this tutorial. Bundler allows a way to make sure that different gems have matching versions across different systems. This is especially useful while collaborating with teams.
It's really similar to how Composer or how NPM manages dependencies:
sudo gem install bundler
The last requirement is to make sure you have NodeJS installed. If you don't have it installed yet, just run:
brew install node
This might seem like a lot, but is a pretty standard setup for a lot of web developers. Plus, now your machine is Ruby ready!
Installation
Now that you have all your system requirements setup, installing Jekyll is as easy as:
gem install jekyll
If you run into permission issues, just do:
sudo gem install jekyll
After Jekyll is done installing, you should be able to type anywhere from the command-line:
jekyll -v
This will prompt you with the Jekyll version installed and means Jekyll is successfully installed.
Getting Started
Before we kick-off on building our blog, I quickly wanted to note how awesome the Jekyll documentation is and recommend checking it out.
So let's spin-up our first blog. From the command-line, navigate you where you'd like this project to be and type the following command:
cd wherever/you/want/this/project/on/your/computer jekyll new my-blog
The
new command here will create an install of Jekyll with the default theme. Alternatively, if you'd rather start with our theme, run the following command instead (Note: We make a ton of comparisons in this tutorial to the default theme setup in this tutorial. It might be a good idea to follow along that way first.):
git clone my-blog
Let's jump into our new
my-blog directory and run the
serve command:
cd my-blog jekyll serve
Jekyll comes with a built-in development server. This command start this server on your machine and starts watching your files for changes similar to Grunt or Gulp. This is awesome and makes development super easy with little overhead on your part.
Now navigate to in your browser to see the Jekyll install we just setup.
Commands
Let's quickly explain some commands and how Jekyll works a bit more. Since there's no database, you're going to be creating new pages, posts, and templates in
markdown,
html,
textile, or
liquid files and then using Jekyll to compile (or
build) them together into a website. Before
building the site, it actually doesn't exist and is just a bunch of template files.
"serve", or just "s"
The serve command builds your site, boots up a development server, and starts watching files for changes by default. Any time a change happens, it will
build your site automatically (see below).
To run this, just do:
jekyll build
Stopping the server is as easy as:
ctrl-c
"build", or just "b"
By default, whenever you
build your website, it will be generated into a folder called
_site.
You can generate your static site by running:
jekyll build
You can also change the destination with:
jekyll build --destination <destination>
Lastly, you can also add a
--watch flag to rebuild on changes:
jekyll build --watch
Your probably wondering why you wouldn't just use the
serve command. This is useful to know for serveral reasons:
- You don't always want the local server.
- It's best practice to
.gitignoreyour
_sitefolder. So you may have to just compile your site on the fly somewhere.
new
The
new command will create a new Jekyll site scaffold in PATH (aka, the current location) with the default theme. We did this already while getting started.
Here it is again:
jekyll new my-new-static-super-cool-blog-about-cats-and-dogs
You'll only have to do this when starting a new project from scratch. If you're using a theme or existing site, you won't even touch this.
Folder and Directory Overview
So we created our first project and are now familiar with the commands a bit. Let's review the folder structure that was created with the
new command.
It's important to learn these pieces well since this is essentially the core framework of Jekyll, its templating, configuration, and where content generation is done.
We're just going to do a brief overview of these and jump into all of them in detail later in the tutorial.
You can also reference the official Jekyll docs on Directory Structure.
_config.yml
If you're starting a new project or cloning down an existing one, this is usually the first file you'll want to take a peak at. This file hosts global configurations for your entire site in YAML format.
These configurations are defaults used by the
jekyll executable (such as a destination) and can also be retrieved in templates or content by doing:
{{ site.variable_name }}
We'll cover this in a lot more detail in a bit. Here's the official resource on Jekyll config if you want to quickly review.
_layouts
We'll cover how layouts work later too, but this directory is where you will put your templates. Templates are the HTML that wrap posts and other types of content like pages.
_includes
This folder is where you'll put reusable bits of code for your templates. This is sometimes also called "partials" or "slices". We'll cover how to use these in the templating section later too.
This folder contains all your posts in a format of your choosing. Since there's no database, each post is a separate file formatted like so:
YEAR-MONTH-DAY-this-is-my-title.MARKUP
_drafts
You'll notice this folder actually isn't there if your using the default theme! You can create this empty folder now, but this is just where you will store unpublished posts.
_plugins
This also doesn't exist yet with the default theme! You can add this in case you want to add plugins later.
about.md
The default theme comes with this page in the root directory. This is slightly annoying for organizational purposes. In the Scotch Theme though, we moved this to it's own folder
_pages. We'll cover that later.
index.html
This is your blog's homepage. So long that this file has a YAML Front Matter section (which, again, we'll cover), it will be transformed by Jekyll. The same will happen for any other
.html,
.markdown,
.md, or
.textile file in your site’s root directory or directories not listed above.
_site
This is your generated static website. Everytime your site is built or generated, this folder is "cleaned" and rebuilt from scratch. So never touch this and just know that it exists solely to host the output of your static site.
_data
This is where you'll host things like reusable data, variables, or more. We'll make extensive use of this folder. Data can be in
YAML,
JSON, or a
CSV.
_sass
Jekyll comes Sass-ready. The default theme doesn't use Bootstrap, but you can compare it to the Scotch Theme on how we integrated Bootstrap 3 with it.
Any Other Files/Folders
All other files and folders automatically get copied over to the static generated site. So if you create a folder called
img, it will be copied over to the static site. This makes referencing images easy.
You'll notice that with the Scotch Theme, we created a
js and
img folders since the default theme doesn't have these out-of-the-box.
Configuration
As mentioned above, your site's configuration is done in
_config.yml. The values set here are shared to the
jekyll command from the command-line.
You'll also notice in the default theme that there's some settings in the example generated such as
twitter_username, and
github_username. Some people use this for declaring site-wide global variables since you can retreive them in templates like so:
{{ site.variable_name }}
Although you can do this and the default theme does this, I actually recommend using Data Files for anything custom instead.
When developing, Jekyll will not watch this file for changes. Any changes only happen during a brand-new site build - which is midly frustrating when moving fast.
That's why it makes sense to limit this file to your build config only. So what we did with the Scotch Theme was actually delete everything and add every single default value for quick reference and tweaking instead.
This is a full list of all the defaults:
# Where things are source: . destination: ./_site plugins: ./_plugins layouts: ./_layouts data_source: ./_data collections: null # Handling Reading safe: false include: [".htaccess"] exclude: [] keep_files: [".git", ".svn"] encoding: "utf-8" markdown_ext: "markdown,mkdown,mkdn,mkd,md" # Filtering Content show_drafts: null limit_posts: 0 future: true unpublished: false # Plugins whitelist: [] gems: [] # Conversion markdown: kramdown highlighter: rouge lsi: false excerpt_separator: "\n\n" # Serving detach: false port: 4000 host: 127.0.0.1 baseurl: "" # does not include hostname # Outputting permalink: date paginate_path: /page:num timezone: null quiet: false defaults: [] # Markdown Processors rdiscount: extensions: [] redcarpet: extensions: [] kramdown: auto_ids: true footnote_nr: 1 entity_output: as_char toc_levels: 1..6 smart_quotes: lsquo,rsquo,ldquo,rdquo enable_coderay: false coderay: coderay_wrap: div coderay_line_numbers: inline coderay_line_number_start: 1 coderay_tab_width: 4 coderay_bold_every: 10 coderay_css: style
If you'd like to read more about configurations, check the official docs on it here.
Templating
Templating in Jekyll is amazingly simple. If you're familiar with modern templating systems, it will be a breeze to learn.
Templating is basically broken down into two parts: Front Matter and Liquid Templates
Front Matter
Front Matter is YAML located at the top of your files for specifying page or template specific variables. This is where the beauty and power of Jekyll comes from.
An example of Front Matter on a page would be:
--- layout: page title: About permalink: /about/ --- # {{ page.title }}. ## Heading 2.
On a site build, Jekyll will parse this information at the top, generate a page accessible at the URI "/about", and make sure it uses the layout that is named "page".
And, as the example above shows, you can also access front-matter variables with liquid by doing:
{{ page.variable_name }}
Front Matter variables can override defaults or be totally custom. For a full list of the defaults, reference the official docs here.
Liquid Templates
Jekyll uses the very awesome Liquid Templating Language by Shopify. It's easy to learn, secure, and extremely extensible.
The fastest way to get aquanted with Liquid is to read this resource: Liquid for Designers. It covers everything and more. We'll cover the basics here too though very quickly.
Includes
These go in a folder called
_includes. Here's the syntax:
{% include my-include.html %}
Echoing or Printing
{{ variable_name }}
Tag Markup (doesn't print)
{% stuff goes here %}
Filters
{{ 'i am now uppercase'|upcase }}
Loops
{% for post in posts %} My title is {{ post.title }} {% endfor %}
Conditionals
{% if variable_name %} variable_name exists {% else %} variable_name doesn't exist {% endif %}
Creating Pages
Creating pages with Jekyll is as easy as creating a new file. By default, you can just create the file in your root directory, but we'll be organizing our pages in their own folder.
To do that, all you need to do is add this to your
_config.yml. Remember to reboot your local environment afterwards:
include: ['_pages']
Then, just create a file that is either
.html,
.markdown,
.md, or
.textile and add your front matter. Your front matter can be any variables you want, but you need to pick a
layout,
title, and
permalink at minimum.
Here's an example:
--- layout: inner title: About permalink: !
Creating Posts
Creating posts are equally easy as creating pages. The only difference is you need to associate a date or timestamp with them and they go in their own folder:
Here's how you should create your file:
YYYY-MM-DD-my-title-is-called-this.md
This is automatically parsed by Jekyll and creates default title and date variables. You can override this in your front-matter though. Here's an example of a post's front-matter:
--- layout: inner title: 'My First Post on Jekyll' date: 2015-08-31 13:26:34 categories: blog development tags: cats dogs code custom_var: 'meow meow me!
Check out all the post variables and how to retreive them in your templates here.
Looping through Posts
There's essentially two ways to loop through posts:
- Without Pagination
- With Pagination
Here' how to do it without pagination:
{% for post in site.posts %} <h2>post.title</h2> <div class="content"> {{ post.content }} </div> {% endfor %}
And here's how to do it with pagination:
{% for post in paginator.posts %} {% include tile.html %} {% endfor %}
Collections
This article won't cover collections, but imagine collections as a custom content type. Not everything is a post or a page, so that's where these come in handy.
In WordPress, this would be the equivalent of a "custom post type". Or, in ExpressionEngine or other CMS's, a custom "channel".
You can read about setting them up here.
Data files
"Data files" are collections of pure, raw, and static data. Think of these as variables or groups of variables. You can have data files be in
.yml,
.yaml,
.json, or even a
.csv.
I personally prefer to put everything custom in here and not in my
_config.yml file. I separate custom variables into data files because they're "watched" by Jekyll during development. Variables in
_config.yml are set when the site is built - and that's it.
Some good example use cases:
- Site navigations
- Global variables that are site-wide
- Misc. footer stuff
- Google Analytics tracking code
- Etc...
You can have as many data files as you want. Just put all of them in a folder called:
_data/
To retrieve the "data" in your layouts, it's as easy as:
{{ site.data.filename1.some_variable }} {{ site.data.filename2.another_variable }}
Check them out here for more information.
Conclusion
Jekyll is pretty cool and easy to use - even if you don't like or know Ruby. Static CMS's definitely have their obvious benefits for users over database driven CMS's.
Make sure you checkout the demo and its code! It covers everything and more.
Here's some additional resources on Jekyll to wrap-up the tutorial: | https://scotch.io/tutorials/getting-started-with-jekyll-plus-a-free-bootstrap-3-starter-theme | CC-MAIN-2018-22 | refinedweb | 2,864 | 63.8 |
This module implements nice syntactic sugar based on Nim's macro system.
Macros
macro `->`(p, b: untyped): untyped
- Syntax sugar for procedure types. It also supports pragmas.Warning: Semicolons can not be used to separate procedure arguments.
Example:
proc passTwoAndTwo(f: (int, int) -> int): int = f(2, 2) # is the same as: # proc passTwoAndTwo(f: proc (x, y: int): int): int = f(2, 2) assert passTwoAndTwo((x, y) => x + y) == 4 proc passOne(f: (int {.noSideEffect.} -> int)): int = f(1) # is the same as: # proc passOne(f: proc (x: int): int {.noSideEffect.}): int = f(1) assert passOne(x {.noSideEffect.} => x + 1) == 2Source Edit
macro `=>`(p, b: untyped): untyped
- Syntax sugar for anonymous procedures. It also supports pragmas.Warning: Semicolons can not be used to separate procedure arguments.
Example:
proc passTwoAndTwo(f: (int, int) -> int): int = f(2, 2) assert passTwoAndTwo((x, y) => x + y) == 4 type Bot = object call: (string {.noSideEffect.} -> string) var myBot = Bot() myBot.call = (name: string) {.noSideEffect.} => "Hello " & name & ", I'm a bot." assert myBot.call("John") == "Hello John, I'm a bot." let f = () => (discard) # simplest proc that returns void f()Source Edit
macro capture(locals: varargs[typed]; body: untyped): untyped
- Useful when creating a closure in a loop to capture some local loop variables by their current iteration values.
Example:
import std/strformat var myClosure: () -> string for i in 5..7: for j in 7..9: if i * j == 42: capture i, j: myClosure = () => fmt"{i} * {j} = 42" assert myClosure() == "6 * 7 = 42"Source Edit
macro collect(body: untyped): untyped
- Same as collect but without an init parameter.
Example:
import std/[sets, tables] let data = @["bird", "word"] # seq: let k = collect: for i, d in data.pairs: if i mod 2 == 0: d assert k == @["bird"] ## HashSet: let n = collect: for d in data.items: {d} assert n == data.toHashSet ## Table: let m = collect: for i, d in data.pairs: {i: d} assert m == {0: "bird", 1: "word"}.toTable # avoid `collect` when `sequtils.toSeq` suffices: assert collect(for i in 1..3: i*i) == @[1, 4, 9] # ok in this case assert collect(for i in 1..3: i) == @[1, 2, 3] # overkill in this case from std/sequtils import toSeq assert toSeq(1..3) == @[1, 2, 3] # simplerSource Edit
macro collect(init, body: untyped): untyped
Comprehension for seqs/sets/tables.
The last expression of body has special syntax that specifies the collection's add operation. Use {e} for set's incl, {k: v} for table's []= and e for seq's add.
Example:
import std/[sets, tables] let data = @["bird", "word"] ## seq: let k = collect(newSeq): for i, d in data.pairs: if i mod 2 == 0: d assert k == @["bird"] ## seq with initialSize: let x = collect(newSeqOfCap(4)): for i, d in data.pairs: if i mod 2 == 0: d assert x == @["bird"] ## HashSet: let y = collect(initHashSet()): for d in data.items: {d} assert y == data.toHashSet ## Table: let z = collect(initTable(2)): for i, d in data.pairs: {i: d} assert z == {0: "bird", 1: "word"}.toTableSource Edit
macro dump(x: untyped): untyped
Dumps the content of an expression, useful for debugging. It accepts any expression and prints a textual representation of the tree representing the expression - as it would appear in source code - together with the value of the expression.
See also: dumpToString which is more convenient and useful since it expands intermediate templates/macros, returns a string instead of calling echo, and works with statements and expressions.
Example: cmd: -r:off
let x = 10 y = 20 dump(x + y) # prints: `x + y = 30`Source Edit
macro dumpToString(x: untyped): string
- Returns the content of a statement or expression x after semantic analysis, useful for debugging.
Example:
const a = 1 let x = 10 assert dumpToString(a + 2) == "a + 2: 3 = 3" assert dumpToString(a + x) == "a + x: 1 + x = 11" template square(x): untyped = x * x assert dumpToString(square(x)) == "square(x): x * x = 100" assert not compiles dumpToString(1 + nonexistent) import std/strutils assert "failedAssertImpl" in dumpToString(assert true) # example with a statementSource Edit
macro dup[T](arg: T; calls: varargs[untyped]): T
Turns an in-place algorithm into one that works on a copy and returns this copy, without modifying its input.
This macro also allows for (otherwise in-place) function chaining.
Since: Version 1.2.
Example:
import std/algorithm let a = @[1, 2, 3, 4, 5, 6, 7, 8, 9] assert a.dup(sort) == sorted(a) # Chaining: var aCopy = a aCopy.insert(10) assert a.dup(insert(10), sort) == sorted(aCopy) let s1 = "abc" let s2 = "xyz" assert s1 & s2 == s1.dup(&= s2) # An underscore (_) can be used to denote the place of the argument you're passing: assert "".dup(addQuoted(_, "foo")) == "\"foo\"" # but `_` is optional here since the substitution is in 1st position: assert "".dup(addQuoted("foo")) == "\"foo\"" proc makePalindrome(s: var string) = for i in countdown(s.len-2, 0): s.add(s[i]) let c = "xyz" # chaining: let d = dup c: makePalindrome # xyzyx sort(_, SortOrder.Descending) # zyyxx makePalindrome # zyyxxxyyz assert d == "zyyxxxyyz"Source Edit | http://nim-lang.github.io/Nim/sugar.html | CC-MAIN-2021-39 | refinedweb | 844 | 66.13 |
The Tinusaur is a small board that has a DIP-8 socket for an Atmel ATtiny85 (or ATtiny25/45, even 13) microcontroller with the minimum required parts to work which is pretty much 2 capacitors for the power source and one 10K pull-up resistor to the RESET signal of the microcontroller.
On both sides of the MCU socket there are 2 double row female headers that could be used to connect external components like LEDs, buttons, sensors, etc. The inner rows are the signals from the MCU while the outer rows are the GND. Those headers could be used (in same cases) like a breadboard.
There is also 10-pin male header to connect the external ISP programmer such as USBasp.
On the bottom of the board there is a battery holder for CR2032 - a button cell lithium battery rated at 3.0 volts.
A pair of 2-pin male headers are available to connect an external power source and (with additional jumper) to disconnect the coin cell battery.
At the corners there are 4 holes - 3 mm each, that could be used for mounting the board.
The goal of The Tinusaur Project is to give you everything you need to start your first microcontroller project - very simple and very easy to do - from assembling the board, through setting up the development environment, to writing a very simple program - like the one making a LED to blink.
Everything in that project in open source - the designs, the software, etc.
All the necessary parts could be purchased at the popular online stores, but may also be available at some local hobbyists stores.
The only specific part is the PCB - you can make one yourself or order it online from various places. The latest version is available at 123d.circuits.io -.
All the parts are also available as a kit at the Tinusaur online store:.
The Tinusaur is an educational, non-commercial project and it's been used in several high schools and universities around the world as part of some courses. The reason those kits are sold online is convenience for those who what to play with it but don't have access to the components and PCB manufacturing.
Teacher Notes
Teachers! Did you use this instructable in your classroom?
Add a Teacher Note to share how you incorporated it into your lesson.
Step 1: Parts - What's in the Package
Here is what's in the package - 15 different components ...
- PCB - Printed circuit board
- MCU - Atmel AVR ATtiny85 microcontroller
- Socket, DIP-8 socket for MCU
- H1, Header 2×4, Female
- H2, Header 2×5, Female
- ISP, Header 2×5, Male, for ISP
- RESET, Button - Tactile push button, for RESET
- Power Header 1×2, Male, for external power
- Battery Header 1×2, Male, for battery power on/off
- Battery Jumper, 2-pin, for battery power on/off
- C1, Capacitor 100nF, Small
- C2, Capacitor 100uF, Low profile 5×5 mm
- R1, Resistor 10K, Small, 1/8W
- Battery holder for CR2032
- Battery 3V, CR2032
The only specific for the project component is the PCB but it could be ordered (besides the Tinusaur own website) from various places:
- OSH Park -
- 123d.circuits.io -
All the other parts are available on Internet. eBay is a good place for cheap ones but there are more reputable sources as well.
Step 2: Assembling the Board
Assembling the board is very easy. At one of the workshops organized by the Tinusaur team there were people of age 16 to 44 who did that without any significant issues.
On the PCB board there are markings where and how to put the components.
There are only 2 components that should be soldered in a specific direction:
- The socked for the microcontroller - it has an identifying notch that marks the top, the PCB has a mark for that notch as well.
- The 100uF capacitor - it has (+) and (-) signs - there are the same mark on the PCB too..
Here is the recommended order for soldering the parts:
- MCU socket. Note: do not insert the chip yet.
- Capacitors C1, C2 and resistor R1.
- Headers H1, H2.
- External power header – red.
- Battery on/off header – yellow.
- ISP header.
- Battery holder.
- RESET button.
Please note that the battery holder should be soldered before the button.
Some more details and recommendations for assembling the board could be found on this page:.
Step 3: Setting Up the Development Environment
Even for not very experienced computer specialist should be easy to setup the development environment.
But before that we need to install a driver for the ISP programmer that will be used to upload to the microcontroller the programs we create. The programmer that is used here is called USBasp and its home page is at. The driver for MS Windows is available at the same website.
The USBasp programmer itself could be purchased from various places - eBay, etc. It is also part of the Tinusaur Starter kit that is available at the Tinusaur website.
Now - the development environment. It is called WinAVR. Its website is available at this address:.
Basically, it is ...
- the SDK - these are the C/C++ compiler and the libraries;
- some other tools that help building the program, converting it to a binary format and uploading it to the microcontroller.
There is no built-in text editor - any such could be used. Norepad++ is a good choice.
The building is done at the command using the make command and file called Makefile.
More detailed instructions how to setup the WinAVR are available at the Tinusaur guides page:. There you can also find some tests that will help you find out if the setup was correct.
Step 4: Writing the "Hello World!"
The "Hello World!" of the microcontrollers is a program that makes a LED to blink.
For this we also need a LED connected to the GND and to the Vcc through a 300 ohm (or 270 ohm) resistor.
It is a very simple program.
#include <avr/io.h>
#include <util/delay.h>
#define LED1_PORT PB0 int main(void) { DDRB |= (1 << LED1_PORT); while (1) { PORTB |= (1 << LED1_PORT); _delay_ms(200); PORTB &= ~(1 << LED1_PORT); _delay_ms(400); } return (0); }
All the necessary file are available at bitbucket.org / tinusaur / tutorials / tut001_blinking_led_x1 including the Makfile.
Compile/build the program with this command:
make
Check the output in the console window to is if there aren't any errors during the compilation.
Then, upload it to the microcontroller with this command:
avrdude -c usbasp -p t85 -B 0.5 -U flash:w:"main.hex":a
Make sure that the Tinusaur board is properly connected to the USBasp programmer.
More detailed tutorial about the blinking LED program is available at
Step 5: What's Next ...
What else could you do with this Tinusaur board?
- You can connect more LEDs, buttons, temperature sensors, relays and many other things.
- You can even make small shield-like boards to stack on top of the Tinusaur board.
There are some project on the Tinusaur website that you can try but it isn't that hard to create your own.
Got bored?
If you've learned everything there is to learn about ATtiny85 it is probably the time to move to something more advanced like Arduino. The Arduino Uno board is a good start.
If you feel you're limited by the capabilities of the ATtiny85 and the Tinusaur you should probably consider using Arduino. The Arduino Nano board is very good for choice for small sized projects.
Please, don't forget that the Tinusaur is only an educational project.
References
ATtiny85 product page:
The Tinusaur Project:
The Tinusaur Online Store:
How to setup WinAVR:
Blinking LED program:
Arduino oficial website:
5 Discussions
3 years ago
this is very helpfull.
thanks a lot
4 years ago on Introduction
this is just amazing sir !!
4 years ago on Introduction
this is just amazing sir !!
4 years ago on Introduction
You're welcome,
This is my first instructable and it's far from perfect so if you think that there's something missing or wrong, or if you'd like to see something specific - let me know.
4 years ago on Introduction
Great tutorial, thank you! | https://www.instructables.com/id/Getting-Started-with-the-Tinusaur-Board/ | CC-MAIN-2019-43 | refinedweb | 1,360 | 71.65 |
User:Andy Crowd
My achievements
My packages in AUR: list them.
My sandbox pages: a list of other sub/fun pages.
Fast Notes & current problems that need to be solved
- Current links as tmp-memo I need to use
- Download Chinese fonts
A="$(curl | grep -i ttf | sed 's/[^*]*href=\"//g' | cut -d\" -f1)"
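The same pipeline can be tried on a small inline HTML sample (the sample below is illustrative only; the real curl URL above is elided):

```shell
# Illustrative only: run the same href-extraction pipeline on an inline
# HTML sample instead of a downloaded page.
html='<a href="A.ttf">A</a> <a href="B.TTF">B</a> <a href="x.zip">x</a>'
printf '%s\n' "$html" | tr ' ' '\n' \
    | grep -i ttf | sed 's/[^*]*href="//g' | cut -d'"' -f1
# prints A.ttf and B.TTF, one per line
```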
- Interesting link about mimetypes-database.
- Show a list of unique category names. Could be simplified and added as an option --lc in lsdesktopf to search only in /usr/share/applications/ and $HOME/.local/share/applications/.
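A quick standalone sketch of the idea (the helper name is made up; lsdesktopf's real option may behave differently):

```shell
# Hypothetical helper sketching the --lc idea: list the unique category
# names found in Categories= lines of .desktop files under one directory.
list_categories() {
    grep -hs '^Categories=' "$1"/*.desktop \
        | cut -d= -f2- | tr ';' '\n' | sort -u | sed '/^$/d'
}
# e.g.: list_categories /usr/share/applications
```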
- Reviews and tests of Linux versions; one more link to the same.
- Now available: a new updated version in AUR4, lsdskAUR. Link to forum.
- I'm an idiot moments - nice and cool
- Compare VOIP
- Future project: Live CD + auto-configuration of pulseaudio + detect user-names in Windows
C:\users\ folder and create them in Linux by using C:\users\user_name as a home path.
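A first rough sketch of the detection half of that idea (the exclusion list is an assumption; the real mount point would be something like /mnt/windows/Users):

```shell
# Assumption-heavy sketch: list candidate user names from a mounted
# Windows Users directory, skipping the built-in accounts.
list_win_users() {
    for d in "$1"/*/; do
        name=$(basename "$d")
        case "$name" in
            Public|Default|'Default User'|'All Users') continue ;;
        esac
        printf '%s\n' "$name"
    done
}
# e.g.: list_win_users /mnt/windows/Users
# each printed name could then become: useradd -m -d "/home/$name" "$name"
```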
- Just some links from Talk:Securely_wipe_disk
Fix for MPEG: No audio stream found -> no sound.
- A4 = 210x297 mm; command line for SANE
scanimage -x 120 -y 210 --resolution 300 --mode Color --format=tiff >image2.tiff
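Sanity check for the geometry: the expected pixel size follows from mm/25.4*dpi (shown for full A4; the -x 120 area above would be proportionally narrower):

```shell
# Expected pixel dimensions for a full A4 scan at 300 dpi (1 in = 25.4 mm).
awk 'BEGIN { dpi=300; printf "%.0fx%.0f\n", 210/25.4*dpi, 297/25.4*dpi }'
# prints 2480x3508
```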
- Sakis3g and QT bug fix:
export QT_X11_NO_MITSHM=1 before X start
URL extraction, new version: 2gisAUR
$ curl "" | awk -Fzip '{if(match($0,"2GISShell") != 0){AA=substr($2,index($2,"http"));if(match(AA,"http") != 0)print AA"zip"}}'
Convert media scripts
My scripts for conversion of media files
m4a to ogg
m4a_to_ogg.sh
#!/bin/bash for I in "$HOME"/Media/Music/*.m4a;do PP=${I##*/}; if [ -d "$HOME/Media/Music/OGG" ]; then #### ffmpeg -i "$I" -acodec vorbis -strict -2 -ac 2 "$HOME/Media/Music/OGG/${PP/.m4a/.ogg}" ; #### IR=$(du "$HOME/Media/Music/OGG/${PP/.m4a/.ogg}" | awk '{print $1}') if [ "$IR" != "0" ]; then if [ -d "$HOME/Media/Music/Converted" ]; then mv -vi "$I" "$HOME/Media/Music/Converted"; else echo The '"$HOME/Media/Music/Converted"' folder is missing break fi else echo Something gone wrong size of converted "$PP" file is 0 break; fi; else echo Path doesn"'"t exist: '"$HOME/Media/Music/OGG/"' break fi done
Auto-gen configuration files
Perfect to use them on a Live CD
Conky
All moved to github
HDDTemp
Moved to github
TMP for before deletion
. ************************************************ ****.
Restore original file names by using backup file with checksums and comparing with List only unique files by checksum generated file.
awk -F"|" -v W="$(cat compmd5_new.tmp)" '//{split(W,Z," "); for(i in Z)if(index(i/2,".") != 0){if(Z[i] == $4){F=Z[i+1];gsub(/[^\/*]*\//,"",F); print $1"|"$2"|"$3"|"F};}}' compmd5_1.tmp
This is the same as above but will also handle filenames with spaces correctly
awk -F"|" -v W="$(cat compmd5_new.tmp|awk '{print "|"$1"|"substr($0,index($0," "))}')" '//{split(W,Z,"|");for(i in Z)if(index(i/2,".") == 0){if(Z[i] == $4){F=Z[i+1];gsub(/[^\/*]*\//,"",F);print $1"|"$2"|"$3"|"F};}}' checksums.list | grep -v ^$
Link to forum where I was looking for help: linuxforum
Populate array by image extensions, may be not work correctly if some part of extension exist in the list, e.g.
h extension will be found in
html.
RR="jpg gif"; QQ=($(awk -F'|' -v KK="$RR" '{SS=$3;gsub(/[^*\.]*\./,"",SS);if ( index(KK,SS) != 0 ) print $3}' checksums.list)) for (( i=0;i <= ${#QQ[@]};i++ ));do if [ ! -z ${QQ[i]} ];then echo ${QQ[i]};fi;done
This will clean up special symbols, sort restored names, add a number to the duplicate names.
#!/bin/bash} }' else echo 'Path to file is missing!' fi
I will probably rewrite my post_rec_scripts to use checksum file, already sorted file names.
With grep and awk commands populate the array
This way of populating an array is many times faster as with a while command but has limitations that might cause errors. A common way of populating an array as in this example causes problems due using space between words as a separator and a file names that contains them will not be restored and errors will be shown. A $SearchFor variable is more intuitive to edit then if all patters are in the same line with grep.
SearchFor="-e compressed -e archive"; ArrayOfFiles=($(grep -i $SearchFor info-mime-size-db.txt | awk -F'|' '{print $1 }'));
Without grep you have to use
if inside of gawk command and add patterns. Suitable if file with data is a really a very big and you can chose in which part of string you want search compared to grep that uses a whole string.
ArrayOfFiles=($( gawk -F '|' '{if ($3 ~ "image/jpeg" || $3 ~ "image/gif" || $3 ~ "image/png") print $1 }' info-mime-size-db.txt))
You can find out which of recovered files contains spaces in their names and save information about them in a file for future use.
$ find . -type f -name "* *" >> filenames-with-spaces.txt $ gawk -F'|' '{if ($1 ~ " ") print $1 }' info-mime-size-db.txt >> filenames-with-spaces.txt
Calculate duplicate files with awk
Sort out duplicates, tests with any check summer
md5sum f*.pdf | awk '// {Count[$1]++;CNames[I++]=$1}END{ for (i in Count) {if( Count[i] > 1 )print Count[i]" "CNames[A++];}}'
Note about misc
github-wiki - will be home of my python3 scripts.
Original name only from photorec recovered path
A lot of cuts but no need to use an external program/utility and can be used with loops(while/for):
AA='./recup_dir.1/f864563104_wmcloc_kmon-0.1.0.tar.gz'; ZZ=${AA/*\//}; BB="${ZZ/_*/}_"; echo ${ZZ/$BB/}
Cuts away generated names by photorec from original, cannot be used with external loops:
$ awk -F'|' '{AA=$1;sub(/^.*\//,"",AA);if ( AA ~ "_") {BB=index(AA,"_")+1; print substr(AA,BB )} }' info-mime-size-db.txt
All in one
Used folder and files auto create test ground section for
aa.txt in the example below.
By using IFS bash script special standard variable to change separator it is possible fill in an array with strings that contains spaces. Works perfect, will create test pattern
./recup_dir.1010/f872681448_wmavgload-0.6.1.tar.gz | OName= wmavgload-0.6.1.tar.gz | ./recup_dir.1010/f872688972.txt| FName= f872688972.txt |
Full path with filename|Destination name, cut to orig if exist in it|
In array it will use step by two with
| as a separator, e.g.
IFS="|" C="0"; A=ArrayItem[C] B=ArrayItem[C+1] C=$((C+2))
#!/bin/bash awk -F'|' '//{AA=$1; sub(/^.*\//,"",AA); BB=AA; if ( AA ~ "_") { GUline=index(AA,"_")+1; OName=substr(AA,GUline ); print $1 " | OName= " OName " |" } else {SIName=index(AA," "); if (SIName) { SWName=AA; print $1 "| SWName= " SWName " |" } print $1 "| FName= " BB " |" }; }' info-mime-size-db.txt
- Can cut path to show only base filename.
- Can cut generated name by photorec from original.
- If generated filename doesn't contain original as part of it then output generated into array.
- Can fill in array with strings that contain spaces by setting up and use a new separator with help of IFS special bash variable.
See also: internal bash variables.
Base name only
Example base name only:
$ awk -F'|' '{print $1}' info-mime-size-db.txt | sed 's/[^*/]*\///g'
With
printf will show errors like not enough arguments to satisfy format string if variable contains some of symbols that it uses as expressions e.g
%:
$ awk -F'|' '{AA=$1;sub(/^.*\//,"",AA);printf AA "\n"}' info-mime-size-db.txt
With
$ awk -F'|' '{AA=$1;sub(/^.*\//,"",AA);print AA }' info-mime-size-db.txt
See also: awk manual
Two more alternatives to fill in an array with data without using of grep:
gawk -F '|' '{if ($1 ~ "bmp" || $1 ~ "zip") print $1 }' info-mime-size-db.txt gawk -F '|' '{AA=index($1,"png");if (AA) print $1 }' info-mime-size-db.txt
Simple walk through folders
This will copy files from one destination to another, based on the name or the file extension, it doesn't use or do checks for any other information about files as e.g. a mime-type or a pre-made file with the descriptions. You can modify the script depends on what kind of files you will need.
This script is slow because is must go through each folder and search for files.
You can download the example script «search-folder-by-folder.sh» from the SourceForge.
Config Alsa Note
- ALSA advanced settings linuxSoundALSA
Make install - preparation
automoc4 (req. for kde lang compile) libtoolize --force aclocal autoheader automake --force-missing --add-missing autoconf ./configure make # make install
One more make example for installation
export LIBS=-lXext ./configure --prefix=/usr --x-libraries=/usr/lib # make make prefix="$pkgdir/usr" "libexecdir=$pkgdir/usr/bin" install
# make all-recursive
See also
xmkmf imake cmake
List installed from custom or official repositories
RepoName="custom"
List all installed that are not in custom repository
$ pacman -Ss | grep -i 'installed' | grep '/' | grep -v -e ^"$RepoName" -e ^' ' | awk -F'[// ]' '{print $2}'
List all that are in custom repository
$ pacman -Ss | grep -i 'installed' | grep '/' | grep "$RepoName" | grep -v ^' ' | awk -F'[/\/ ]' '{print $2}' repair purposes, but some of them can have place for the addition storage that can be connected to them such as Secure Digital SD cards where can be stored only initial "factory" ISO and optionally also the internal storage device back up image.
Virtual Box
The information about path to harddisks and the snapshots is stored between
<HardDisks> .... </HardDisks> tags in the file with the .vbox extension. You can edit them manually or use this script where you will need only change the path or use defaults, assumed that .vbox is in the same directory. It will print out new configuration to stdout.
#!/bin/bash NewPath="${PWD}/" Snapshots="${NewPath}/" Filename="$1" awk -v SetPath="$NewPath" -v SnapPath="$Snapshots" '{if(index($0,"vdi") != 0){A=$3;split(A,B,"="); L=B[2]; gsub(/\"/,"",L); sub(/^.*\//,"",L); sub(/^.*\\/,"",L); if(index($3,"{") != 0){SnapS=SnapPath}else{SnapS=""}; print $1" "$2" location=\""SetPath SnapS L"\" "$4" "$5} else print $0}' $Filename
Thanks
To Trible for showing some of advanced awk functionality, find duplicate x/y in a text file.
awk -F '|' '// { Count[$3 "|" $5]++; } END { for (i in Count) { printf "%s|%s\n", i, Count[i]; }}' /path/to/file
To gregm for help about how to count a duplicate strings in an array due population of it in a python script.
To dugan about how to search integer duplicates in an array and information about using of a default fill in array without actually predefining it with a data.
from collections import defaultdict DD = defaultdict(int)
To Alad for info about a bash spell check. I really needed it.
To 蔡依林 for the unbelievable Great voice and always the best performance ever!
Crash > test > Ouch > solution > if empty > Wiki | https://wiki.archlinux.org/title/User:Andy_Crowd | CC-MAIN-2021-43 | refinedweb | 1,775 | 64.51 |
The application is written by kivy.
I want to test a function via pytest, but in order to test that function, I need to initalize the object first, but the object needs something from the UI when initalizing, but I am at testing phase, so don't know how to retrieve something from the UI.
This is the class which has an error and has been handled
class SaltConfig(GridLayout):
def check_phone_number_on_first_contact(self, button):
s = self.instanciate_ServerMsg(tt)
try:
s.send()
except HTTPError as err:
print("[HTTPError] : " + str(err.code))
return
# some code when running without error
def instanciate_ServerMsg():
return ServerMsg()
class ServerMsg(OrderedDict):
def send(self,answerCallback=None):
#send something to server via urllib.urlopen
class TestSaltConfig:
def test_check_phone_number_on_first_contact(self):
myError = HTTPError(url="", code=500,
msg="HTTP Error Occurs", hdrs="donotknow", fp=None)
mockServerMsg = mock.Mock(spec=ServerMsg)
mockServerMsg.send.side_effect = myError
sc = SaltConfig(ds_config_file_missing.data_store)
def mockreturn():
return mockServerMsg
monkeypatch.setattr(sc, 'instanciate_ServerMsg', mockreturn)
sc.check_phone_number_on_first_contact()
I made an article about testing Kivy apps together with a simple runner - KivyUnitTest. It works with
unittest, not with
pytest, but it shouldn't be hard to rewrite it, so that it fits your needs. In the article I explain how to "penetrate" the main loop of UI and this way you can happily go and do with button this:
button = <button you found in widget tree> button.dispatch('on_release')
and many more. Basically you can do anything with such a test and you don't need to test each function independently. I mean... it's a good practice, but sometimes (mainly when testing UI), you can't just rip the thing out and put it into a nice 50-line test.
This way you can do exactly the same thing as a casual user would do when using your app and therefore you can even catch issues you'd have trouble with when testing the casual way e.g. some weird/unexpected user behavior.
Here's the skeleton:
import unittest import os import sys import time import os.path as op from functools import partial from kivy.clock import Clock # when you have a test in <root>/tests/test.py main_path = op.dirname(op.dirname(op.abspath(__file__))) sys.path.append(main_path) from main import My class Test(unittest.TestCase): def pause(*args): time.sleep(0.000001) # main test function def run_test(self, app, *args): Clock.schedule_interval(self.pause, 0.000001) # Do something # Comment out if you are editing the test, it'll leave the # Window opened. app.stop() def test_example(self): app = My() p = partial(self.run_test, app) Clock.schedule_once(p, 0.000001) app.run() if __name__ == '__main__': unittest.main()
However, as Tomas said, you should separate UI and logic when possible, or better said, when it's an efficient thing to do. You don't want to mock your whole big application just to test a single function that requires communication with UI. | https://codedump.io/share/Uo2TZMr9BSJw/1/how-to-interact-with-the-ui-when-testing-an-application-written-by-kivy | CC-MAIN-2017-13 | refinedweb | 485 | 58.18 |
Re: Flashing a sprite
- From: Maxamor <Maxamor@xxxxxxxxxxxxxxxxxxxxxxxxx>
- Date: Wed, 26 Apr 2006 15:26:01 -0700
You could do something like this:
public class Animation
{
private float timeSinceLastFrame = 0f;
... cut to the chase ...
public void Update(float timeElapsed)
{
timeSinceLastFrame += timeElapsed;
if(timeSinceLastFrame >= frames[frameIndex].timeToDisplay)
{
timeSinceLastFrame = 0;
frameIndex++;
// Optionally, you can loop back to the beginning by checking if
you've reached the end of the animation, in which case frameIndex would go
back to Zero.
}
}
}
public class Frame
{
private float timeToDisplay = 0;
}
Somethingl ike this would work for small animations.
"Daniel" wrote:
Hey ZMan.
can you explain why
if (gametime % 2.0 == 0)
that would work? isn't that just dividing by 2 and if the answer is equal to
show time? I guess the 2 would be my amount of seconds....but after gametime
goes over 2 seconds it would stop?
Really confused by that example sorry.....appreciate the help tho
"ZMan" <zman@xxxxxxxxxxxxxx> wrote in message
news:uO8LWhVaGHA.4248@xxxxxxxxxxxxxxxxxxxxxxx
Maxamor is correct. For both #1 and #2 you should not base anything on
frametime (unless you can atarget specific hardware and lock the
framerate).
Use the stopwatch class in system.diagnostics to give you a time in
seconds
Then do something like
if (gametime % 2.0 == 0)
show sprite
--
Zman - News and information for Managed DirectX
"Maxamor" <Maxamor@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message
news:2BF85B17-1FA8-4D6E-B1E6-A8D5427D7F9A@xxxxxxxxxxxxxxxx
As far as animation goes, I recommend not making it based off of "x
amount of
frames display frame 0, x amount of frames display frame 1" etc...
The reason behind this is that each frame may not be the same amount of
time
depending on which computer your program is running on. This means the
animation may look great on your pc, but if it runs faster on someone
else's
the animation would go faster too, possibly too fast.
Make the animation time-based.
That way, each frame can have an amount of time that it displays itself
before it lets the next frame be drawn.
Then, you could just make a 2-frame animation for your flashing sprite.
One
is completely invisible, and the other is completely visible and have
them
both display for 500ms or something. Then all you have to do is loop the
animation.
"Daniel" wrote:
Hi
Any one now a good tutorial or can help me onhow to make a flashing
sprite.
I want to make a flashing arrow.
I have some ideas:
1) lower and increase the alpha levels per frame
2) have it non rendered and then rendered after x many frames..
My problem is for option 2...and 1 for that matter, how do i get it to
flash
at a pace i want?
Is this kind of solution mental? To make a class for the flashing sprite
that i call say SpriteFlasher.
Then in that class have a var for how many frames to allow to pass until
showing the sprite, and another var for howmany frames it should remain
visible for.
Then i could create an instance and pass in the frames passed on every
loop?
- References:
- Flashing a sprite
- From: Daniel
- RE: Flashing a sprite
- From: Maxamor
- Re: Flashing a sprite
- From: ZMan
- Re: Flashing a sprite
- From: Daniel
- Prev by Date: Re: Getting images from capture card
- Next by Date: Re: Flashing a sprite
- Previous by thread: Re: Flashing a sprite
- Next by thread: Re: Flashing a sprite
- Index(es): | http://www.tech-archive.net/Archive/Development/microsoft.public.win32.programmer.directx.managed/2006-04/msg00078.html | crawl-002 | refinedweb | 578 | 72.46 |
Adding new key Calculated Key Figure (CKF) in PCS report
In this example, I will create a new Calculated Key Figure Staffed %. query. Queries copied to customer namespace can be changed in the same way.
The formula for Staffed % will be as follows:
Staffed % = (Staffed Quantity/Planned Quantity) * 100
This should only be calculated for activity resource type. To achieve this, we firstly need to create two RKFs using Staffed Quantity and Planned Quantity key figures restricted to resource type 0ACT. This can be done as shown in blog on creating RKFs. Create two RKFs with the name “Staffed Quantity – Activity” and “Planned Quantity – Activity”. After creating the RKFs, to create the CKF proceed as follows:
BEx Query Designer –> Open query /CPD/AVR_MP_Q008 –> Navigate to Rows/Columns –> Right Click Structure “Key Figures” –> Click “New Formula”
Edit the formula –> Change the CKF description to “Staffed %” –> Enter the formula for Staffed % in Detail View –> Click OK –> Save the query
PCS report now shows the new RKFs and CKF and displays the staffed % only for activity resource type.
| https://blogs.sap.com/2018/06/03/adding-new-key-calculated-key-figure-ckf-in-pcs-report/ | CC-MAIN-2021-49 | refinedweb | 175 | 54.97 |
Today we are releasing a new milestone: Kotlin M11, which brings such long-awaited features as secondary constructors, a first glimpse of true reflection support for Kotlin and much more.
Language Changes
M11 brings quite a few language changes, some of which are breaking changes and/or deprecation of old ways of doing things in favor of new ones. Some of your code may break, but we’ve done our best to make your transition path as smooth as possible.
Multiple Constructors
This feature has been most awaited by Android developers, because subclassing standard view classes on Android requires having more than one constructor. Now you can do that:
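The original embedded sample is not preserved here, so as a sketch (the Person class and its properties are illustrative, not from the post), a secondary constructor delegates to the primary one with `this(...)`:

```kotlin
class Person(val name: String, val age: Int) {
    // A secondary constructor must delegate to the primary constructor
    constructor(name: String) : this(name, age = 0)
}
```

On Android, this is what makes subclassing standard view classes possible, since each of their constructors can now be declared explicitly.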
Please refer to the user docs and the spec document for more details.
Prefixes For Initializer Blocks
Another change, also related to constructors, is prefixing initializer blocks with the soft-keyword init.
The main reason for this change is that the formerly used syntax (where just curly braces in a class body denoted an initializer block) didn’t work out too well when an initializer followed a property declaration (which is pretty common):
An error was reported on the call of baz(), because the initializer looks exactly like a trailing lambda passed to it. The only workaround was to put a semicolon after the property initializer, which looks rather unnatural in Kotlin. So, since M11, we require init before the initializer block:
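A minimal sketch of the new syntax (identifiers are illustrative):

```kotlin
class Foo {
    val bar = baz()   // property initializer; in the old syntax a bare
                      // `{ ... }` block right after it parsed as a trailing lambda
    init {            // the `init` keyword removes the ambiguity
        check(bar == 42)
    }
    private fun baz() = 42
}
```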
The old syntax is deprecated, i.e. you’ll get a warning, not an error. Also, the IDE provides an Alt+Enter quick-fix action to convert the old syntax to the new one, which has an option to bulk update the whole project.
See user docs for more details.
Companion Objects (Class-Objects Rethought)
As you all probably know, Kotlin classes do not have static members. Instead there may be a special singleton object associated with a class, which we used to call "class object" ‐ a rather unfortunate term. So, we somewhat redesigned the concept, and, with your help, chose another name for it: companion object.
The unfortunate wording was not the only reason for this change. In fact, we redesigned the concept so that it is more uniform with normal objects.
Note that a class can (and always could) have many objects (usual, named singletons) nested into it:
Since M11, one of these objects may be declared with the companion modifier, which means that its members can be accessed directly through the class name:

Accessing members of Obj1 requires qualification: KotlinClass.Obj1.foo(). For members of Obj2 the object name is optional: KotlinClass.foo().

One last step: the name of a companion object can be omitted (the compiler will use the default name Companion in this case):

Now you can still refer to its members through the name of the containing class: KotlinClass.foo(), or through full qualification: KotlinClass.Companion.foo().
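The rules above can be reconstructed in a short sketch (the function bodies and return values are illustrative):

```kotlin
class KotlinClass {
    object Obj1 {
        fun foo() = "Obj1"          // plain nested object: needs qualification
    }
    companion object Obj2 {
        fun foo() = "Obj2"          // companion: reachable through the class name
    }
}

class Other {
    companion object {              // no name given: the compiler calls it `Companion`
        fun bar() = "Companion"
    }
}
```

So `KotlinClass.Obj1.foo()` must be fully qualified, while `KotlinClass.foo()` resolves to the companion member, and `Other.bar()` and `Other.Companion.bar()` are equivalent.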
As you can see, unlike what we used to have with class objects, companion objects are completely uniform with normal objects.
Another important benefit is that now every object has a name (again, Companion is used when the name of a companion object is omitted), which enables writing extension functions for companion objects:
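For example (the Widget class and its describe function are invented for illustration), an extension can target the companion directly through its default name:

```kotlin
class Widget {
    companion object  // unnamed, so its default name is `Companion`
}

// An extension function on the companion object itself
fun Widget.Companion.describe(): String = "Widget factory"
```

It is then callable as `Widget.describe()`, which gives classes something close to extensible "static" members.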
Function Expressions
Kotlin has higher-order functions, which means that you can pass a function around as a value. Before M11, there were two ways of obtaining such values: lambda expressions (e.g. { x -> x + 1 }) and callable references (e.g. MyClass::myFun). M11 introduces a new one, which is very logical, if you think about it:
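A sketch of the new form, an anonymous function used as a value:

```kotlin
// A function expression with a block body and an explicit return type
val inc = fun(x: Int): Int { return x + 1 }

// The expression-body form works too
val square = fun(x: Int) = x * x
```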
So, you can use a function, in its traditional syntactic form, as a value. See user docs and the spec document for more details.
Lambda Syntax Restricted (for future enrichment)
Among other things, function expressions enable us to make a step toward supporting multi-declarations in parameters of lambdas. The final goal (not implemented yet) is to be able to, say, filter a list of pairs with the syntax like this:
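This goal has since been realized in later Kotlin versions as destructuring declarations in lambdas; a sketch of the envisioned syntax (data values are illustrative):

```kotlin
// Filter a list of pairs, binding each pair's components to a and b
val pairs = listOf(1 to 2, 3 to 1, 5 to 8)
val increasing = pairs.filter { (a, b) -> a < b }
```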
Here, (a, b) is a multi-declaration, i.e. a gets the first component of each Pair object, and b gets the second one. Currently, multi-declarations are not supported, but we deprecated some of the syntactic forms of lambdas to drop them in M12 and make the multi-declaration syntax possible.
What is deprecated:
- specifying return types of lambdas, e.g. { (a: Int): Int -> a + 1 }
- specifying receiver types of lambdas: { Int.(a: Int) -> this + a }
- using parentheses around parameter names of lambdas: { (a, b) -> a + b }
Whenever you really need one of these, please switch to using function expressions instead.
The IDE provides a quick-fix that migrates your code automatically.
Labeled Returns in Lambdas
For a long time there was a restriction on using return expressions in lambdas: a local return was only allowed if the lambda had an explicit return type specified. Now, the restriction is removed, and we can use local returns freely:
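For example (the function is illustrative), a labeled return exits only the lambda, not the enclosing function:

```kotlin
fun countPositives(xs: List<Int>): Int {
    var count = 0
    xs.forEach {
        if (it <= 0) return@forEach  // local return: skips to the next element
        count++
    }
    return count
}
```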
Import Semantics Changed
Importing is one of the least-visible language features for IDE users, but it has a great influence on how tools work, and occasionally on the users, too.
In M11 we made the order of *-imports (also called “on-demand imports”) insignificant, and made some other tweaks that enabled us to implement efficient automatic management of import directives in the IDE.
Reflection
Implementing Kotlin-specific reflection (rather than making you use Java reflection on Kotlin classes) is a long-running project that has required a lot of work in the compiler. Essentially, we have to factor out a large portion of the compiler and ship it as part of the runtime. This includes: loading Kotlin-specific metadata from the binaries, representing Kotlin symbols as objects (historically, we call them descriptors), loading Java declarations as Kotlin ones (because Kotlin reflection should work on Java objects too) and so on.
At last, we present the first results of this work: the ability to introspect properties, provided through a new kotlin-reflect.jar that ships with the compiler (a lot more functionality will be added soon).
The New Reflection Jar
We ship kotlin-reflect.jar separately (not as part of kotlin-runtime.jar), because it is rather big at the moment: about 1.8MB. We will look into reducing its size, but it is likely to always be rather substantial, so making everyone always ship it with their applications is not an option (especially for Android developers).
As a consequence, you may need to add this jar to your classpath if you use property literals (::propertyName). The M11 compiler will yield an error if you don't, but later this requirement will be relaxed. The IDE will offer you a quick-fix action that adds the jar automatically to your project.
Class Literals
To obtain a reflection object for a class in Kotlin, use the following syntax:
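A small sketch (the class is illustrative; richer introspection, such as listing members, additionally requires kotlin-reflect.jar on the classpath):

```kotlin
class MyClass(val name: String)

val kClass = MyClass::class          // the class literal yields a KClass<MyClass>
val className = kClass.simpleName    // basic introspection

// A property literal; calling `get` reads the property from an instance
val prop = MyClass::name
val value = prop.get(MyClass("m11"))
```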
You get an instance of KClass<MyClass>, which you can introspect, e.g. get its properties.
See more in the user docs.
Compatibility with Java Reflection APIs
Kotlin reflection API works both for Kotlin and Java classes, and you can "convert" from Kotlin to Java reflection objects and back. For example, you can say kClass.java and get a java.lang.Class instance, and vice versa: jlClass.kotlin gives you a KClass instance.
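For instance:

```kotlin
// From a Kotlin KClass to java.lang.Class and back again
val jClass = String::class.java      // java.lang.Class<String>
val back = jClass.kotlin             // KClass<String> once more
```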
@Nullable and @NotNull in Java
As always, Java interop is a big priority for us, and this time we are improving on the platform types feature we shipped in M9: now the compiler issues warnings on misuse of Java values annotated as @Nullable and @NotNull. This is not as strict as it used to be before M9, but it doesn't break as often either.
The next step would be to issue Java nullability errors in a safe way (so that an error can always be fixed reasonably), and this is planned for the next milestone.
Android Extensions
Good news for Android users: M11 brings a useful extension that makes Android development in Kotlin easier.
We all know about findViewById(). It is a notorious source of bugs and unpleasant code which is hard to read and support. In Java the way around this problem is through libraries such as ButterKnife and AndroidAnnotations, which rely on JSR 269, but it is a javac-specific API and is not supported in Kotlin (yet).
Since M11, Kotlin has its own solution to the findViewById() problem, which does not require JSR 269: the new kotlin-android-extensions plugin for the Kotlin compiler allows you to access views in a type-safe way with zero extra user code (no annotations or other such things) and no runtime libraries required.
To use this extension, you need to enable it in your Gradle build and install an extension plugin into your IDE. See more here.
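As a sketch only (this cannot run outside an Android project; the layout name activity_main and the view id hello are assumptions), view access with the plugin enabled looks roughly like this:

```kotlin
import android.app.Activity
import android.os.Bundle
import kotlinx.android.synthetic.activity_main.hello  // M11-era synthetic import

class MainActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        hello.text = "Hello, Kotlin!"  // no findViewById() call needed
    }
}
```

The synthetic property is generated from the layout XML, so its type matches the declared view and typos become compile-time errors.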
IntelliJ IDEA Support
More improvements and features for IntelliJ IDEA
Refactorings and Intentions
The following refactorings and intentions are now available:
- Introduce Property
Ability to introduce a property and to define whether we want an initializer, a getter or a lazy property
- Create from Usage with Java Interop
It is now possible to invoke “Create from usage” on Java types being used in Kotlin files.
- Receiver to Parameter Conversion
A special case of Change Signature refactoring, whereby a parameter can be refactored to a receiver, thus allowing converting a function that takes a parameter of type T into an extension function of T. It also allows for the reverse, whereby a receiver can be transformed into a parameter.
- Function to Property
Ability to convert a function to a property and vice versa
- Unused declarations
Unused declaration inspections are now available project-wide, allowing these to be highlighted in any context.
Evaluate Expression
We now have the ability to evaluate lambda expressions and anonymous objects in the debugger.
KDoc Support
We now have a fully fledged language to provide inline documentation. It is called KDoc and based on a combination of JavaDoc and Markdown. The full reference is available online. With M11, IntelliJ IDEA provides support for KDoc via completion, validation and refactoring support.
Standard Library
One of the big changes in the Standard library is renaming streams to sequences. This was done so as to avoid confusion with Java 8 streams.
For example, a lazy sequence builder will generate an infinite lazy sequence of numbers starting at 1. There are also extension functions that convert existing iterables to sequences.
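A sketch in current Kotlin (the lazy builder has since been renamed generateSequence; in M11-era releases it was called sequence):

```kotlin
// An infinite lazy sequence of numbers starting at 1
val naturals = generateSequence(1) { it + 1 }
val firstFive = naturals.take(5).toList()    // evaluation happens only here

// Converting an existing iterable to a sequence
val sortedOnce = listOf(3, 1, 2).asSequence().sorted().toList()
```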
There have also been several bug fixes in the standard library.
In addition, the team contributed some updates to RxKotlin, making it more idiomatic in light of the recent changes in Kotlin.
Last but not least, we have also revamped the standard library API reference, which is now available on the Kotlin Web site.
JavaScript Support
The JavaScript backend now has better support for inline functions. We’ve also made it simpler to reuse libraries targeted at JavaScript through the new concept of “JavaScript binaries”, which will be covered in more detail soon.
Other changes and improvements
In addition to the above, this release also brings:
- Gradle Changes
Kotlin and Java files can now reside in a single source folder
- Bug fixes and compilation speed improvements
Over 220 bug fixes since M10 as well as many external pull requests from the community.
As always, you can update the plugin in IntelliJ IDEA 14 (or earlier versions) if you have a previous version installed, or install directly from JetBrains plug-in repository. You can also download the standalone compiler from the release page.
Great changes for Android Developers! Kotlin Android Extensions and multiple constructors support are so good.
“Gradle Changes
Kotlin and Java files can now reside in a single source folder”
… so now better practice is to put them under src/java or src/kotlin folder?
Wow! Big release here! Great work guys!
The plugin for 14.0.3 is not yet out though.
Thank you!
It is, check it out:
They prepared 4 versions of plugin
EDIT: ok, it’s for 139.1408+ ;/
Should work now. Check please
Yup see it now, thanks a lot!
Great work. Thanks.
It seems the size of the Kotlin reflection jar is too large. I have a question: in which situations should we use Kotlin reflection instead of Java reflection?
Java reflection is less precise than Kotlin reflection: it does not know about properties, for example. But if it works OK for you, you can stick to it
Dex method count for kotlin reflection and dependencies: 5313 for com.google.protobuf and 7387 for kotlin reflect. 12.5k is a bit too much for android if you don’t want to use multidex or proguard(they both increase build time by an order)
Fair point, we will look into it in the future versions, for now you have an option not to include reflection at all.
My code depends on the :: function reference syntax, which is now moved to kotlin-reflect.jar. If I want to use :: only, and don't want to depend on kotlin-reflect.jar, what should I do?
You can use callable references as pure functions without reflection freely. The IDE inspection gives a false warning at the moment, this will be fixed (see KT-7059)
Wow! Awesome! You did a great job!
So it is the companion in the end. Nice!
The main question for me, is when the language is going to be frozen? I mean, knowing the release plans would be better… =) But at least, what are the plans for feature freezing?
We are finalizing the design now, hence the deprecations etc. Should be 1-2 milestones more, but I can’t be sure.
Reading the primary/secondary/init constructors in the docs, it all reads like a bit of a mess. Particularly, the “class Customer private () { … }” is ugly.
It would be much more consistent to just have one syntactic approach, based on the secondary one (and Java). Take constructors out of the class definition and treat them as proper methods (which they should be). Brevity, as in a method embedded within the class header, is losing clarity.
Using “new” as the keyword instead of “constructor” would be nicer too.
Constructors in the class definition are the quintessence of Kotlin (e.g. data classes), and the code is much clearer.
Agree that “constructor” is a bit too long, maybe “construct” or “constr” acronym would be better.
I thought ‘ctor’ is the standard acronym for the constructor.
There is no standard, ctor was only invented in c#.
Really? But it is short and clear any way =)
+1 for the separation of class and its constructor(s), although with the changes in M11 they can, optionally, be separated.
I would also vote for “new” or “make” instead of “constructor”. both are shorter and nicer.
kotlin-android-extensions doesn’t work for me. it doesn’t generate the files.
It shouldn't generate any files; you can simply call activity.textView if your layout XML file has such a control.
I’ve got
classpath “org.jetbrains.kotlin:kotlin-android-extensions:$kotlin_version”
in the right place. However, when I’m trying to import
import kotlinx.android.synthetic.activity_main.*
It marks kotlinx in red, and doesn’t compile (“Error:(12, 8) Unresolved reference: kotlinx”)
Forgot to say, latest android studio beta (AI-141.1793788)
Plugin, oh
Do you have the IDE plugin installed?
Now I have everything set. the ide doesn’t show me errors, but compile does. “Error:(8, 8) Unresolved reference: kotlinx”
What build process are you using? Gradle? Or IntelliJ’s built-in make?
Gradle. (As I said, Android studio)
See the 2 files here:
Please update to the latest kotlin-android-extensions plugin version (0.11.91.2). Hope it would solve the problem.
I have the same issue.
In my case, “Kotlin Android Extensions” plugin does not appear in the plugin repository browsing window.
I use IntelliJ IDEA 14.0.3 CE on OS X Yosemite 10.10.3.
Plugin is currently available for Android Studio and IntelliJ IDEA 14.1.
imports are ok for me, but plugin can’t parse layouts. Started discussion on kotlin forums:
Doesn’t work for me either. I have the plugin installed as well. No lint errors, but the actual compilation (using gradle-aware make in the IDE or gradle wrapper from the command line) results in this:
:palettehelper:compileDebugKotlin
e: /Users/hsweers/dev/android/PaletteHelper/palettehelper/src/main/kotlin/io/sweers/palettehelper/PaletteDetailActivity.kt: (40, 8): Unresolved reference: kotlinx
e: /Users/hsweers/dev/android/PaletteHelper/palettehelper/src/main/kotlin/io/sweers/palettehelper/PaletteDetailActivity.kt: (64, 9): Unresolved reference: toolbar
e: /Users/hsweers/dev/android/PaletteHelper/palettehelper/src/main/kotlin/io/sweers/palettehelper/PaletteDetailActivity.kt: (65, 9): Unresolved reference: toolbar
e: /Users/hsweers/dev/android/PaletteHelper/palettehelper/src/main/kotlin/io/sweers/palettehelper/PaletteDetailActivity.kt: (109, 62): Unresolved reference: image_view
e: /Users/hsweers/dev/android/PaletteHelper/palettehelper/src/main/kotlin/io/sweers/palettehelper/PaletteDetailActivity.kt: (219, 13): Unresolved reference: grid_view
e: /Users/hsweers/dev/android/PaletteHelper/palettehelper/src/main/kotlin/io/sweers/palettehelper/PaletteDetailActivity.kt: (220, 13): Unresolved reference: grid_view
FAILED
Just an update, I got it working by making sure the plugin was installed and having the buildscript closure in the same build.gradle file as my dependencies
Thanx for the hint, with that help i also managed to run it. Just for documentation here are my actual running gradle files
Hi,I encountered the same problem, would you solve it? Thanks.
I had this problem too, and the solution is:
[build.gradle]
buildscript –> dependencies –> add the line below:
classpath “org.jetbrains.kotlin:kotlin-android-extensions:$kotlin_version”
This did the trick, thank you!
Thank you very much, everything looks wonderful!
Really looking forward to be able to use that pair.filter {(a,b) -> …} stuff.
Also I miss from time to time a support for type aliases (helps when dealing with lots of hierarchical and templated code). You once said that it is on your list, but with minor priority, I hope this will not fall off the radar
I am sorry to raise the discussion again, but I cannot pass by.
So finally the syntax is:
And now I as a user wonder how should I add an extension to the ‘companion object’
So the only proper way to do things would be to allow referencing to the companion object with the default name
Consumer.Companion
Maybe you already considered that and it will be in M12. Maybe you disagree with the argument and I should post a bug ticket for further discussion.
And so the #Kotlin puzzle began!
There are no magic names here. The compiler does not care whether an object is named Companion or not.
I can’t see the problem here: you have a named object, its name is HeyObject, why do you want to refer to it by some other name (Companion)?
Because for me it is an object, related to class Consumer, and that’s all I care about.
Something tells me that whatever we do about it, named companion will become an anti pattern for Kotlin
jackson-kotlin module will update soon, need to take a look at the constructor changes, reflection api and more. Weekend-project.
For Kotlin Jackson module and M11
com.fasterxml.jackson.module:jackson-module-kotlin:2.5.1.1.KotlinM11
Great Job. Awesome features.
Pingback: Yested updated to Kotlin M11 | Yested Framework
Pingback: 1p – Kotlin M11 is Out | Profit Goals
Pingback: 1p – Kotlin M11 is Out – Exploding Ads
Pingback: Kotlin M11 发布,基于 JVM 的编程语言 | 技术迷
Are you going to have formal Video tutorial for Android developement?
Such great work!!
I am having one issue. A good chance I’m doing it wrong, but synthetic properties are doing unexpected things for me
import kotlinx.android.synthetic..*
Seems like if you have multiple instances of the same fragment on the activity, in my case a ViewPager with FragmentStatePagerAdapter, then something is awry. Almost seems like there are not multiple instances of my views in the activity, only the last Fragment created seems to be able to change its views state.
I haven’t looked at the source, but it almost seems like there is not a hidden caching function generated for each fragment.
Thank you so much.
Pingback: Kotlin M11 Lands: Could this JVM Newbie Become Your Favourite Android Language? | Voxxed
Pingback: JVM-Sprache Kotlin erreicht 11. Meilenstein - jaxenter.de
Pingback: Google Places API for Android and iOS, Android Wear device locator, and Atlassian’s new Snippets feature—SD Times news digest: March 20, 2015 - SD Times
Great work Kotlin team! We are excited to experiment with M11
Thanks for M11 (secondary constructors means less java and more kotlin :).
And many thanks for the painless transition.
Great job !
What are the major features that you want to put in kotlin for 1.0 ?
Will there be some annotation processing support ?
Any ETA for incremental compilation support in Android ?
Having to recompile the whole app for a single line change is a huge waste of productive time…
We’ll do our best, but can’t talk about ETAs yet.
First off, congrats on the release. I’m a very new user, but liking the language so far, with a few annoyances (mostly related to Java interop).
I don’t know if I’m missing the obvious, but since the switch to sequences, all the higher order functions I’ve used on iterable Java types (map, fold etc,) are flooding me with deprecation warnings, and I can’t see an easy way to convert them from Iterator to Sequence. I can chain a sequence() call to the Iterator to*() extension methods, but they are themselves deprecated. Do I just ignore these warnings when dealing with Java iterables?
Thanks for Kotlin!
Could you please create an issue () and provide some sample code you are using? Without a code it’s hard to answer your question.
We love Kotlin and have been using it since M4 on the server side, great to see the language progressing. We use Dynamic Transforms and Dependency Injection quite a bit and having a simple class literal is key for us. The MyClass::class syntax is great but I am surprised it returns a kotlin.reflect.KClass object and not a java.lang.Class object. Can I suggest something like
MyClass::java -> java.lang.Class
MyClass::kotlin -> kotlin.reflect.KClass
OR
MyClass::class -> java.lang.Class
MyClass::klass -> kotlin.reflect.KClass
Right now I know I can do
MyClass::class.javaClass
Finally if I just used the java class literal in kotlin maybe it would not require the reflection package. (not a problem on the serverside, but for android devs)
This is a language feature and the language semantics should not depend on the particular platform. Also Java reflection doesn’t suit our needs because everything is different in Java, including the type system and various kinds of symbols that need to be reflected (Java has methods/fields vs Kotlin’s functions/properties, for example).
Could you share some use cases you have for class literals: do you use them mostly as annotation arguments, or maybe mostly as arguments to a few particular API methods?
Java
diSession.getInstance(MyClass.class)
//-> returns an instance of MyClass constructed using Dependency Injection
transformer.transform(input,MyClass.class)
//-> transforms the input object into MyClass if it can
Kotlin
diSession.getInstance(MyClass::class.javaClass)
diSession.getInstance(javaClass())
...
Thank you
Off the top of my head, I’d suggest to create extensions on diSession and transformer that take KClass and transform it to java.lang.Class internally.
The result would be:
BTW, I strongly suspect that MyClass::class.javaClass is a bug, because it always returns java.lang.Class<KClass>, and what you meant is MyClass::class.java
You are right, MyClass::class.javaClass is a mistake. It is too easy to make a mistake like that, and another reason I feel Kotlin needs to include clean Java class literals. Yes, I know I can also use reified generics, but this requires me to augment all of the places classes are passed with extension functions.
Another reason for Java Class Literals in kotlin is to skip the overhead of creating a KClass just to get a reference to the Java Class Literal.
This will be optimized by the compiler
I’m eagerly awaiting to hear more about JavaScript binaries!
I hope something to read about Kotlin/Javascript libraries will be ready in near future (beginning of the next week).
Go is coming to android. Go has coroutines. Kotlin is the only reason why I would code for Android. I hope that Kotlin will have async-await in near future.
It’s so great that the secondary constructor was implemented! Thank you for adding this feature, and now I can choose Kotlin instead of Scala.
I love Kotlin because it’s truly simple and has tiny runtime.
Is there a way to not create auto property on constructor? Because at whatever constructor the parameters will always be auto properties.
Only parameters marked as val or var become properties.
Pingback: How I wrote my first iOS app while learning Kotlin for Android Development | Honey, Ice & Jelly
What is the equivalent of Java 8’s :: for passing a method as an argument in Kotlin? Currently I can only use a lambda to overcome this.
Pingback: Roadmap de #kotlin | Kotlin.es
Details
- Type:
Bug
- Status:
Closed
- Priority:
Major
- Resolution: Fixed
- Affects Version/s: 3.0.1
- Component/s: MMTk: GenMS
- Labels:None
- Environment:Linux IA32 in host and target with production config (GenMS GC). The machine is a bi xeon with 2GB of RAM with a 32 bit Debian GNU/Linux
- Number of attachments :
Description
When allocating a 200 MB array, an OOM is thrown. Here is code which leads to an OOM:
package lip6.jikesrvm.bench;
public class Test
{
public static final int NB_ELEM = 200 * 1024 * 1024 / 4;
private static int[] tab;
public static void main(String args[]){ tab = new int[NB_ELEM]; }
};
The program is run with rvm -cp barrier/bin -Xms1G -Xmx1.5G -X:processors=all lip6.jikesrvm.bench.Test
This runs perfectly with the Sun JVM or with NB_ELEM set to 20 * 1024 * 1024
Issue Links
Activity
I did a few tests and it seems that the bug appears at precisely 200 MB of memory asked. There is no problem with a 199 MB array allocation. It sounds like a hard-coded limit in Jikes to me, but maybe it is just a coincidence.
I have found that it works fine if you use a full heap collector (Immix or MarkSweep) and explicitly set the minimum heap size. Seems that there is at least one bug in the heap growth manager and its reporting of current heap size.
OK, so there's also some kind of a bug in the discontiguous space allocation. For some reason it is failing to find large chunks of discontiguous space which are in fact available. Not a problem for full heap GC, which is interesting. Investigating further.
Thomas,
Your immediate problem should have been fixed by r15632. Please follow up if there is still a problem.
The problem was that in the generational collector, the nursery is by default limited to a fixed amount of virtual memory. If an object does not fit in this space, allocation fails with an out of memory error. I have fixed things now so that any such allocation will go to the large object space.
I'm not closing this yet, because I still have a handful of followups:
1. The heapgrowth manager needs to be fixed, as per my previous note.
2. We probably should re-explore discontiguous nurseries (it is all there, just a matter of flipping a boolean)
3. We need to update the MMTk tutorial now
4. We should probably change the failure mode for out-of-address-space failures
I've just tested r15632 and it works indeed perfectly well.
Thanks for the fix, it was very quick.
I just updated the tutorial and Daniel has a bunch of refactoring in-flight which among other things changes the handling of OOMs.
Moving all unscheduled issues to 3.1. Please close or retarget to a different fix target as appropriate.
Each of the three residual issues arising from this entry are subsumed by other issues. The specific problem that lead to this JIRA was fixed, as noted above.
My guess would be that this indicates that MMTk has broken up the virtual address space such that it is unable to find 200Mb of free contiguous space to satisfy the request. | http://jira.codehaus.org/browse/RVM-808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=171325 | CC-MAIN-2014-42 | refinedweb | 537 | 71.65 |
I'm making a sort of Android lock thing in Kivy, and to draw the line, I need to get the id of the widget the mouse is on, so I assign an id to each one like this in the .kv file:
ClickableImage:
id: one
source: 'button.png'
etc.
and I know I can get all the ids (I have 9, of course), with the
self.parent.ids.id
or
self.parent.ids['id']
but is there a way to get the ID the mouse is in? or the one I click? I have a hoverable class so it detects when it enters in a Widget, but I don't really know how to get its position, or change its source.
Is there any:
self.parten.ids.current
or something like that?
thanks for the help
You can use collide_widget or collide_point, and in the widget set a method that will change a variable in the parent (let's say selected_widget) to the current widget, like this:
if self.collide_point(*Window.mouse_pos):
self.parent.selected_widget = self # or its id
Then you can do anything with it. Maybe it'd be even better to put your logic into the widget itself and handle collision directly there. Obviously you'll need to bind a method you create with that if block above to an event such as on_release or on_press to run the method, otherwise it won't do a thing.
You can also get a hoverable behavior from this PR or even from this snippet.
Edit:
Please note that the id will not be available in the widget instance
Which means self.ids.my_id.id == None, and therefore to actually get the id you need to do this:
def find(self, parent, widget):
    for id, obj in parent.ids.items():
        if obj == widget:
            print id
            return id
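The collision-based lookup described above can be sketched without Kivy at all. In the sketch below, the Widget class, find_id helper, ids dict, and the 3×3 grid coordinates are all stand-ins invented for illustration — not the real Kivy classes — but collide_point performs the same bounding-box test that Kivy widgets provide.

```python
# Minimal stand-in for a Kivy widget: a position, a size, and the
# same collide_point(x, y) rectangle test Kivy provides.
class Widget:
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height

    def collide_point(self, px, py):
        # True if (px, py) falls inside this widget's bounding box.
        return (self.x <= px <= self.x + self.width and
                self.y <= py <= self.y + self.height)


def find_id(ids, mouse_pos):
    """Return the id of the widget the mouse is over, or None."""
    mx, my = mouse_pos
    for widget_id, widget in ids.items():
        if widget.collide_point(mx, my):
            return widget_id
    return None


# A 3x3 grid of 100x100 widgets, like the lock-screen layout.
ids = {}
names = ['one', 'two', 'three', 'four', 'five',
         'six', 'seven', 'eight', 'nine']
for i, name in enumerate(names):
    ids[name] = Widget(x=(i % 3) * 100, y=(i // 3) * 100,
                       width=100, height=100)

print(find_id(ids, (150, 50)))   # point inside the second widget
print(find_id(ids, (950, 950)))  # point outside every widget
```

In real Kivy code the loop body would run inside an event handler (e.g. bound to mouse_pos or on_touch_down) rather than being called directly.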
Introduction to Methods in Java
A method in Java can be defined as a set of logical Java statements written in order to perform a specific task. Methods provide a way to reuse code without writing it again. In Java, any method must be part of a class, which is different from Python, C, and C++; methods cannot exist outside a class.
Components for Creating Java Methods
Here is the list of components involved while creating java methods:
- Access Modifier: In java, there exist four different types of access modifiers:
- Public: Methods declared as public are accessible from all classes within an application.
- Protected: Methods declared as protected are accessible from the class within which it is defined and all subclasses of that class.
- Private: Methods declared as private are only accessible from the class within which it is defined.
- Default: Methods declared as default are accessible from the class within which it is defined and from classes declared within the same package as the class enclosing the method.
- Return Type: This contains the data type of the value the method is supposed to return, or void if the method does not return anything.
- Method Name: This is the name assigned to the method, which may or may not be unique. Method names should be verbs, and the words used should follow camel-case notation.
- Parameters: This includes a list of input parameters separated by commas with their data types. If the method does not require any input parameters then () is used.
- Exceptions: In case a method may throw one or more exceptions, we can list exceptions separated by commas.
- Method Body: It is the programming content enclosed between braces. The method body contains one or more logical statements for executing a particular task.
Syntax:
Here is a basic syntax of methods:
//declare Enclosing class
public class Myclass{
//declare java method
public String concat(String s1, String s2){
// combine two strings with space
String s3= s1 + " " + s2 ;
//return resulting string
return s3;
}
}
Types of Methods in Java
Methods can be categorized in the following two types:
- Built-in Methods: These methods are available in the Java library and do not need to be created by a developer. For example, the max() method present in the Math class in Java.
- User-defined Methods: These methods are explicitly defined by a developer in java classes.
Calling a Java Method
When a method is called by a calling program, control goes into the method body. After control enters the method body, it returns to the calling program under any of the following three conditions:
- All statements written inside the method body are executed successfully.
- Any return statement is encountered.
- An Exception is thrown.
Static methods are called using the class name, and non-static methods are called using an object instance.
Example #1
Now we will see Java code examples showing how methods are declared and called. In this example, we will see how to create a static method and how it is called.
Code:
package com.edubca.methods;
public class MethodDemo{
public static int getMaximum(int a , int b){
if(a>b){
return a;
}else {
return b;
}
}
public static void main (String args[]){
int maxvalue1 = getMaximum(10,23);
System.out.println("Out of 10 and 23, " + maxvalue1 + " is greater" );
int maxvalue2= getMaximum(40,20);
System.out.println("Out of 40 and 20, " + maxvalue2 + " is greater" );
}
}
Output:
Out of 10 and 23, 23 is greater
Out of 40 and 20, 40 is greater
Example #2
In the next example, we will see how to call non-static methods.
Code:
package com.edubca.methods;
public class MethodDemo{
public int getMinimum(int a , int b){
if(a<b){
return a;
}else {
return b;
}
}
public static void main (String args[]){
MethodDemo demo =new MethodDemo();
int minvalue1 = demo.getMinimum(10,23);
System.out.println("Out of 10 and 23, " + minvalue1 + " is smaller" );
int minvalue2= demo.getMinimum(40,20);
System.out.println("Out of 40 and 20, " + minvalue2 + " is smaller" );
}
}
As we can see above, an instance of the enclosing class is required to call a non-static method. The above code will produce the following output:
Output:
Out of 10 and 23, 10 is smaller
Out of 40 and 20, 20 is smaller
Example #3
In the next example, we see how to create methods that throw exceptions.
Code:
package com.edubca.methods;
import java.io.*;
public class MethodDemo{
public void mymethod() throws IOException{
throw new IOException("IO Exception occurred...");
}
public static void main (String args[]){
MethodDemo demo =new MethodDemo();
try{
demo.mymethod();
}catch(Exception e){
e.printStackTrace();
}
}
}
As we can see from the above code, whenever a method throws an exception, the caller of the method must handle it using try-catch or any other suitable error-handling mechanism. The above code shows the below output on screen:
Output:
Conclusion
From the above article, we have a clear idea about methods in Java. With the help of methods we can accomplish any task, and using methods makes our code reusable and easy to test, understand, and debug.
Recommended Articles
This is a guide to Methods in Java. Here we discuss the types of methods and the list of components involved while creating Java methods, along with examples and their code implementation.
#include <wx/dcgraph.h>
wxGCDC is a device context that draws on a wxGraphicsContext.
wxGCDC does its best to implement wxDC API, but the following features are not (fully) implemented because wxGraphicsContext doesn't support them:
Only the wxCOPY, wxOR, wxNO_OP, wxCLEAR and wxXOR logical functions are supported; attempts to use any other function (including wxINVERT) don't do anything.
Constructs a wxGCDC from a wxWindowDC.
Constructs a wxGCDC from a wxMemoryDC.
Constructs a wxGCDC from a wxPrinterDC.
Construct a wxGCDC from an existing graphics context.
Note that this object takes ownership of context and will delete it when it is destroyed or when SetGraphicsContext() is called with a different context object.
Also notice that context will continue using the same font, pen and brush as before until SetFont(), SetPen() or SetBrush() is explicitly called to change them. This means that the code can use this wxDC-derived object to work using pens and brushes with alpha component, for example (which normally isn't supported by wxDC API), but it also means that the return values of GetFont(), GetPen() and GetBrush() won't really correspond to the actually used objects because they simply can't represent them anyhow. If you wish to avoid such discrepancy, you need to call the setter methods to bring wxDC and wxGraphicsContext font, pen and brush in sync with each other.
Retrieves associated wxGraphicsContext.
Set the graphics context to be used for this wxGCDC.
Note that this object takes ownership of context and will delete it when it is destroyed or when SetGraphicsContext() is called again.
Also, unlike the constructor taking wxGraphicsContext, this method will reapply the current font, pen and brush, so that this object continues to use them, if they had been changed before (which is never the case when constructing wxGCDC directly from wxGraphicsContext). | https://docs.wxwidgets.org/3.1.5/classwx_g_c_d_c.html | CC-MAIN-2021-31 | refinedweb | 295 | 50.67 |
Christoph,

You wrote:

> imo PCI_DMA_BUS_IS_PHYS should be a property of each struct device
> because a machine might have a iommu for one bus type but not
> another, e.g.
> dma_is_phys(dev);

As pointed out by DaveM, this isn't sufficient for the block layer, which needs to know the page size of the I/O MMU so it can make merging decisions about physically discontiguous buffers.

I think we also need:

/*
 * Returns a mask of bits which need to be 0 in order for
 * the DMA-mapping interface to be able to remap a buffer.
 * DMA-mapping implementations for real (hardware) I/O MMUs
 * will want to return (iommu_page_size - 1) here, if they
 * support such remapping. DMA-mapping implementations which
 * do not support remapping must return a mask of all 1s.
 */
unsigned long dma_merge_mask(dev)

Then you can replace:

BIO_VMERGE_BOUNDARY => (dma_merge_mask(dev) + 1)

Of course, this doesn't work literally, because we don't have a device pointer handy in the bio code. Instead, it would probably make the most sense to add an "iommu_merge_mask" member to "struct request_queue" and then do something along the lines of:

#define BIO_VMERGE_BOUNDARY(q) ((q)->iommu_merge_mask + 1)

Note 1: the "+ 1" will get optimized away because the only way BIO_VMERGE_BOUNDARY() is used is in BIOVEC_VIRT_MERGEABLE, which really needs a mask anyhow; this could be cleaned up of course, but that's a separate issue.

Note 2: dma_merge_mask() cannot be used to replace dma_is_phys() (as much as I'd like that), because the issue of (virtual) remapping is really quite distinct from whether a (hardware) I/O MMU is present (not to mention the _other_ reason that a bus may not be "physical").

Note 3: I'm not comfortable hacking the bio code, so if someone would like to prototype this, by all means go ahead... ;-)

Thanks,

--david
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
Many frequentist methods for hypothesis testing roughly involve the following steps:
Here, we flip a coin $n$ times and we observe $h$ heads. We want to know whether the coin is fair (null hypothesis). This example is extremely simple yet quite good for pedagogical purposes. Besides, it is the basis of many more complex methods.
We denote by $\mathcal B(q)$ the Bernoulli distribution with unknown parameter $q$. A Bernoulli variable is equal to 1 (head) with probability $q$, and to 0 (tail) with probability $1-q$.
import numpy as np
import scipy.stats as st
import scipy.special as sp
n = 100 # number of coin flips h = 61 # number of heads q = .5 # null-hypothesis of fair coin
Let's compute the z-score (xbar is the estimated average of the distribution). We will explain this formula in the next section, How it works...
xbar = float(h)/n
z = (xbar - q) * np.sqrt(n / (q*(1-q))); z
pval = 2 * (1 - st.norm.cdf(z)); pval
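Putting the cells above together, here is a self-contained sketch of the whole test. To keep it dependency-free, st.norm.cdf is replaced with the equivalent math.erf formula — an implementation choice of this sketch, not what the recipe itself uses:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function:
    # Phi(x) = (1 + erf(x / sqrt(2))) / 2, equivalent to st.norm.cdf(x).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

n = 100   # number of coin flips
h = 61    # number of heads
q = 0.5   # null hypothesis: fair coin

xbar = float(h) / n                            # estimated proportion of heads
z = (xbar - q) * math.sqrt(n / (q * (1 - q)))  # z-score
pval = 2 * (1 - norm_cdf(z))                   # two-sided p-value

print(z)     # ≈ 2.2
print(pval)  # ≈ 0.0278: below 0.05, so the fair-coin hypothesis is rejected
```

With 61 heads out of 100 flips, the z-score is about 2.2 and the two-sided p-value is about 0.028, matching the cells above.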
You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).
IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages). | http://nbviewer.jupyter.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter07_stats/02_z_test.ipynb | CC-MAIN-2018-13 | refinedweb | 198 | 67.55 |
RationalWiki:Saloon bar/Archive88
Contents
- 1 Not sure what to make of this
- 2 Happy new year, and a question
- 3 LQT on user talk pages
- 4 Just in
- 5 Sidebar templates
- 6 If you had to pick a religion to follow which would it be?
- 7 I weep for the USA
- 8 (ex)MP Jailed
- 9 Dr Zoe D. Katz PhD
- 10 WIGO RSS feeds?
- 11 The Enemy Within
- 12 Loughner
- 13 Google Scribe
- 14 Cracked's 6 Crackpot Conspiracy Theories (That Actually Happened)
- 15 Mongolian Crystal Skulls
- 16 Tibia ME
- 17 Touching kids
- 18 Has Sarah Palin been wronged?
- 19 Eeny... meeney... miney...
- 20 WND polls
- 21 Jared Loughner vs. Nidal Hassan
- 22 Telemarketers
- 23 Them darn Muslims
- 24 What article should this be stuck in?
- 25 Anagrams
- 26 Hah
- 27 Heh, heh, heh
- 28 Watson
- 29 On extremist rhetoric in general
- 30 Will this world survive?
- 31 Banning Funeral Protests
- 32 WTF?!?
- 33 What the hell has happened?
- 34 Floods
- 35 Wikipedia vs Britannica
- 36 Many kinds of awesome
- 37 An interesting read:
- 38 Ship of Spies
- 39 Laptop help please...
- 40 Fox News vs...Pedobear?
Not sure what to make of this[edit]
US DOJ are subpoenaing twitter messages. Big Brother? Him (talk) 10:12, 8 January 2011 (UTC)
- I have to confess to some ignorance here, since I don't use Twitter and think it's for morons, but aren't Twitter messages public (or at least somewhat public)? I sort of thought that was the point? Also, subpoenaing something is much better than reading messages without a warrant. DickTurpis (talk) 21:23, 8 January 2011 (UTC)
Happy new year, and a question[edit]
Hey there. Happy new year! Could be that I'm a bit dense, but has the ability to move pages been removed? I can't see if anymore. Concernedresident omg!!! ponies!!! 16:20, 8 January 2011 (UTC)
- Arse biscuits! Ignore that question, but the happy new year thing still counts. Concernedresident omg!!! ponies!!! 16:22, 8 January 2011 (UTC)
- You see. This is why we don't allow the peons to take holidays. Takes forever to retrain the bastards. And a very happy New Year to you too, WorriedDweller person. --Ψ GremlinParla! 16:25, 8 January 2011 (UTC)
- Cheers Gremlin. Any big changes I should look out for? I'm assuming that my wikibreak was long enough for the abortion debates to end, but I'm scared to check. Concernedresident omg!!! ponies!!! 16:29, 8 January 2011 (UTC)
- Besides everything useful now being under the "page" tab and the advent of sloshy threads (which are inhumane, against the natural order of things, and quite possibly fattening too) everything else is pretty much as is. I think the abortion debate is over. --Ψ GremlinPraat! 16:34, 8 January 2011 (UTC)
- The abortion debate is never over. Who is debating is the only factor which changes. WeaseloidMethinks it is a Weasel 20:12, 8 January 2011 (UTC)
LQT on user talk pages[edit]
I seem to recall that user talk pages are community property, not user property, and as such using LQT on them should require community consensus. Do we actually have this, or is it just going to happen because some prominent members espouse it? Bastard wisest Phantom! 15:17, 4 January 2011 (UTC)
- As someone with LQT on their talkpage, I don't see what the harm is in letting users who prefer LQT have it on their talkpage. To require a community vote for someone to do this on their own talkpage is, in my opinion, a laughable waste of time. Lord of the Goons The official spikey-haired skeptical punk 15:24, 4 January 2011 (UTC)
- I suggest that the suggestion is more like we should have community wide agreement that users can have it on their talk pages if they want to. Such an agreement might have been a good idea but now it seems to be a fait accompli.--BobSpring is sprung! 15:29, 4 January 2011 (UTC)--BobSpring is sprung! 15:29, 4 January 2011 (UTC)
- (EC, which wouldn't happen in LQT) What Goonie said. postate 15:31, 4 January 2011 (UTC)
- So we have a carte blanche to use whatever interface we want on our talk pages? Bastard wisest Phantom! 15:32, 4 January 2011 (UTC)
- Insofar as I'm concerned, as long as people can still figure out how to post (and are still able to post) on your talkpage, go for it. Whatever. Gooniepunk2010 Oi! Oi! Oi! 15:34, 4 January 2011 (UTC)
- Well, that is definitely something for which community agreement ought to have been sought. Still, I suppose that would just have allowed the ignorant, Luddite masses to retard the course of progress, so it's for the best. Bastard wisest Phantom! 15:37, 4 January 2011 (UTC)
- I agree that it would have been better to discuss it.--BobSpring is sprung! 15:39, 4 January 2011 (UTC)
- We have never once issued dictates on the layout of user talk pages, and the reason the talk pages are community property is because of the nature of the content. The spirit of the rule was to protect from unilateral deletion of content without archival. Many people have many different choices in layout and presentation on their talk pages and we have never standardized it or demanded standardization. Things like floating boxes, defaultsort names, alterations in font style and size, obtuse layouts, etc. are all far more intrusive than lqt and we have long had carte blanche on all that. I think the site enforcing layout strictures on a user talk page would be the change in policy and be what required discussion. Tmtoulouse (talk) 15:44, 4 January 2011 (UTC)
- I think it would have been discussion solely for the sake of discussion. The purpose of a user talk page is to allow people to post messages for that user, and LQT in no way prevents that from happening. –SuspectedReplicant retire me 9:44 am, Today (UTC−6)
- Too right discussion would be for the sake of discussion. This is just clutching at straws because some people don't like liquid threads. Big deal. Get over it already. If you demand that talk pages are "community property" and then are dictating what people cannot put on them, you're equally guilty of what you're accusing others of. There are no rules and guidelines on RW that can't be broken anyway, I can't see where the problem is. postate 15:48, 4 January 2011 (UTC)
- You may be getting carried away with the mind-reading there Armondikov.--BobSpring is sprung! 15:56, 4 January 2011 (UTC)
- (EC, sigh) Unless I'm mistaken, the current situation is, "Have LQT if you want, have the other system if you want." I can't, honestly, see any issue with that. For it to be a problem, somebody would have to actually say, "I want your talkpage to be like X." That, as Trent said, would be the policy change requiring discussion. EXTERMINATE 15:54, 4 January 2011 (UTC)
- Yea for everyone doing what they want! I don't want vendor lock-in (can't easily copy conversations to other wikis) and conveniently my comments will appear at the tops of talkpages that use LQT when I'm not using it. <snark snark> ~ Lumenos (talk) 04:29, 5 January 2011 (UTC)
- Hmm, that's something else that needs improvement, I think, before it's applied to more than a few isolated nutjobs' talk pages. Also citing difflinks. I just tried on ADK's page, and it's a bit baffling to non-technical me. But then, alpha software, etc., etc. --Kels (talk) 05:03, 5 January 2011 (UTC)
- What needs improvement? Copying conversations? I don't think there's a need to copy a conversation (moving is possible). I could write an extension or even a userscript that exports an lqt page as a traditional talk page, that wouldn't be too hard, but I don't think that would be a good use of my time (you can try to convince me otherwise by presenting valid use cases for copying a whole talk page). As for citing difflinks, that's just a side effect of how wiki talk pages work, you don't cite difflinks in forums either. What you want is Link to in the More menu, but if the comment has been edited, you can use Fossil record from the more menu. See, best of both worlds. -- Nx / talk 06:41, 5 January 2011 (UTC)
- I'm sure what you say it true Nx, but I can't help wondering how many people have not contributed to liquid threads conversations because they felt intimidated by them.--BobSpring is sprung! 07:13, 5 January 2011 (UTC)
- I don't know. What I do know is that many times I gave up commenting, even after writing a lengthy reply, because I didn't want to fight edit conflicts. And seeing a huge block of text, or two huge blocks of text in case you got an edit conflict, is far more intimidating IMHO. -- Nx / talk 07:30, 5 January 2011 (UTC)
- A good point too. I didn't know that they had that advantage.--BobSpring is sprung! 07:32, 5 January 2011 (UTC)
- I don't like lqt: the way they can fill up RC even if you've got the javascript thing engaged: you can't hover on a diff to see what's been said (you have to go to the page): If you've missed the earlier part of a discussion it's a total pain tracking it through a full page of boxes (after working out which word to click on to see the thread). Change for the sake of change? Honestly, how often do you get EC'd? (Got EC'd while doing this!!!!) Him (talk) 07:38, 5 January 2011 (UTC)
- I think you just answered your question there. Anyway, I'm going to make some changes to integrate it better with RC. As for tracking through a full page of boxes... I don't see how that's worse than the current system (and archiving/moving conversations can make it even more annoying), but thanks to Special:Newmessages, it's easier to find new replies (provided you're watching the thread or the page). -- Nx / talk 08:06, 5 January 2011 (UTC)
LQT: plus: no edit conflicts. People who can't click "back", "copy", "edit", "[EC] paste" need this tool desperately.
LQT: minus: not like the rest of the wiki. Are difflinks supported? Is the point to allow some to use them on their talkpages, or is the goal to implement this crappy format across all our talk pages? To me, that is the real question. I don't care if people's talk pages are a bit strange - or a lot strange - but if the goal is a sitewide change eventually, we need to discuss it at length. ħuman 07:48, 5 January 2011 (UTC)
- Yes, those were my thoughts too. If people want to use it on their talk pages so be it - though there will be some people who consequently won't comment on them. But more importantly - are we testing this for full implementation across the wiki. If so then we need to talk about it.--BobSpring is sprung! 07:56, 5 January 2011 (UTC)
- The goal is to, eventually, implement this superior format across the wiki, in particular in places that would hugely benefit from it, such as the Saloon bar and TWIGO:CP. I am willing to listen to complaints and I will try to address them as best as I can, but I realize that there will be people who'll just complain and bitch about how the wiki has now become some kind of playground for nerds or leave to make their own wiki or whatever, regardless of what I do (short of undoing the whole thing). C'est la vie. If you're lucky (and persistent) enough though, you may be able to get me to LANCB and you'll be spared of the horrors that is LQT. -- Nx / talk 08:06, 5 January 2011 (UTC)
- Quite frankly, I see no benefits from lqt at all, so LANCB all you want. Him (talk) 08:29, 5 January 2011 (UTC)
- Noooooooooooo! Don't you leave me with these madmen! I think we need an article explaining LQT (RationalWiki:Liquid threads?). Many of the same comments are coming up. Then we can say RTFM! I agree it is better in many ways (especially if you continue to improve it) but your Technocracy is clearly designed to keep a brotha down (unless I can figure out how to work it). Where would I go to learn about making the userscript to convert back to regular talkpage? ~ Lumenos (talk) 08:59, 5 January 2011 (UTC)
- I really do feel that we should have some sort of discussion about the merits or otherwise of the apparently superior technology before it is implemented. I don't think that I'm known for "bitching" about changes but I do feel that in this case we need to talk about it. I fully acknowledge that you put in a hell of a lot of good work Nx and produce real miracles.--BobSpring is sprung! 09:10, 5 January 2011 (UTC)
- I was replying to Human, not you, sorry for being rude, but I'm tired of his whining. I'm happy to see that a few users have started using LQT as hopefully that will result in more feedback. I really do appreciate it if you come to me with specific problems you have and I will try to fix them or at least come to some sort of compromise. I have been collecting these on my todo list from various past and current discussions, as well as my own experiences.
- As for activating lqt site-wide, or on something like the Saloon bar... yeah, I'm probably not going to do that myself. I'm weary of the HCM that usually results from such changes, so I'll leave it to people who are willing to push it through. -- Nx / talk 09:36, 5 January 2011 (UTC)
- Hi Nx! Fuck you too. Don't be rude if you have to apologise for it one comment later. Quit complaining about my "whining" just because, what, I make comments? I don't actually care if you are "tired" of my comments or attempts to work on the site. No kindly go fuck yourself and your omnipotent ("this superior format") attitude. Your method is incredibly poor and your personal commentary is worse. ħuman 06:51, 6 January 2011 (UTC)
- See it almost keeps me from doing such nefarious activity as putting the answer to Human's/Kel's question somewhere where others can find it. ; ) ~ Lumenos (talk) 09:30, 5 January 2011 (UTC)
This is RationalWiki, not IntelligentWiki. The whole problem is that LQT may be very rational, but to operate it requires a modicum of intelligence, which is not readily available to some of the RW crowd. --82.145.210.97 (talk) 09:42, 5 January 2011 (UTC)
- Well actually that is a problem. The talk pages should be easy to use and intuitive. It should be easy for idiots and non-technical people and especially newbies to read understand and respond to. At the moment it seems confusing to me. Now that may be as a consequence of my lack of intellectual capacity. Having said that the fact that it seems that way now does not mean that it will be that way forever.--BobSpring is sprung! 09:48, 5 January 2011 (UTC)
- If you tell me why you find it confusing, we may be able to find a solution :) -- Nx / talk 10:09, 5 January 2011 (UTC)
- Also, I'm not promoting lqt (and spending a lot of my free time improving and fixing it) because I like playing with new software. I'm doing it for two reasons: edit conflicts and unsigned comments annoy me, and I do want talk pages to be more easy to use and more intuitive. -- Nx / talk 10:15, 5 January 2011 (UTC)
- Basically, the wiki needs to work like a toilet. Open the lid, take a dump, push the button. Anything beyond that and you risk that people start crapping on the carpet. --79.45.99.168 (talk) 10:11, 5 January 2011 (UTC)
- Interesting analogy. With LQT: Open the lid - click reply. Take a dump - type in textbox. Push the button - push the save page button. Without lqt: Open the lid - click edit. Take a dump - more like sift through other people's shit finding the correct place for yours. Push the button - push the save page button. Then pray the toilet doesn't back up. -- Nx / talk 10:15, 5 January 2011 (UTC)
OK. I've not had much to say about this because it seems like a bit of a hot button issue for some but here goes.
- New messages. This confuses the hell out of me so I just ignore it. When I click on it I get a load of liquid threads which I assume are things on my watch list. But given that it's in liquid thread format (see below) I just clear it out periodically.
- Indents on threads. I'm not sure how they work and consequently looking at them makes the page hard to read.
- White spaces. OK this is just aesthetics but the vast amount of white space on the things seems weird.
- The "reply" "parent" "more" buttons. I'm guessing that the "reply" is to reply to the comment in the box which has that reply button and perhaps the "parent" will put a comment at the bottom of the list of messages? But I don't know which makes me a bit reluctant to play about with them. I could make some guesses about the additional options under "more" but they confuse this old boy.
- Show N Replies. I can't figure out what's supposed to be happening when this appears, or why some threads have it and some don't.
OK, some of the answers to some of these may be obvious and they may, in fact, be small issues. It's the difficulty of overcoming a series of small issues that make my brain break down a bit when I look at a Liquid Threads page. As I said above the format should be easy to understand and immediately intuitive and at the moment I don't think it is.--BobSpring is sprung! 10:35, 5 January 2011 (UTC)
- New messages. The issue with new messages is that you'll be watching my talk page (for example) because with the old system, you wanted to do that. With LQT, you no longer want to be watching my talk page, only the threads that you have replied to or started. So for example, if you ask me something on my talk page, and I reply, you'll see it on new messages. This is one of the cool features of LQT, but sadly the transition is a bit bumpy.
- Indents on threads. LQT tries to emulate traditional talk page indentation (to a degree). The difference is that you can't really unindent (well, you can reply to the first comment, that's sort of like an unindent, but see my next sentence), and you can't inject a comment before another user's comment. However, LQT also changes indentation a bit: on traditional talk pages, every new reply gets indented one level more than the previous one, even if it's not a reply to the comment above it, because otherwise you wouldn't be able to easily differentiate comments. You have to forget about that practice a bit, and just click the reply button for the comment you are actually replying to. Your reply then gets indented one level more than the comment you are replying to. So for example:
On a traditional talk page:
- This is comment 1
- This is comment 2. It's a reply to comment 1
- This is comment 3. It's also a reply to comment 1
- This is a reply to comment 3.
- This is comment 5, and it's a reply to comment 1
etc. And comment 5 could've been indented in a different way or injected before comment 4 etc., there's no set rule
On LQT, you instead get this (which makes more sense IMHO):
- This is comment 1
- This is comment 2. It's a reply to comment 1
- This is comment 3. It's also a reply to comment 1
- This is a reply to comment 3.
- This is comment 5, and it's a reply to comment 1
As you can see, each reply to comment 1 is indented one level more than comment 1. Of course you have to shake the old habit and instead of just clicking reply for the bottommost comment, click reply for the comment you are actually replying to. Comment three might go off on a tangent with replies to that, but the discussion continues below, with comment 5. In the future, it will be possible to collapse comment 3 (and its replies) if you're not interested in that part of the discussion. That should make long threads more readable.
- White spaces. I've already reduced whitespace as much as possible compared to the default, I can't really do much more there - there's still the issue of the toolbar (the reply, parent, more buttons in the lower right of every comment), which is too big and obtrusive, but I'm planning to fix that.
- The "reply" "parent" "more" buttons. See the part about indenting for exactly how reply works. Parent takes you to the parent comment (i.e. the comment that this reply is a reply to). "Link to" gives you a link to a specific comment (and its replies, which is called a fragment), e.g. Thread:User talk:Nx/Reality check/reply (5). Fossil record and edit should be self explanatory - in liquid threads, each comment is actually a separate page in the Thread namespace. So you can edit a comment and it has a history. Similarly, Vaporize allows you to delete a comment (actually it also deletes all the replies too), but that's something that won't be used much around here. Finally, drag to new location allows you to reorganize comments (again, dragging a comment also takes all the replies with it), or even split threads. Currently, you can't cancel a drag, so if you click that button the drop targets will remain on the page - they're harmless if you don't start dragging, and you'll be asked for a reason and a confirmation before you can actually move a comment, so you shouldn't be afraid to click it - it's just a bit annoying. And of course you are free to experiment on my lqt test page if you are afraid of breaking things.
- Show N Replies In your preferences, you'll find a new tab, called "Threaded discussion". There you'll find two settings, maximum reply depth to show and maximum number of replies to show. You can change these to really high numbers to always load the entire thread, but the purpose of this feature is to avoid unnecessarily loading long discussions that may not interest you. If you click the Show N replies button, it'll dynamically load more (unfortunately, it still won't load the entire thread, so you'll have to keep clicking show n replies - guess that's another thing to add to my todo). You can read the first few comments, and then if you decide that you want to read the rest, just click Show N replies.
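(To make the threading rules described above easier to follow, here is a minimal sketch in Python. This is purely illustrative: LiquidThreads itself is a MediaWiki extension and this is not its actual code. It models two behaviours Nx describes: each reply is indented one level below the comment it replies to, and replies beyond the "maximum number of replies to show" preference are hidden behind a "Show N replies" placeholder.)

```python
# Illustrative sketch only -- not the real LiquidThreads implementation.
# Models the rendering rules described above: replies indent one level
# below their parent, and excess replies collapse behind a placeholder.

def render(comment, max_depth=2, max_replies=2, depth=0):
    """Render a comment tree as indented lines of text."""
    lines = ["  " * depth + comment["text"]]
    replies = comment.get("replies", [])
    if replies and depth + 1 > max_depth:
        # Subtree is deeper than the preference allows: collapse it whole.
        lines.append("  " * (depth + 1) + "[Show %d replies]" % len(replies))
        return lines
    for reply in replies[:max_replies]:
        lines.extend(render(reply, max_depth, max_replies, depth + 1))
    hidden = len(replies) - min(len(replies), max_replies)
    if hidden:
        lines.append("  " * (depth + 1) + "[Show %d replies]" % hidden)
    return lines

# The example thread from the indentation explanation above.
thread = {
    "text": "This is comment 1",
    "replies": [
        {"text": "This is comment 2. It's a reply to comment 1"},
        {"text": "This is comment 3. It's also a reply to comment 1",
         "replies": [{"text": "This is a reply to comment 3."}]},
        {"text": "This is comment 5, and it's a reply to comment 1"},
    ],
}

for line in render(thread, max_depth=3, max_replies=2):
    print(line)
```

With these limits, comments 2 and 3 (and the reply to 3) are shown at their proper indent levels, while comment 5 collapses behind a "[Show 1 replies]" placeholder; raising max_replies reveals it, just as clicking the Show N replies button does.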
Skip indent
Hi Nx. Thank you very much for taking the time to respond.
- New Messages. You tell me that I will only see stuff that I have started or contributed to. I'm not sure if that is how it's working. One of the first things I see is "Ajax category rename?" I'm sure that I've never contributed to that in my life. Same with other stuff. Or is this part of the transitional difficulties of which you spoke? Secondly, is it supposed to work on anything to which I have contributed or started or only things which are already on liquid threads?
- Indents on threads. OK, I think I've got that. Is it "better" or "different" though? I agree that it's certainly more formalised and logical - but it's also more controlled. And when people are used to the present semi-controlled chaos it takes some getting used to.
- White spaces. OK, if you can make it better it would be better.
- The "reply" "parent" "more" buttons. I can see how these facilities would be useful. But it does rather take me back to the intuitive bit. I don't suppsoe there could be any kind of floating pop-up help over them which tells people what they are for? (Although I see that actually clicking them gives you some idea anyway.)
- Show N Replies . Thanks.
Following your responses, the thing is certainly less daunting, but I'm afraid that it still doesn't seem that intuitive. On the other hand the first time I edited a wiki talk page I couldn't believe how cumbersome the process was - so it may be just a question of what you're used to.--BobSpring is sprung! 12:08, 5 January 2011 (UTC)
- You are getting the new message notification for that thread because you have contributed to my talk page, and because of that it's on your watchlist (if you edit a page, it's added to your watchlist). Now that I've converted it to LQT, you're getting a notification for every new thread and reply there - simply unwatch my page. See RationalWiki:LiquidThreads#Why am I getting "new messages"?. Of course if you want to be notified of new threads and all new replies, you have to watch a talk page - currently it's not possible to watch a page in order to be notified of new threads but ignore a specific thread that doesn't interest you (e.g. you'd probably want this for the saloon bar); this will be implemented in the future.
- I agree that the indentation is going to take getting used to, and I personally prefer linear forums with quotes, but I think lqt is much better than the mess that we have now. And indenting long replies like these is a pain in the ass - goat help you if you also use something special like pre or source or whatever.
- The buttons are missing tooltips, this is a known bug, so it's going to be fixed (and I don't want to fix it myself in order to avoid stepping on the lqt devs' toes), that should help a bit. I think parent should also be changed to "go to parent". But in any case, the very basic function of replying is pretty simple.
- If you have any ideas to make it more intuitive, I'm listening -- Nx / talk 13:05, 5 January 2011 (UTC)
- "(if you edit a page, it's added to your watchlist)" You mean a liquid thread or a regular page? I don't think that is the default behavior for regular pages but you can turn it off at Special:Preferences / Watchlist / Advanced options. ~ Lumenos (talk) 09:38, 6 January 2011 (UTC)
- I don't know if it's the default. Anyway, when you edit a lqt page by clicking [Edit↑] (which in lqt terminology is the discussion header), it's also added to your watchlist. -- Nx / talk 10:43, 6 January 2011 (UTC)
- There is no need to do that to create/edit threads and we should probably lock the pages as you have done on yours. ~ Lumenos (talk) 15:00, 6 January 2011 (UTC)
- I don't think that's necessary. -- Nx / talk 15:09, 6 January 2011 (UTC)
- To be honest, I think we should just implement it Facebook style. Force it onto the SB and in about three days no one will care any more and will happily use it. postate 15:40, 6 January 2011 (UTC)
- I guess we still have the edit buttons for the sections so you could use both at once until the old sections are archived. Trent or the Foundation just have to decide if they are gonna have a vote or just do it. But I did find a bug or two, that Nx said he was going to work on. Not sure how stable it is presently. ~ Lumenos (talk) 16:20, 6 January 2011 (UTC)
- In a sense simply implementing it would "work" as the people who didn't like it or found it too daunting to use would simply not edit the page to complain about it. So by definition it would appear to be a success as "everybody" would be using it. The voices of the people who were frightened away by it would not appear.--BobSpring is sprung! 07:53, 7 January 2011 (UTC)
- Of these two groups, the latter is the one that worries me. I understand it's different and confusing and all the new functions may seem daunting, but I don't see how it can be so daunting that it prevents you from even commenting. -- Nx / talk 08:50, 7 January 2011 (UTC)
- I "agree" with you as far as the complaints I've read from others. Thing is, people have a healthy desire to kick the tires and new (beta) software often has unforeseen problems. (I find that to be the case with many "liberal" ideas; the "conservatives" find it "weird"/unnecessary for superficial reasons although there really are disadvantages that they don't discover because they are not interested in it.) There are currently "bugs" that screw up edit history and loose posts. Nx won't let me get a word in sideways on the LQT article so I put them here instead. ~ Lumenis (talk)
- Bob, Nx keeps telling us that LQT uses the same wiki markup. Well I found that if one doesn't like the new format they could actually reply in the post, to make a oldschool talkpage inside a post, so he is "right" about that. : ) No one has to use the new format on LQT pages, unless we make it a rule/custom. ~ Lumenos (talk) 01:55, 10 January 2011 (UTC)
- I often post like this and edit pages just like LQT suggests we do (ie here I am responding to Bob; now if everyone wants to reply to Nx, as people would usually do in the current format, this post will stick out because it is always at the bottom). It is a great way for an attention whore to get their message noticed. (If new messages were highlighted they would stand out when not at the bottom of the page.) Notice how I put a signature on my last paragraph so that anyone can reply to that paragraph specifically and they don't have to quote it. Nx used to hate that, but maybe it will be okay to do that in the LQT format? ~ Lumenos (talk) 01:55, 10 January 2011 (UTC)
- There is kinda a "quirk" with LQT that there is no way to "reply" to multiple posts. You only have these "reply" buttons that make it look like you are replying to one post, when in fact you may have just felt that was the best place to put a reply to many posts. How do you express that you are replying to multiple posts? Do you reply to the last one, the first one you are replying to, or do you attempt to "undent" by "replying" to the first post in the thread? Most I suppose will choose to reply to the last one or they will do the "attempted undent" like an attention whore. Not much different than regular talkpage, except with a regular talkpage you can undent to the edge, so it doesn't look like you are replying to the first post (only). Maybe we should collaborate on a guideline for this in the unlikely event that anyone is interested. ~ Lumenos (talk) 02:55, 10 January 2011 (UTC)
Just in
- This discussion was moved to Forum:Gabrielle Gifford shooting. 21:18, 9 January 2011 (UTC)
Sidebar templates
- See Template:Portal_portal for more.
I have replaced several of these templates with ones based on
{{navsidebar}}. This was done several days ago, at least, but I have since received a complaint. Hence I am bringing it up here for "discussion". Here is the reasoning in short bullet points for your convenience.
Disadvantages of the previous system
- Raw code is daunting and complex
- Changing colours is a complex task
- Making stylistic changes across the board is a difficult task
Advantages of a master template
- Removes complexity of the css styling and divs
- Simpler to create a new sidebar template from scratch (read the documentation)
- Easy to change the colours with a single parameter, rather than finding each instance and remembering what it does (read the documentation)
- Simple to make overall stylistic changes to each template
- Unifies the look of the templates
So, that's what I've done. I did it unilaterally, sue me. I wanted it done some time before the Rio de Janeiro Olympics. postate 14:56, 9 January 2011 (UTC)
- Also, for the most part the colours are part of the original versions of the template and weren't changed.That's a separate discussion as far as I'm concerned and can be open to anyone with sensible suggestions for what to use. postate 14:58, 9 January 2011 (UTC)
If you had to pick a religion to follow which would it be?
In a local atheist group I used to belong to someone did a poll asking this question and I was surprised that a small majority chose Buddhism. I am curious what our resident heathens would choose. Try not to pick a joke religion, e.g. FSM, or say, "Why would I have to choose a religion?" This is just for fun.
For me I would choose Buddhism or some sort of nature worship. NetharianCubicles are prisons! 22:09, 6 January 2011 (UTC)
- Why were you surprised? That it was not more? Buddhism for me. I've considered looking into Buddhism not for anything more than that its tenets seem to support a peaceful, healthy lifestyle. --Leotardo (talk) 22:15, 6 January 2011 (UTC)
- "Choosing" a religion is not just for fun. Religious practice, at its naked core, boils down to what you do when there is nothing else left to do. Sprocket J Cogswell (talk) 22:41, 6 January 2011 (UTC)
- I think I like your point, but would you explain it more? --Leotardo (talk) 23:02, 6 January 2011 (UTC)
- In my ideal view, the best role of religion is to comfort the afflicted, and to afflict the comfortable. One thing I like about Buddhist contemplative practice: If you need it, you go do it; no one is there to sell it to you. Try to get into a monastic Buddhist practice, and you need to get past the gatekeeper, who will be actively discouraging. If you want to make a Zennie feel at home, just tell them, "I can't help you. I have nothing for you."
- When I had kids of an age to go to elementary school, I hauled them to a nearby Unitarian-Universalist congregation on Sunday mornings. That way I could enjoy singing in the choir without the need to spout any particular creed. YMMV; congregations in that denomination are pretty individual, each with its own character. Sprocket J Cogswell (talk) 00:50, 7 January 2011 (UTC)
- Islam. Because it would really catch people off guard. Blue Talk 23:41, 6 January 2011 (UTC)
- I'm fortunate enough to be of Iranian-Muslim heritage while not looking a thing like what the stereotypical Middle Easterner would look like, so I have the luxury of catching people off guard as you describe. Most people think I'm Irish because of my name. It's pretty entertaining to see the reactions that some people have upon finding my heritage out, especially when the people are of the bigoted bunch. However, it also allows me to see exactly how ignorant/clueless people can be, even when it comes to knowing what country your nation has been fighting in. For example...
- Me: "I'm Iranian." Clueless person: "Oh my gosh! Is your family back in Iran all right?" Me: "Why wouldn't they be?" Clueless person: "Well, America is at war with them!"
- Alas, me not looking like a Middle Easterner doesn't help a bit with passing through airport security, since my father's name is Mohammad. Ah well. ~SuperHamster Talk 00:04, 7 January 2011 (UTC)
- Just carry some worry-beads, and keep mumbling something that sounds like Besmelleh. That should put their minds at ease. Sprocket J Cogswell (talk) 00:50, 7 January 2011 (UTC)
- Given that I am already some sort of Celtic Neopagan, I think I'll stick to this religion if I have to pick. The Spikey Punk I'm punking my punk! 01:50, 7 January 2011 (UTC)
- If I were to abandon the Anglican Church, most likely the Church of Ed Wood. I really want to try using Bela Lugosi's death as a religious holiday--Thanatos (talk) 04:16, 7 January 2011 (UTC)
- I worship rotting and growing wood. Is that a religion? I also really really like bacteria. ħuman 05:04, 7 January 2011 (UTC)
- As far as religions go, Daoism isn't too bad. Röstigraben (talk) 07:39, 7 January 2011 (UTC)
- Not to be confused with the Wall Street cult, Dowism. Howard C. Berkowitz (talk) 18:54, 7 January 2011 (UTC)
- I think the Olympian Gods were kind of fun. Sen (talk) 05:34, 7 January 2011 (UTC)
- WAIT WAIT! You all should know that Lumenosity is 100% Truthism. The first religion that is absolutely certain and based on a larger intentional culture project which is open to collaborative development by YOU. Lumenosity: Micromanaging your life since the dawn of history. ~ Lumenos (talk) 07:27, 7 January 2011 (UTC)
- If I were forced, and all choices being 'equal', then I would follow the neo-Pagan teachings of my late stepfather, Isaac Bonewits. A very laid back philosophy that embraced a lot of the 'Do as you will, if it do no harm.' Speaking of him, I must say I was a bit relieved that when he faced his fight with cancer, he fully used every modern medical science treatment that was available/affordable for him. Now, he augmented it with 'magick' cast by him, mom and supporters, but he never headed off in the direction of the woo some of his peers support. - Ravenhull (talk) 07:53, 7 January 2011 (UTC)
- Was Isaac Bonewits actually your stepfather? ListenerXTalkerX 07:59, 7 January 2011 (UTC)
- Yes, he married my mother a few years back. A nice man, and he made her happy. Unfortunately, I only had the chance to spend any time with him when I went to help out for a week only a few weeks before he was gone. I might not have agreed with his spirituality, but I enjoyed his company. As for his philosophy, I rather enjoyed reading The Pagan Man and some of his other works, and that book still sits on my bookshelf. -Ravenhull (talk) 11:56, 7 January 2011 (UTC)
- Probably an extremely liberal version of Reform Judaism.... You know, the versions where they treat god like a piece of luggage.... Either that or Jedi. SirChuckBPlease Excuse me, I have to go out and hunt giraffes 08:07, 7 January 2011 (UTC)
My contribution to this section (I actually picked a religion to follow) got a little long, so I put it in an essay. ListenerXTalkerX 09:02, 7 January 2011 (UTC)
- For several years when I had to fill out immigration cards in the likes of Saudi Arabia, where it asked for your religion I would write "Hedonist". I can see why people might choose Buddhist although some Buddhist practices are quite demanding and not for me - all that prostrating yourself round a mountain. I know several Quakers and admire their pacifism, quietness and modesty, although Quaker "services" can be a bit boring. Генгисevolving 09:21, 7 January 2011 (UTC)
- A number of religious people claim that atheism is a religion. Can I choose their definition and choose atheism as my religion while holding a personal mental reservation that I don't think it is? If not, can anyone recommend a religion which involves no worship, faith, ceremony, ritual or prohibitions but has lots and lots of sex and drugs and rock and roll? Or has Genghis already nailed that with "hedonist"?--BobSpring is sprung! 10:38, 7 January 2011 (UTC)
- Church of the Fonz? postate 10:39, 7 January 2011 (UTC)
- I agree with Bob, the question is inherently flawed, you are being asked to pick the least worst option. As an alternative try "If you had to be a serious criminal what crime would you commit?" Lily Inspirate me. 10:48, 7 January 2011 (UTC)
- Well, in answer to that question I can only cite The Gospel According to Ali G, where he compares prison terms for stealing $100 in a shop robbery vs a £1 million bank robbery and concluding that it makes far more sense to commit big crimes than small ones. So ruthless bank robber it would have to be, rather than knocking over small stores for the cash in the tills. postate 10:51, 7 January 2011 (UTC)
I am a member and ordained minister of The Church of the Divine Elvis. (Come to think of it, I am the Church of the Divine Elvis.) "Now, turn to page 312 in your hymn books, and join in singing Heartbreak Hotel, as fried peanut butter and banana sandwich communion is served in the Jungle Room." MDB (talk) 12:44, 7 January 2011 (UTC)
- Heartbreak Hotel? In early January? Heathen!!! Everybody knows that a proper Elvis congregation sings Jailhouse Rock and Blue Suede Shoes until the festival of Elvis' divine birth on the 8th of January, before switching to Heartbreak for a period of remorse before Valentine's Day, wherein we begin singing Love Me Tender. Who taught you to worship Elvis? SirChuckBBoom Goes the Dynamite 18:45, 7 January 2011 (UTC)
I've met a few atheist Buddhists - i.e. those who follow the teachings of the Buddha as a philosophy rather than a religion. There are even Christian atheists - i.e. those who follow Jesus's philosophy on life while rejecting all the stuff about God & the afterlife; it's like cafeteria Christianity dragged to its limits.
Anyhoo, if I had to follow a religion for some reason, I guess it would probably be Buddhism, or possibly Shinto if I was feeling particularly Wapanese. WẽãšẽĩõĩďMethinks it is a Weasel 20:11, 7 January 2011 (UTC)
- Wicca or a similar new-age thing. I'd be an irritating and sometimes amusing chap, but at least I'd be mostly harmless. The worst thing I'd probably end doing is boring people with my inane attempts to describe my 50-year-old grab-bag of beliefs as some kind of ancient (and totally not comical) belief system. Only thing funnier than wiccans are vampires. Concernedresident omg!!! ponies!!! 18:50, 8 January 2011 (UTC)
I went trolling the Scientologists many years ago and talked the guy into giving me a six-month trial membership of the International Association of Scientologists for free, without me buying anything. (This is very much against the rules.) IAS membership is what they claim their count of members represents (except when they claim it represents something else). So for six months, I was an honest-to-Ron card-carrying Scientologist. Oh frabjous day! - David Gerard (talk) 19:13, 10 January 2011 (UTC)
Take your pick[edit]
7 really weird religions article. The Church of Ed Wood claims 3000 followers worldwide.--Thanatos (talk) 06:40, 7 January 2011 (UTC)
I weep for the USA[edit]
Seems to me as if US public discourse has gotten testier and more rancorously polarized in recent years, with more finger-pointing and sketchier fact-checking. More bullshit in the media, in other words, dating from about the time Bush fils stole his first election.
It reminds me of how I have seen groups of people act when they find themselves on a losing side, or try to accomplish a demanding task without the requisite skill level. One accessible example may be seen in the argy-bargy of the cart-pulling, can-scavenging helots in A Boy and His Dog. Not at all the same flavor as I saw in the prosperous sixties, when we also saw resistance to poorly chosen war. Sprocket J Cogswell (talk) 21:48, 7 January 2011 (UTC)
- It goes back to the Clinton era. Somehow, during the Reagan/Bush pere era, the right wing became incredibly angry and vocal about it. ħuman 06:22, 8 January 2011 (UTC)
- There was a recent paper that makes a strong argument that the beginning of the polarization was with Newt Gingrich's election in 1978. That Salon link is a good summary of the argument. Ezra Klein at WaPo even charts it --Leotardo (talk) 06:35, 8 January 2011 (UTC)
- I would agree that Gingrich and the 1994 Congressional class brought in a level of rancor you'd generally have to go back to the 1860s Radical Republicans to find. Before that, you might find it in extremists in primaries, but just not in general politics. I spent a good many years in Washington and in politics, and James Carville married to Mary Matalin didn't seem that unreasonable. Also, do remember the increase in hostility in TV journalism, starting with the McLaughlin Group and Point Counter Point, and exponentially escalating with talk radio and cable TV pundits. Howard C. Berkowitz (talk) 07:25, 8 January 2011 (UTC)
- So, do you think it is only a matter of having the equipment and the ability (try not to think of a dog licking his own tasty bits here) to broadcast the "conversation" and a healthy vociferous back-and-forth, or is it a symptom of declining capability on the world stage? As I said before, I am reminded of the kind of sniping I have seen in groups with weak to iffy competence for the task at hand. Sprocket J Cogswell (talk) 14:59, 8 January 2011 (UTC)
- Well, after reading this from WIGO:Blogs, I'm actively depressed. The US is going to tear itself apart, and when it does, they'll say that we should have seen it coming. You know that part in V For Vendetta when Prothero says "The former United States is now so desperate for medical supplies...". Yep. I, for one, welcome our new Chinese overlords. postate 13:23, 10 January 2011 (UTC)
- Meh. Gun ownership is not even on this particular screen, in my opinion. A steady well-trained individual can put a bullet right where they want it, but only out to a few hundred yards or so. Measured in metres, it is a bit fewer. The average dolt who gets hold of a gun has slim chance of raising splinters on a barn door, the broader side thereof, while standing in its shadow.
- Public discourse, on the other hand, now has a global reach, even if coverage is spotty in some jurisdictions. The rabble now requires scant prompting to divide against itself; my point is the following: even the would-be rulers of that rabble are squabbling like a bunch of losers, quite plain to see and hear. One more time, with feeling: our "leaders" and public mouthpieces are squabbling like a bunch of losers. This neither encourages nor amuses me. Sprocket J Cogswell (talk) 18:07, 10 January 2011 (UTC)
- Surely that makes your defensive weapon of gun more dangerous? postate 00:21, 11 January 2011 (UTC)
- The lethal weapon of gun has a vanishing relevance to this discussion. Sprocket J Cogswell (talk) 00:31, 11 January 2011 (UTC)
(ex)MP Jailed[edit]
Occasionally the law does its job - David Chaytor gets 18 months for expense fiddling! Him (talk) 00:37, 8 January 2011 (UTC)
- The disappointing thing is that so many of them have got away with it. –SuspectedReplicant retire me 00:45, 8 January 2011 (UTC)
- Ah, but the wonders of the British 'justice' system mean he'll serve a couple of months at most: All sentences less than four years are automatically halved (only a third off for longer sentences), so he's being sent to prison for 9 months. Most jails release convicts after two-thirds of their sentence, so we're down to 6, and that's before we look at "early release" or release on licence etc. (To be honest this rant should be about the dangerous repeat violent offenders who get this same treatment, not some troughing MP) DeltaStarSenior SysopSpeciationspeed! 00:52, 8 January 2011 (UTC)
- It tends to be a world-wide "let's let off the celebs" attitude - see Lindsay Lohan's jail record for a perfect example. But it's SO hard for these people! "Cor guv'nor - it must be right 'ard for the likes of you and no mistake. Apples and pears, my old man's a dustman etc etc". –SuspectedReplicant retire me 00:57, 8 January 2011 (UTC)
- perhaps he can use Paris Hilton's method of getting out of prison and just cry a bitAMassiveGay (talk) 12:26, 8 January 2011 (UTC)
- He's old enough to develop a case of Ernest Saunders Alzheimers. He'll get all the benefits of alzheimers, such as early release from prison, but he'll be barely home before he'll join Saunders in medical history as one of the very few people to make a complete recovery from an incurable disease. Hooray! Concernedresident omg!!! ponies!!! 17:31, 8 January 2011 (UTC)
- It's political correctness gone sane! - David Gerard (talk) 19:17, 10 January 2011 (UTC)
Dr Zoe D. Katz PhD[edit]
One for the diploma mill article indeed. I have to recommend this week's QI if you can get it. postate 01:41, 9 January 2011 (UTC)
- Re: QI. I get them on utubez after a slight delay. Currently searching for "qi s08e14" and getting nothing. Is this a new season perhaps? Pls orange box me with details... ħuman 02:59, 9 January 2011 (UTC)
- It was literally on either today or yesterday so it might be a little longer. It's in episode 13; Hypnosis, Hallucinations & Hysteria. They go into some RW relevant woo-like stuff. postate 03:02, 9 January 2011 (UTC)
- Episode 15 according to YouTube, but 13 according to iPlayer. postate 03:04, 9 January 2011 (UTC)
- Ha that's awesome. It actually made me want to get a couple of degrees just for fun. £779 for some hands on crystal healing however? Procreate that, for that price I can print my own. Sen (talk) 03:42, 10 January 2011 (UTC)
- Well, that's the entire point about credentials. You can print your own no problem. The power comes from who (usually an institution or established organisation) endorses that credential. We could print degrees in Wingnut Analysis and have them endorsed by the RationalWiki Foundation - indeed, I think we should sell them as part of RW merchandise! It'd be awesome. postate 13:25, 10 January 2011 (UTC)
- Ooh, I'm in! Can I have a PhD in Wingnuttery please? CrundyTalk nerdy to me 13:33, 10 January 2011 (UTC)
- Why certainly! That'll be $455.95 please. postate 13:41, 10 January 2011 (UTC)
- I can see a PhD awarded to anyone who contributes more than, say $100, to the running costs. Jack Hughes (talk) 13:46, 10 January 2011 (UTC)
WIGO RSS feeds?[edit]
Would it be possible to set an RSS feed up for WIGOs? As far as I can tell, I can only subscribe to a feed for all changes made to RW, which I don't particularly want. With that said, I am fairly technically incompetent; if WIGO RSS feeds already exist, then disregard all that, I suck cocks. but please tell me where to find them Webbtje (talk) 18:14, 9 January 2011 (UTC)
- For each page, you can get a RSS feed for it from its "history" page ("fossil record" here). Example feed. It doesn't look very good, though.--ZooGuard (talk) 18:29, 9 January 2011 (UTC)
- All that you can really do is subscribe to changes on a page. As that would include edits to WIGOs, comments out and updates, you'd end up with a mess. The only way would be to generate a separate feed manually - but again, this would leave you without the aforementioned updates and corrections. Just add the WIGO pages to your watch list, it's pretty much the same thing. postate 18:34, 9 January 2011 (UTC)
- Gotcha. Cheers guys. Webbtje (talk) 18:53, 9 January 2011 (UTC)
- You can also get a RSS/Atom feed for your watchlist. It lists the titles only, not the code of the changes.--ZooGuard (talk) 18:56, 9 January 2011 (UTC)
- I had done a yahoo pipe to do this at one point, to reprocess the rss feed, although it didn't turn out great. Maybe I can dig it up, sterile apple 03:00, 10 January 2011 (UTC)
- Don't RSS the saloon bar whatever you do. Loads of diffs in the wrong order, etc. Not good. Bernard Quatermass (talk) 19:26, 10 January 2011 (UTC)
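For anyone who would rather script this than fight with a feed reader, the history-feed trick mentioned above (every MediaWiki page's history is exposed as a feed via index.php) can be reproduced in a few lines. A minimal sketch, assuming a standard MediaWiki install; the base URL and page title here are just examples:

```python
from urllib.parse import urlencode

def history_feed_url(page_title, base="https://rationalwiki.org/w/index.php"):
    """Build the Atom feed URL for a wiki page's edit history.

    MediaWiki serves a page's history as a feed when index.php is
    called with action=history and feed=atom (or feed=rss).
    """
    query = urlencode({"title": page_title, "action": "history", "feed": "atom"})
    return f"{base}?{query}"

# Point a feed reader at this URL to follow changes to one page only.
print(history_feed_url("RationalWiki:Saloon_bar"))
```

As noted in the thread, the feed lists every edit to the page, so a heavily trafficked page like the Saloon Bar will still produce a flood of diff entries.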
The Enemy Within[edit]
I've just been watching an old Star trek episode. It was the one with two Captain Kirks - one kind-hearted, reasonable but indecisive, the other decisive, self-serving and ruthless. It reminded me the way America has gone. Bernard Quatermass (talk) 21:52, 10 January 2011 (UTC)
Loughner[edit]
So the news is saying Loughner isn't saying anything to authorities. This sounds strange to anyone? Usually these guys do things like this to make a political statement, so why refuse to make a more coherent one, using, you know, words? Also from what's been said so far this guy seems like your basic conspiracy theorist. From what I've seen of them on both the left and the right, there's little difference between them. 9/11 Truthers, fluoridation alarmists, all those guys are on the fringes of both the left and right. Likewise, anyone who likes the writings of Marx and Engels, as well as works of some of the most anti-Marx people (Hitler and Rand) doesn't have much of a consistent set of beliefs, other than perhaps embracing controversy. DickTurpis (talk) 16:07, 10 January 2011 (UTC)
- That's why I really don't think this was a true political attack. Usually terrorists (which he would be if this were politically motivated) are only too happy to talk to the authorities. It's a pretty poor terrorist who doesn't explain what change he actually wants. From everything I've read, this guy is just bat shit insane, or as we in the Psych world like to say: He's whicky in the whacky woo (Drew Carey Show reference.... Anyone? Anyone at all? Ok then). It's really hard to accept that this was a politically motivated act when he won't even assign a motive. SirChuckBObama/Biden? 2012 16:45, 10 January 2011 (UTC)
- A bit early to say, either way. In general, an attack on a politician is a political action, but it doesn't have to be. I guess Hinckley would be an example to the contrary. One can be both a whackjob and a person making a less than coherent political statement at the same time. His youtube videos and the like seem to indicate political motives, but not standard left or right wing ones. His silence is confusing, certainly. DickTurpis (talk) 16:51, 10 January 2011 (UTC)
- You know what they say: the best-running machine still needs tools and cranks. Sprocket J Cogswell (talk) 16:59, 10 January 2011 (UTC)
- I think you hit the nail on the head Dick. An attack can be political in nature without being a political action. Hinckley was perfect example of that. SirChuckBFurther bulletins as events warrant 17:08, 10 January 2011 (UTC)
- The political blame game is generally rotten anyway. We'd have to see what evidence can be gathered to see if he was influenced by politics - although you could interpret the silence as "constitutional right", which would fit with right-wing constitutional idolatry, perhaps. But if you're nuts and paranoid to start with, I don't think the likes of O'Reilly and Beck would be the most healthy thing to consume. postate 17:13, 10 January 2011 (UTC)
- I don't get why the media are so hung up over his books. Just because someone owns a book doesn't mean that he automatically agrees with the author, and simultaneously agreeing with Marx and Hitler would be kinda hard anyway. He could've just kept them out of a general interest in history and the individual perspectives of famous ideologues, reference purposes, or simply to show off. He's been captured, and he'll probably talk about his motivation eventually. But that's obviously not good enough for the media, we need to find out how he thinks NOW NOW NOW, and stick one of those convenient ideological labels on him, no matter how sparse the evidence is. This whole "He ranted against the government? He must be a right-wing nut job!";"Someone on Twitter called him a leftist!" style of reporting is beyond ridiculous. Röstigraben (talk) 18:04, 10 January 2011 (UTC)
(undent) May I suggest Richard Hofstadter's 1964 essay, The Paranoid Style in American Politics? The American political system, long before the Web, had great difficulty understanding that observation and correlation do not necessarily mean causality, and past demagogues such as Joe McCarthy simply ignored it. I generally remember a case where a "loyal citizen" reported an individual for having an extensive library of "Commie" books at home. Since he was a Soviet specialist for the Department of State, I should hope he would be well-read in such material.
Yes, the reach has increased, as well as the rate of suggestion. I have a housemate who keeps Fox News on most of the time, and I really would have liked my professor (and Army analyst) specializing in propaganda to hear it. My friend insists, for example, that Beck's individual statements are true (I do question that at times), but he doesn't want to hear anything about the intense emotional bias given to them.
Actually, I was looking for my copy of Mein Kampf, which seems reasonable enough to research a number of Nazi historical articles at CZ. Both Hitler and Marx needed editors, although I have, in amazement, gotten through Mein Kampf, but never failed to fall asleep reading Das Kapital.
Unfortunately, I do hear more talk of violence -- not sure it's more than that -- from Americans who feel disenfranchised. Now, I personally do support the right to keep and bear arms, but more as a tradition, and, I'm afraid, for self-defense as well as sport. Nevertheless, I think it's absolutely ludicrous that small groups with sporting weapons, or even military small arms, are a remote deterrent to government -- they simply do not understand modern warfare. One boasted about how his friends could coordinate their efforts with cell phones, and was utterly shocked when asked what he would do if the military facing them, for example, had an AN/ULQ-30 cell phone jammer -- or simply administratively disabled cell service. Howard C. Berkowitz (talk) 19:25, 10 January 2011 (UTC)
- Lol, cellphones. Aka portable and identifiable energy emissions, protected by a protocol that can be broken with a laptop's worth of hardware and for which the government probably has the keys to already. Jamming is the least of their problems. Sen (talk) 19:44, 10 January 2011 (UTC)
- Yeah. The police already have vans that can do pinpoint accuracy cell phone triangulation, even if you're not making a call. If you were really planning some sort of uprising, you'd probably want to turn your phone off and take the battery out before you started. --JeevesMkII The gentleman's gentleman at the other site 19:55, 10 January 2011 (UTC)
- Modern phones already know exactly where you are without the police having to triangulate your position. Генгисevolving 20:15, 10 January 2011 (UTC)
- P.S. I have at least one Bible, the Book of Common Prayer, works by Thomas Aquinas and Milton's poetry in the house. Does this make me a religious nut? Генгисevolving 20:19, 10 January 2011 (UTC)
- I think the thing about his books was that he listed those as his favorites, not books he happened to possess. I own the Communist Manifesto, but I would hardly consider it a favorite (likewise a couple books by Ann Coulter and Bill O'Reilly, to name a few). Also, about phones, isn't there a difference between a phone knowing where it is and a third party knowing where the phone is? A phone with GPS can read the satellite signals to calculate its position, but it's not sending a signal back to the satellite that others can read and track, is it? DickTurpis (talk) 20:29, 10 January 2011 (UTC)
- Dick, while the position won't be as accurate as GPS from the phone, if the cellphone is to receive calls, it has to constantly update the cell network on where it is, so the network knows to what cell to route incoming calls. Especially in urban areas, several cell towers hear the signal but only one is in active use. If those towers, in addition to basic equipment, have time-of-arrival sensors for the signal, geolocation becomes much more precise. Howard C. Berkowitz (talk) 21:06, 10 January 2011 (UTC)
- Sen, jamming is the greatest of the problems that they easily can understand. Electronic warfare, signals intelligence, and communication deception just are not the first things that come to their minds, and, having worked in them, I sometimes wonder if psychosis is hard reality by comparison. It's ironic that when I've said that if I were, in fact, going to act against a tyrannical government, while I'm a decent rifle shot, my weapon of choice would be a computer -- and, very selectively, active RF emitters. Howard C. Berkowitz (talk) 22:09, 10 January 2011 (UTC)
- Guys (except Howard), I said "energy emission". GPS signals and triangulation are irrelevant, this thing is essentially glowing like a light in the dark (a light powerful enough to be seen by the mobile phone tower several km away) and what glows, you can lock a missile on it. It's not even a new technology, I think Clinton once fired a couple of missiles at Bin Laden (or was it drug lords in S.America? Dammit America, stop firing missiles everywhere) talking at his phone and missed just because of the missile travel time. Sen (talk) 22:56, 10 January 2011 (UTC)
- Go to mediamatters to watch Beck on this issue. So far he has claimed he does not use a violent rhetoric, Beck has no agenda, the guy read the Communist Manifesto and Mein Kampf and is what Beck has been warning people about. He also goes on about how Palin and him love peace and they want the truth. If I ever get a time machine, I'm going back in time to punch Beck in the face after a few key points in tonight's show.--Thanatos (talk) 23:14, 10 January 2011 (UTC)
(undent) LOL...thanks, Sen, I needed that "except Howard", since I'm under fire by a UFOlogist at CZ who is demanding appeals because I won't recognize her expertise as a general journalist as equal to the radar knowledge of a practicing engineer. I also, believe it or not, will not accept that unspecified "government reports" provided by a newspaper are equivalent to a technical report on a sighting. Unfortunately, I have not been able to get across the idea that weather radar operates differently than air search radar, not getting into anything semi-exotic such as antenna polarization, but just trying to explain that one wants to detect air movement and the other wants to treat it as interference.
My impression is that targeting of bin Laden was not on a classic cell phone, there being few of them in Afghanistan or Sudan, but on a Thuraya (probably) handheld satellite phone. Caveat: some Thuraya models will first try to connect to a cell system if they can detect one, and only use the more expensive satellite if they can't. Howard C. Berkowitz (talk) 23:38, 10 January 2011 (UTC)
- Mobile phones do send location information back to the system, which is how you get special offers at a local supermarket or the nearest ATM info. Also parents can track where their kids are through their mobile phones. Генгисevolving 09:53, 11 January 2011 (UTC)
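The time-of-arrival point above is essentially multilateration: differences in when the same signal reaches several synchronized towers constrain where the transmitter can be. A toy 2D sketch of the idea, with made-up tower coordinates, an idealized speed-of-light channel, and a brute-force grid search standing in for a real hyperbolic solver:

```python
import math

SPEED = 3e8  # idealized signal speed, m/s

def toa(tower, point):
    """Time of arrival at a tower for a signal emitted at `point`."""
    return math.dist(tower, point) / SPEED

def locate(towers, arrival_times, step=50.0, extent=5000.0):
    """Crude TDOA fix: grid-search the point whose predicted time
    *differences* (relative to tower 0) best match the observed ones.
    Absolute emission time is unknown, so only differences are usable."""
    observed = [t - arrival_times[0] for t in arrival_times]
    best, best_err = None, float("inf")
    steps = int(extent / step)
    for i in range(steps + 1):          # scan a square grid of candidates
        for j in range(steps + 1):
            p = (i * step, j * step)
            predicted = [toa(tw, p) - toa(towers[0], p) for tw in towers]
            err = sum((o - q) ** 2 for o, q in zip(observed, predicted))
            if err < best_err:
                best, best_err = p, err
    return best

towers = [(0.0, 0.0), (4000.0, 0.0), (0.0, 4000.0)]
phone = (1500.0, 2500.0)
times = [toa(tw, phone) for tw in towers]
print(locate(towers, times))  # recovers the phone's position
```

Real systems solve the hyperbolic equations directly and must cope with clock error and multipath, but the sketch shows why three towers with time-of-arrival sensors are enough for a 2D fix.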
- The last name "Loughner" immediately brings to mind the pronunciation of "loner", although it may be "lockner" or "Loffner". Anyone know the correct pronunciation? FreeThought (talk) 11:55, 11 January 2011 (UTC)
Google Scribe[edit]
Sure this has been brought up before, but this is fracking hilarious [1]. Sounds like Palin at an interview Blue Talk 04:54, 11 January 2011 (UTC)
Cracked's 6 Crackpot Conspiracy Theories (That Actually Happened)[edit]
kablang... Discuss? - VezzyRattlehead (talk) 14:58, 11 January 2011 (UTC)
- I enjoy the thought of a website titled "Stupid Shit Michele Bachman said." Majintahu (talk) 00:13, 12 January 2011 (UTC)
- I have a 50% higher risk of getting oesophageal cancer than most UKians because the UK government decided to test chemical weapons on Norfolk in the 60s. Kinda hard to separate the cadmium cases from the genetic deformities though :) CrundyTalk nerdy to me 11:21, 12 January 2011 (UTC)
Mongolian Crystal Skulls[edit]
Que? Sen (talk) 00:03, 12 January 2011 (UTC)
- Well, someone has been watching a bit too much Indiana Jones. postate 12:47, 12 January 2011 (UTC)
- The crazy just oozes from that group. Note the mandatory talk of energy. CrundyTalk nerdy to me 15:09, 12 January 2011 (UTC)
Tibia ME[edit]
Anyone here play Tibia? I keep getting buttraped on a (supposedly easy) quest and need some friends to march forth with. I'm Crundy on world 14 if so or if you fancy signing up :) CrundyTalk nerdy to me 13:39, 12 January 2011 (UTC)
Touching kids[edit]
No, not that way, but as part of music lessons. This is interesting to me as someone married to a professional musician, specifically a singer. Because of the muscle control needed to sing properly (and by which I mean Met Opera, not X-Factor) - and sometimes just to point out that you even have those muscles - you can often find singing teachers prodding their students quite thoroughly. So avoiding touching entirely is potentially counter-productive to some lessons - indeed, I've taught guitar to some pretty cack-handed first timers (all consenting adults at the time, of course) and had to physically prise their fingers into the right positions to get them to work as no amount of pointing and demonstrating actually did the trick. Otherwise we may as well just avoid the concept of teachers entirely and stick all kids in isolation with a couple of self-tuition DVDs! Actually, raising children in high-tech control pods... never mind. And as much as I think Gove has had some odd educational positions as a Tory, I can't fault his reasoning that it's stuff like this that plays into the culture of fear and the assumption that someone who works with children is automatically a paedophile. postate 22:34, 7 January 2011 (UTC)
- Married to a music teacher here, specifically violin. Mostly one-on-one lessons, often with a parent or other adult minder in the next room, with a clear view of the lesson. Molding the shapes of both hands is flippin essential in the early stages. Touchless instruction of string instruments? I don't think so.
- She has run evening groups at a local kid-music school. I helped, tuning the kids up beforehand, and flailing away with my own bow as a role model and back-bench wiseacre during the sessions. I had to get my criminal/sexual background (n.b. lack thereof) vetted by the state, and a certificate, before I could participate. That much of it makes sense, I suppose. Sprocket J Cogswell (talk) 02:27, 8 January 2011 (UTC)
- Ironic. Since I'm a pedophobe, I can't speak of experience with guiding children--with toddlers, I emulate Brave Sir Robin. I do, however, occasionally teach really basic introduction to computers at a senior adult center, and also had to get a criminal/sexual check. While it's pretty automatic for us, it actually can be a challenge getting someone to use a mouse for the first time -- and guiding a hand often is useful. Howard C. Berkowitz (talk) 07:21, 8 January 2011 (UTC)
- Interesting. I've often wondered about that with foreign language teachers in non-English nations. I can't speak for the whole world but if somebody is a native speaker in Spain they have a good chance of being employed by somebody. (What kind of money you can ask for is a different thing.) But I've never heard of anyone being asked for a background check before being left with kids. And there are some pretty weird people in this profession. Apart from the students passing through on the gap year, many of the "lifers" are really strange.--BobSpring is sprung! 12:36, 8 January 2011 (UTC)
- Late reply, I know, Bob, but I've given this some thought...
- TEFL (or teaching ESL, English as a Second Language, as we call it in the States, since polyglottism can be a foreign concept here; folks tend to think that one either speaks English or doesn't, or is bilingual) may be done at a public distance without any need for touch. I suppose there might be occasional need to demonstrate the difference between an anglo handshake and a heartfelt abrazo, but isn't that a stretch?
- I can easily believe that the lifers are a special breed. Having no doubt observed and catalogued their constellations of specialnesses, you probably have an inkling of how that comes about. Sprocket J Cogswell (talk) 14:50, 9 January 2011 (UTC)
- I don't find toddlers all that threatening. Kids of about eight or nine are a different story, m'kay? Way too easy for them to cast you in the role of the retarded adult. I think that is what made me a pedophobe until I fell in with a companion who presented me with some rug rats of our own. Brave Sir Robin's option was not then available.
- About that same time I took some summer jobs teaching archery and marksmanship at a Boy Scout camp. There I discovered that eleven- and twelve-year old boys can be kindly disposed if you meet them in their own space. There again, the instruction couldn't all be done with demonstration and bellowing. Sometimes a quiet word and a steadying hand on the shoulder were the ticket to progress. Sprocket J Cogswell (talk) 15:13, 8 January 2011 (UTC)
- If only we could have such a pleasant conversation on race and genetics. ~ Lumenos (talk) 12:49, 9 January 2011 (UTC)
- What kind of pleasant conversation were you expecting when you start by proposing the creation of a master race? WëäŝëïöïďMethinks it is a Weasel 13:38, 9 January 2011 (UTC)
- I put forth the position that supporting technocratically brute forcing our "genes" to "perfection" is bound to be more effective/faster than voluntary trans-generational selective breeding and you totally ignored it. )': Sen (talk) 23:15, 9 January 2011 (UTC)
- Well I wasn't but I thought usually you get an even less pleasant conversation when making some of the above comments. One would think that if that could be discussed (or ignored) without passion, so could a thread on "eugenics". And it wasn't master "race" it was "races". Multicultural ones, at that. ~ Lumenos (talk) 20:33, 13 January 2011 (UTC)
It's like political correctness gone mad!! I don't think teachers should really be "banned" from doing anything. If it gets their job done they should be able to do it. Pegasus (talk) 00:48, 10 January 2011 (UTC)
- It is not political correctness gone mad. From the article "A union training video says touching pupils could expose music tutors, who often teach in one-to-one sessions, to charges of inappropriate behaviour." - The union, quite correctly, warns its members that physical contact, however innocent, can be misconstrued and end with career wrecking allegations. No one is banning anything. Jack Hughes (talk) 14:06, 10 January 2011 (UTC)
- I don't think anyone who writes the words "political correctness gone mad" can do so with a straight face... postate 17:16, 10 January 2011 (UTC)
- I think writing anything at all with one's face is quite a challenge, straight or otherwise. ωεαşεζøίɗMethinks it is a Weasel 00:45, 11 January 2011 (UTC)
- Yes sorry. I should have put a smiley face on there or something. Perhaps some people might say it's "political correctness" but we know it isn't really. Pegasus (talk) 17:05, 11 January 2011 (UTC)
Has Sarah Palin been wronged?[edit]
Possible rival Tim Pawlenty smells blood and criticizes Palin for her crosshairs. The NY Times reported […]. Even though there is no evident connection with Loughner, Palin's brand continues to plummet. Is this fair? --Leotardo (talk) 17:00, 11 January 2011 (UTC)
- (EC)Well, Palin is one of those politicians who thinks that it's perfectly okay to use military terminology and analogy in their rhetoric, and of course, all's fair in love and war. So from that point of view, yes, it's perfectly fair. Whether it's right or not, that's a different matter. More seriously this is, of course, politics, or more specifically US politics, where the name of the game is to try and control the Story (the image and accepted history as seen in the moderate public's mind) of each candidate and, as long as you don't out-and-out lie, most things would be considered fair, after all it can always be argued that it is each voter's responsibility to check the facts of what they believe about each candidate before they vote. As for specifically Palin, and the fact that her brand is plummeting, that was inevitable. She gained visibility as much through notoriety as through popularity and that's the kind of visibility that fades very quickly (think of her as a Big Brother contestant after that series of Big Brother has ended and you will see what I mean). Her brand was already plummeting before Arizona happened, the second season of the Alaska show that she was doing was cancelled and the viewing figures that she was getting for the first series I believe averaged out at less than 3.5 million, much lower than the axing threshold that most American Sci-Fi shows face (US sci-fi shows are considered a niche market and aren't expected to get the viewing figures that mainstream shows enjoy). 'Even though there is no evident connection with Loughner' , that one is tricky. Loughner is refusing to say what his particular trigger was. 
We can assume, from what information is given, that he held a grudge against Giffords prior to the posting of the poster, but it is interesting that he doesn't appear to have acted on that grudge until now, and that does suggest, or more than suggest, that whilst his motive was based on anti-establishment views that were then focused specifically on Giffords because of a perceived slight, that wasn't enough to set him off, it took something additional to set him off. Whether that additional factor was an additional build-up of anger or hate that was fuelled by anti-Democratic, anti-liberal, anti-establishment or anti-them rhetoric, or whether that trigger was a continually deteriorating state of mental health, or whether it was a little from column A and a little from column B, is impossible to know until he starts co-operating with the investigation.-- Spirit of the Cherry Blossom 17:39, 11 January 2011 (UTC)
- I think it's unfair in one sense, in that she's being made a scapegoat for the entire right wing and their rhetoric which she just inherited, but didn't invent. The US right has been describing their opponents as traitors, destructive, corrupt, etc. etc. etc. for the last two decades as a matter of deliberate policy. It probably won't stop if you only blame Palin. Actually, it probably won't stop at all because it works, and they don't give a solitary shit about the consequences. --JeevesMkII The gentleman's gentleman at the other site 18:15, 11 January 2011 (UTC)
- While she doesn't have anything to do with the shooting, she set herself up as the most prominent and visible of the right-wingers engaging in this sort of propaganda. She profited disproportionally from it (second maybe only to Glenn Beck), and now she'll have to bear much of the backlash. Apart from that, it's high time that even her ardent fans begin to realize what an utterly thoughtless, incompetent, irresponsible and narcissistic fraud she is. If they see that she doesn't have anything to offer apart from her cookie-cutter right wing rhetoric, an asset that is now at least somewhat discredited, they'll hopefully turn away from her and leave her in obscurity. Which is all she deserves, so yes, I don't see anything unfair in her fate. And keep in mind that even if this is the end of her status as a national celebrity, she'll still have raked in upwards of $10 million during the last two years. Röstigraben (talk) 19:07, 11 January 2011 (UTC)
- There is no fairness in politics. Palin exploited a tide of sentiment and if it ebbs away then so be it. She came from nothing during the last election and like the reality show analogy she should probably fade away again. Let's look at what her contribution to the political landscape has been - she has a degree of physical attractiveness (or a lot when compared to the general body politic) and she's a maverick (and so is Loughner). Apart from a load of tea-party rhetoric I don't think she has made any concrete political suggestions (but being a Brit I may have missed them). Lily Inspirate me. 15:55, 12 January 2011 (UTC)
Eeny... meeney... miney...[edit]
Ok, fellow RWians, I'm turning to you for advice. I have a choice between seeing Imogen Heap or Rammstein. Sadly not both, as they're on successive days & one in Cape Town & one in Jo'burg. I like both - which should I go and watch?
--Ψ GremlinPraat! 12:22, 12 January 2011 (UTC)
- Yes. --Idiot numbre 188 (talk) 12:41, 12 January 2011 (UTC)
- I'm going to say Rammstein on the basis of the fact that my brother-in-law is stationed at Rammstein AFB, as I have no other knowledge on which to base the decision. MDB (talk) 13:03, 12 January 2011 (UTC)
- Personally I'd risk the flamethrower to the face over the trendy singer-songwriter type, there's just no competition there. postate 15:54, 12 January 2011 (UTC)
- I saw Rammstein last summer in Québec City (VIP places!). Honestly, a great show, even if I was not particularly a fan. Great music, a lot of special effects; very impressive. Alain (talk) 16:29, 12 January 2011 (UTC)
WND polls[edit]
I enjoy going to WorldNetDaily, putting on my crazy pants and trying to guess the number one answer of the loonies. The other day, they posed a question asking who else is to blame for Jared Loughner. For the first time ever, the number one answer is the most sensible. Obama is still the third most responsible, but this is still a big deal for me. How did they manage to get that right? Occasionaluse (talk) 15:48, 12 January 2011 (UTC)
- At least punk rock isn't to blame. That answer being number one would make me want to go out and kill people after I listened to The Exploited's song "Fuck a mod." AnarchoGoon Swatting Assflys is how I earn my living 15:53, 12 January 2011 (UTC)
- What was Tucson law enforcement supposed to do? I haven't read that Loughner's craziness was directly threatening before, and it's not against the law to be crazy in a way that simply makes people uncomfortable. --Leotardo (talk) 15:55, 12 January 2011 (UTC)
- Wow, there's some serious crazy amongst the various options. Pls explain to dis furriner what the "you bring a knife, we'll bring a gun" jab about Obama is. "American educational system and Bill Ayers' curriculum" WTF? Was Ayers tutoring the guy? I like how only "left-wing" blogs and media are listed. Still, at least WND is right up there with the rest of the inhuman scum in trying to squeeze political gain from a tragedy. Again. --Ψ Gremlin講話 16:10, 12 January 2011 (UTC)
- The "you bring a knife, we'll bring a gun" line is a quote from Obama, where he was quoting the film The Untouchables, describing "the Chicago Way".
- As far as the American educational system goes, that has something to do with the fact the Bill Ayers, one of the great boogiemen of the right, apparently was vaguely connected with something used at Loughner's school. MDB (talk) 16:20, 12 January 2011 (UTC)
- It's great how it's out of the question that all the violent right-wing rhetoric could possibly have anything to do with it, but that one statement by Obama is right up there. DickTurpis (talk) 16:27, 12 January 2011 (UTC)
- It's WND. If they believed in global warming, they'd probably blame it on the fact Obama exhales warm air. MDB (talk) 17:21, 12 January 2011 (UTC)
- I often wonder why WND hides the poll results before you vote. I mean, it's not like you can't predict the top answer just by saying to yourself "Hmm, which one of these answers represents the craziest right wing fundamentalist view of reality?" Might make an interesting contest to see who could pick out the top results on each poll without reading them. But then, anyone here could do that with near 100% accuracy EternalCritic (talk) 04:22, 13 January 2011 (UTC)
- Crazy options, but at least they're roughly split 50/50 in a left/right kind of way. Slightly more honest than most ganz rechts news organisations manage.-- Spirit of the Cherry Blossom 16:51, 13 January 2011 (UTC)
Jared Loughner vs. Nidal Hassan[edit]
What is the difference between Jared Loughner and wp:Nidal Hassan? Both are mentally disturbed loners possibly influenced to kill by extremist rhetoric that appeals to crazy people. Why is one just a lunatic for whom it would be irresponsible to assume a political motive, while the other is an Islamist? --Leotardo (talk) 16:03, 12 January 2011 (UTC)
- Mostly because Hassan gave a motive. Loughner, as of this writing, is still refusing to talk. SirChuckBI brake for Schukky 17:38, 12 January 2011 (UTC)
- Loughner has given plenty of motive, it's just incoherent to us. --Leotardo (talk) 17:44, 12 January 2011 (UTC)
- Ranting about lucid dreaming, illiteracy and schools being unconstitutional provides no motive for shooting 20 people. Hassan voiced clear motive that was religious-based. That's why he's considered an Islamist. Had Loughner been screaming Bible quotes as he shot away we'd be labeling him a Christian extremist. He didn't. In fact, he claimed to be an atheist. δij 02:01, 14 January 2011 (UTC)
Telemarketers[edit]
I had some Indian (Elizabeth) phone me up yesterday saying that they had received a report that there was a problem with my hard-disk. She asked me to run a CMD box and type something in - E for elephant, V for victor, ... after wasting 5 minutes pointing out that E is echo and then getting confused with R for romeo, er "I thought you said alpha romeo", "No no not alpha, R is for romeo". So the command was always misspelled and wouldn't run. Eventually I got passed on to the Technical Manager (Rebecca), "So what are you trying to sell me today?", "Are you so smart that you think we are trying to sell you something?" "Yes." "Please put the phone down you are wasting our time." Lily Inspirate me. 16:16, 12 January 2011 (UTC)
- I'm not following any of this. You feeling okay? DickTurpis (talk) 16:23, 12 January 2011 (UTC)
- Fine, I just enjoyed wasting their time. Lily Inspirate me. 16:30, 12 January 2011 (UTC)
- I find it insulting when given an English name; not by the operators, but by the companies who make them do that. --Leotardo (talk) 16:44, 12 January 2011 (UTC)
- Since I started working from home I've been getting Indian scam calls every other day. I had one today, someone asking me if I've been having problems with my Sky TV box. "Well, the only problem I have with my Sky TV box is that it doesn't exist" "*click*". Actually that's a good point, they're so rude these days they don't even bother signing off. As soon as they realise you're not going to fall for it or that you're not a viable target they just hang up. Cunts. If you're going to bug me while I'm working at least have the common decency to say "thank you, goodbye". CrundyTalk nerdy to me 17:06, 12 January 2011 (UTC)
- A friend of mine from an old job who recently moved to Glasgow to be with her fiancé told me she had an interview for a telesales job. I immediately disowned her. SJ Debaser 17:17, 12 January 2011 (UTC)
- I used to get regular calls from the "we're calling from your credit card company" scammers. It amused me to ask them either "which bank issued my card?" or "which type of card do I have?" (And good luck to them guessing the bank -- it's a tiny credit union.) MDB (talk) 17:20, 12 January 2011 (UTC)
- I get emails from HSBC asking me to confirm account details. They'd be slightly more believable if I was with that bank. MDB, wasn't it you that was saying ages ago your Facebook friend who you hadn't spoken to you in years started messaging you asking for money? SJ Debaser 18:03, 12 January 2011 (UTC)
- Yep, that was me. It was the "I'm on vacation in London and got mugged at gunpoint" trick. MDB (talk) 18:12, 12 January 2011 (UTC)
- That's ridiculous. I live in London and have only been robbed at gunpoint a few times, and only shot once. SJ Debaser 18:28, 12 January 2011 (UTC)
- Afterwards, I did think.... wait a minute, isn't gun crime incredibly rare in England? MDB (talk) 18:31, 12 January 2011 (UTC)
- I always laugh when I get 'about your credit card' calls due to the fact that I haven't had a credit card in well over a decade. (And I don't count debit cards in this, and they aren't calling about those anyway...) - Ravenhull (talk) 10:43, 13 January 2011 (UTC)
- When I was in college I worked for a reputable public opinion polling company. Ugh, it was awful, and you really, really understand first hand how uninformed people are with their opinions. I used to enjoy the 'fucking with me' things, like people acting like they are having sex while they gave me their opinions about the Kansas City Star or some Congressional candidate. --Leotardo (talk) 17:23, 12 January 2011 (UTC)
- Actually, what is pissing me right off recently is that I've been getting random phone calls and when you answer them you hear nothing for a few seconds and then a click and someone goes "Hello? Hello?" in an Indian accent, of course, you immediately assume "telemarketing" and hang up... except no, it's HSBC calling me to tell me my account is heavily overdrawn and I need to transfer them some cash pronto. No problem as I have £150 always spare to shuttle between accounts every few weeks to keep the overdraft happy - except could these morons please use something other than the telemarketing style random call and connect thing and actually introduce themselves as a bank first! Fucksake. </rant> postate 19:38, 12 January 2011 (UTC)
- Spoiled person. Mine sends me a letter. As in a physical, dumb matter, made out of tree corpses, easily lost, second class, letter. Funny how they have no problem sending me SMS messages every Friday telling me the account balance, but can't notify of overdrafts the same way. Sen (talk) 22:53, 12 January 2011 (UTC)
Them darn Muslims[edit]
Here's a nice list of ancient Islamic scientists. Lily Inspirate me. 16:50, 12 January 2011 (UTC)
- It's too bad there aren't any modern ones who have had an impact. --Leotardo (talk) 16:59, 12 January 2011 (UTC)
- I very much doubt that that is true. Visit a science/engineering research department in any major international university & you'll find dozens of students from Arabic and Islamic countries. WëäŝëïöïďMethinks it is a Weasel 19:09, 12 January 2011 (UTC)
- Erm, not quite. That's a list of Arabic scientists who happened to be muslims because of where they lived. Great work by them though. It's a bit similar to a certain person's claim that Newton's work was based on christianity, when he was simply a scientist who happened to be a christian (due to the time in which he lived). DeltaStarSenior SysopSpeciationspeed! 18:18, 12 January 2011 (UTC)
- Not really, since that article isn't particularly attributing these scholars' work to religious inspiration, though some seem to be related to practicalities of Islam such as finding the correct angle to face Mecca & calculating inheritance according to Islamic lore. It's specifically a list of Arabic scientists rather than Muslim ones, although as you say they would have been Muslim because of their culture. & Even if they may have been inspired by religious fervour, that neither validates nor invalidates their achievements. ωεαşεζøίɗMethinks it is a Weasel 19:09, 12 January 2011 (UTC)
- You talking to me? </deniro> Your somewhat cavalier approach to linear posting has left me wondering to whom you are responding. DeltaStarSenior SysopSpeciationspeed! 19:26, 12 January 2011 (UTC)
- Well that's where LQT could solve the problem... just saying. Anyway, it's fairly uncontroversial that scholars in the Islamic Golden Age were highly successful, having developed the scientific method, empirical and experimental rigour, and many forms of communication and publishing years before Francis Bacon. What is slightly more controversial is what role the religious fundamentalisation of Islam played in the downfall of science and technology in the period - although this is mostly "controversial" because it says something bad about religion, there are plenty of signs that point to dogma and doctrine overruling science and reason towards the end of the 12th-13th centuries. postate 19:30, 12 January 2011 (UTC)
- The article says Arabic scientists--and a few were in Persia it seems. Nor are we sure that they were Islamic.Civic Cat (talk) 19:34, 13 January 2011 (UTC)
- The Wikipedia article on Al-Hassan ibn al-Haitham says he was either Arab or Persian, and a Shiite.Civic Cat (talk) 19:38, 13 January 2011 (UTC)
- The Wikipedia article on Omar Khayyaam says he was Persian and not particularly religious.
"Robertson (1914) believes that Khayyám was not devout and had no sympathy for popular religion,[26] but the verse: "Enjoy wine and women and don't be afraid, God has compassion," suggests that he might not have been an atheist. He further believes that.[26]. He then wrote that Khayyám "performed pilgrimages not from piety but from fear" of his contemporaries who divined his unbelief."Civic Cat (talk) 19:43, 13 January 2011 (UTC)
- Still another Persian, Al-Razi, seemed to have had little use for religion.Civic Cat (talk) 19:46, 13 January 2011 (UTC)
- This Persian---Muḥammad ibn Mūsā al-Khwārizmī--might have been a Zoroastrian. Here's a link to the Hindu Arabic numeral system. The Indians did it first. :-D Civic Cat (talk) 19:59, 13 January 2011 (UTC)
What article should this be stuck in?[edit]
CP/McCarthyism to a tee, methinks. Any ideas? Should this be in WIGO CP? Is this desperately unfunny? Should you disregard this on the basis of my sucking cocks? Webbtje (talk) 12:30, 13 January 2011 (UTC)
- Nowhere really. Maybe Conservapedia:RobSmith? Rob warming up before editing the Obama article? And no, we will not disregard this because you suck coks. We will disregard this because you are a 'orrible little man, in need of a good spanking and being sent to bed with no supper. Or something. --Ψ GremlinSnakk! 12:35, 13 January 2011 (UTC)
Anagrams[edit]
RationalWiki has no article on anagrams. Perhaps it should, if only to document this one:
-
(and it's doubly-true: the atomic number totals equate also) ONE / TALK 13:23, 13 January 2011 (UTC)
- Very impressive. What is aluminum? Bondurant (talk) 15:20, 13 January 2011 (UTC)
- It is a reduction of bauxite, mostly found in America. In other parts of the English-speaking world those poor sods must make do with aluminium, a poor substitute, and easily fatigued. Sprocket J Cogswell (talk) 15:39, 13 January 2011 (UTC)
- I'm crap at anagrams, although I did come up with a good one once when a trainee teacher called Luke Sadowski was on the news for getting arrested in a police child porn sting:
- Teacher, Luke Sadowski = weakhearted sick soul
- But then someone trumped me with:
- Teacher, Luke Sadowski = real weak, touches kids
- CrundyTalk nerdy to me 16:46, 13 January 2011 (UTC)
- I've known my wife for almost exactly six years now and we still argue about pronouncing Aluminium. But then I'm a Brit and she's an Oregonian. Darkmind1970 (talk) 16:55, 13 January 2011 (UTC)
- Wasn't it that the guy who first formally studied it called it aluminum, but IUPAC or whatever the relevant authority was wanted it to fit in with the -ium trend, resulting in some transatlantic side-taking? 86.165.18.160 (talk) 18:14, 13 January 2011 (UTC)
- Yes. That's exactly what it is. And all you Europeans failing to respect our obvious superiority on the subject. On that note, a flashlight is never, ever, a "torch". (one of my neighbors when I was younger got married to an Aussie, and we used to pick on each other for such things.. it was especially funny, as I do mostly understand weird British idioms, but no one else seems to. So I was the only one able to provide him with a "torch" when they lost power) Quaruplague - You can't explain that! 19:23, 13 January 2011 (UTC)
- I think it's Michael McIntyre who had some stand up on American terminology - namely about how it was always weirdly descriptive and functional; it's not a pavement, it's a "side" "walk" as if it's too complicated to know that a pavement is where you "walk" and it's to the "side" of the road... and so on. postate 19:56, 13 January 2011 (UTC)
- Yeah! like driveways. And parkways. Damnit! And on that note.. Conversation at University: Other guy: "If it's called 'Fish and Chips' why does it come with fries?" Me: "Well, it's a British dish?" Him: "And?" Me: "They call french fries 'chips'" Him: "Then WTF do they call chips?" Me: "Crisps" Him: "That's fuckin' retarded" So take that, Brits. Quaruplague - You can't explain that! 20:12, 13 January 2011 (UTC)
- I'd be most annoyed if my local chippy gave me fries with my fish and chip supper AMassiveGay (talk) 21:10, 13 January 2011 (UTC)
- I understand that the real reason Britain resisted merging its currency with that of the EU is that people wanted to continue, when necessary, to spend a penny, rather than Euronate. Howard C. Berkowitz (talk) 22:30, 13 January 2011 (UTC)
Hah[edit]
I found a funny. ĴαʊΆʃÇä₰ is writing a comment 18:02, 13 January 2011 (UTC)
- Cyanide and Happiness? Welcome to the internet, comrade! - VezzyRattlehead (talk) 18:08, 13 January 2011 (UTC)
- Been there. ~ Lumenos (talk) 05:49, 14 January 2011 (UTC)
Heh, heh, heh[edit]
This. Not going to get anywhere mind, but I wonder how long Beck would last if you paid for a full page ad say, in the NYTimes, quoting Beck's worst moments and excesses.-- Spirit of the Cherry Blossom 21:55, 13 January 2011 (UTC)
Watson[edit]
Pretty amazing fellow. Imagine what else it could do. Like write my English essay for me in about 5 seconds. ~SuperHamster Talk 06:14, 14 January 2011 (UTC)
On extremist rhetoric in general[edit]
It looks as if Loughner was "just" a whackjob not especially affiliated with the right or the left. Perhaps he was set off by extremist rhetoric, but considering he was a seriously deranged person, he might just as well have been set off because Starbucks fucked up his order one day. We may never know.
However, there have been other cases where there have been individuals quite clearly inflamed by extremist rhetoric who took, or planned to take, actions of extreme violence.
- The Knoxville Unitarian Universalist Church shooting, where the gunman dreamed of killing the entire Congress and everyone in Bernard Goldberg's 100 People Who Are Screwing Up America, but since he couldn't get to them, decided to shoot up a liberal church.
- The attempt to attack the Tides Foundation and the ACLU, the Tides Foundation being a frequent target of Glenn Beck.
- A few shootouts with police by people who were convinced their guns were going to be taken away under Obama.
- The murder of George Tiller, a frequent target of Bill O'Reilly.
Now, you could argue that all of the criminals in those cases were deranged to some degree or another. But it seems apparent that this type of rhetoric is inflaming people. Yes, ultimately, people are responsible for their own actions. But if I repeatedly tell you "Bob is plotting against you. Bob wants to hurt you. Bob is coming for your family" and you eventually go out and shoot Bob, don't I bear some responsibility? MDB (talk) 17:26, 11 January 2011 (UTC)
- Responsibility is difficult, if not impossible, to prove in that sort of case. Undoubtedly thousands of inflammatory remarks are made every day but not all are acted upon. Therefore it's only in hindsight that you can attribute an action to otherwise detached remarks made about something or someone. For example, would RationalWiki be held to account if someone was to send a letter bomb to Ramanand Jhingade, Esther Hicks or any number of people we're critical of? The line between something that is critical and something that is inflammatory or incitement is very blurred, and usually only after the murder or attack do we start to crystallise this line into something more absolute. postate 17:31, 11 January 2011 (UTC)
- Good point, but
- RationalWiki doesn't have a fraction of the audience Glenn Beck does.
- RationalWiki doesn't speak to a group that thinks in terms of "Second Amendment Solutions".
- RationalWiki generally speaks to people who enjoy mocking their opponents.
- MDB (talk) 17:57, 11 January 2011 (UTC)
- Also, legal responsibility may be hard to prove, but moral responsibility is something only your own conscience can convict you of. Did Glenn Beck feel anything at all when someone went to shoot up the Tides Foundation? MDB (talk) 18:19, 11 January 2011 (UTC)
- I watched the Daily Show where Jon Stewart talks about it, and to be honest his speech nearly brought me to tears. "When the reality of that rhetoric, when actions match the disturbing nature of words, we haven't lost our capacity to be horrified. And, please God, let us hope we never do." The part that made me almost burst into tears is that, even as he's speaking, there are nutjobs out there using this to score political points for their side. It sickens me, to be honest, that people can take death and tragedy like this and turn it into another goalpost for their election plans. Then again, These Fuckers are predictable, as always. -- CodyH (talk) 18:29, 11 January 2011 (UTC)
- Let's not get too smug. A lot of people on the Left, and I include myself in that number, immediately jumped to the conclusion, or at least wondered if, the shooter was a Tea Partier. And that number included people with media platforms. MDB (talk) 18:31, 11 January 2011 (UTC)
- +1. I don't think that point can be overstated from a rationalist POV. What we need to analyse is media whoring and knee-jerking and both ends of the political spectrum do that quite a bit. It's completely forgivable for people to want to build narratives and motives and explanations (it's how the human mind works) but it's a different thing entirely to do so without evidence or indications. postate 19:15, 11 January 2011 (UTC)
- And also remember that as recently as the Sixties, violently extreme rhetoric was common on the Left, and not just rhetoric, actions too. And while the American left is largely non-violent now, you still have environmental and animal rights extremists who are willing to discuss, and even use, violence (though most of it is directed at property and not lives.) MDB (talk) 19:22, 11 January 2011 (UTC)
- All I can say in defense of my own knee-jerking is that a Teabagger flipping out and killing someone seems VERY PLAUSIBLE these days. --Gulik (talk) 00:07, 14 January 2011 (UTC)
A brief disclaimer[edit]
I should point out that the Knoxville UU Church shooting affected me deeply. I'm from Knoxville originally, and one of the people there that day, who ended up being the church's unofficial media spokesman after the shooting, was a college classmate of mine. Further, I can't help but think that if I still lived in Knoxville, I'd have known a lot more people that were there, if not been there myself, considering how small the progressive community is in Knoxville. So, please forgive me if I'm a little emotional about this issue. MDB (talk) 18:45, 11 January 2011 (UTC)
The Young Turks[edit]
I have become a bit of a fan recently of this YouTube channel. The other day they spent sixteen minutes detailing the violent threats made against liberals over the last ten years and the number of violent or potentially violent acts in the last two years, including people who started shooting at the police because they were paranoid the government was coming to take their guns (remember that when Obama was first elected). This has been more than an isolated incident of anti-Government violence. - π 00:46, 12 January 2011 (UTC)
- I saw that too. It was great - Cenk just blew me away with his segment. Darkmind1970 (talk) 11:36, 12 January 2011 (UTC)
- Angry Cenk is always fun to watch. Although I'm impressed by the butthurt in the comments. "How dare liberals blame this tragedy on the right!!" - say the same sort of people who blamed the Columbine shootings on atheism and evolution. You know, the fact that I wouldn't outright attribute Giffords' shooting to politics doesn't mean I weep tears of sympathy for the wingnuts claiming persecution. postate 16:21, 12 January 2011 (UTC)
Speaking of YouTube[edit]
Has this been mentioned? (Gabrielle Giffords warns Sarah Palin there will be consequences) Him (talk) 05:48, 12 January 2011 (UTC)
Will this world survive?[edit]
I was just walking back from the shop and got given a pamphlet from a Jehovah's Witness. Fortunately I had my iPod on at the time and was listening to Toots and the Maytals, so the guy couldn't engage me properly. I've just thumbed through it, and it's basically five pages of Old Testament quotes, saying the end of the world is nigh, and begins with "Has a world ever really ended before? Yes, a world did end. Consider the world that became very wicked in the days of Noah." I haven't seen a great deal of religious propaganda writings like this. Are all Jehovah's Witnesses biblical literalists, as in Global Flood, creation, etc.? SJ Debaser 15:30, 12 January 2011 (UTC)
- Every one of them I've known has been, and then some. Jehovah's Witnesses are kooky. Lord Goonie Hooray! I'm helping! 15:32, 12 January 2011 (UTC)
- Aye-ay-aya! Aye-ay-aya! You were hugging up the big monkey man! As for the Jobos, yes, biblical literals they be. I always engage them, as most of the ones I've met actually know very little about the bible, other than the choice quotes they learn rote. I normally have them saying "erm, we've got to go now..." DeltaStarSenior SysopSpeciationspeed! 15:50, 12 January 2011 (UTC)
- I actually feel a bit sorry for them in a way. They're convinced the world will be destroyed and they will ascend into heaven, except by their teaching the gates of heaven are closed and so they'll have to stay on earth, which has been destroyed (oh, but when the earth gets full up they get to live on the moon, or something). They preach about being saved from the end of the world which is going to happen any day now, and yet it doesn't. Time drifts along and one by one they pop their clogs like the rest of us, with nowhere to go because they can't get into heaven and the earth isn't ready for them yet, on account of the world's destruction not happening. They set dates for the end of the world time and time again and the date comes and goes time and time again. Sad really. CrundyTalk nerdy to me 16:22, 12 January 2011 (UTC)
- I remember reading one of their Watchtower pamphlets years ago, and yes, they believe the Bible is literally true. They explained it thusly: Is the Bible true? Yes! It says so in the Bible! DickTurpis (talk) 16:36, 12 January 2011 (UTC)
Ways of dealing with such persons:
Bring in black pudding, blood donating and suchlike.
'I am a very busy Rational/other-wiki-of-choice-ian: can I convert you?'
'Actually I am waiting for Ragnorak/other doomsday of choice.' 82.44.143.26 (talk) 16:33, 12 January 2011 (UTC)
- Ragnorak? What movie is that from? --Idiot numbre 188 (talk) 17:30, 12 January 2011 (UTC)
- I think it's this one, with a smattering of lutefisk. Sprocket J Cogswell (talk) 18:37, 12 January 2011 (UTC)
I never really understood the religion. Only 144,000 people can get into heaven, and it's primarily a Calvinistic school of thought (pre-determinism: you are saved or damned no matter what)... isn't it a cosmic joke to have 18 million converts and less than 1% predetermined to be saved after a life of devotion? Why not live it up if it didn't matter what you did? If we are supposed to have free will and our actions are being judged, why are we damned/saved before we can even make the choices? Is that really a good definition of a fair and loving deity?
I loved asking these questions, it was an easy way to get a witness red in the face and raving mad. ~ Subsound ~ 15:54, 14 January 2011 (UTC)
Related[edit]
You know, I was going to post about a Jehovah's witness coming around all the time, but didn't get around to it. When my step-dad passed away 2 years ago, the vultures started coming around every month to hand us their literature, to help us through our difficult times.. And I hope no one's mentioned this, but maybe. Last November's issue of "Awake!" was all about the evil atheists on the move, and how they should "keep their opinions to themselves, and let others worship in peace" or some such claptrap. I found the irony very delicious. Took me a bit to find it, the above was written from memory, as my wife chucks that crap out the moment she finds it, so here: November 2010 Awake! "A NEW group of atheists has arisen in society. Called the new atheists, they are not content to keep their views to themselves. Rather, they are on a crusade, 'actively, angrily, passionately trying to persuade the religious to their point of view.'" What jerks. Stop trying to convert people to your point of view, Atheists!! Quaruplague - You can't explain that! 19:52, 13 January 2011 (UTC)
Banning Funeral Protests[edit]
Arizona has passed a law banning protests at funerals.
It's clearly directed at the plans of the Westboro Baptist Church to picket at the funeral of the nine year old victim of the Arizona shooting.
I'm of mixed feelings on this. One the one hand, protesting a funeral is a horrible act, especially when it's the funeral of an innocent little girl.
On the other hand, the entire point of free speech is to protect controversial and offensive speech; what's the point of the freedom of speech is you're only allowed to say nice things?
Thoughts?
MDB (talk) 17:27, 12 January 2011 (UTC)
- Free speech has limits. I would think that everybody should be able to agree that invading such a personal and heart-wrenching moment to simply scream offensive and hateful language, especially when the deceased is not a public figure, nor are they in any way connected to the cause you are "protesting" certainly falls into a protected zone. Speaking just for Arizona, I love how they know that this will probably be thrown out by the courts, but not in time to allow WBC a platform. SirChuckBCall the FBI 17:42, 12 January 2011 (UTC)
- WBC has announced they won't protest the funeral of the little girl. --Leotardo (talk) 17:45, 12 January 2011 (UTC)
- (EC) Good taste has limits. Using the law in this way is heavy handed. I fail to see what point WBC could possibly make other than that they are hateful people, however, I can well imagine that there might be inflamed passions on both sides so preventing access by WBC might be seen as a public order issue. Lily Inspirate me. 17:48, 12 January 2011 (UTC)
- Yep. As I said earlier, it's just a publicity stunt. postate 19:06, 12 January 2011 (UTC)
- Come to think of it, I'm saying "publicity stunt" as if that's better - it's far worse to do it for that sort of reason. I'd be far happier if the WBC actually grew a pair and carried out half their threats and stunts these days, rather than building a media story and quietly doing away with it. Fucking assholes. postate 19:32, 12 January 2011 (UTC)
- I must be missing something here. Why would they want to picket the funeral? What point is it that they wish to make that this law will prevent them from making?--BobSpring is sprung! 19:47, 12 January 2011 (UTC)
- If you want to get technical, they're not so much picketing the funeral as using the funeral as an excuse to picket to remind people of their views. They think the shootings were a sign of God's hatred of America. MDB (talk) 19:54, 12 January 2011 (UTC)
- Yeah, it's more a "LOOK AT US LOOK AT US GOD-DAMMIT LOOK AT US!!!" and doing it somewhere they can get a lot of attention, hence funerals. They've never really protested a funeral, more used the fact that they know it's tasteless to gather media attention. This is why banning them from making their points plays into their hands. Also, lets think of anyone who donated to the "angel wings" project and has effectively lost money now that WBC have cancelled. postate 21:03, 12 January 2011 (UTC)
(undent) Before looking at the speech and legal aspects, I remind all that Phelps & Co. were not especially prepared for counterprotests at Comic-Con]. Still, Comic-Con didn't go far enough. I picture a group of counterprotesters, alongside WBC, mostly in pink and lavender, with signs referring to the sexual desirability of selected WBC members. A few really bad drag queens, and more standard leathermen/leatherwomen, would help.
Absolutely legal psychological warfare. WBC seems to have a strategy of provoking people into attacking them and suing; let's see how it works when the motorcycle boots and stilleto heels are on the other foot.
There is some interesting case law that restricts protests against people deemed especially vulnerable, such as hospitalized patients.
Armondikov, I respectfully disagree that they are assholes. The anus has several useful purposes -- where else would you put a rectal thermometer? No, I don't think WBC qualifies. Howard C. Berkowitz (talk) 22:13, 12 January 2011 (UTC)
- Sorry, I knew I would be insulting the humble pooper with that comparison. Comic-Con was a classic, it was measured, good-natured and quite absurdest. Going a little further would be interesting (there was a similar "kiss-in" for the Pope, for those who may not remember!) but I think the absurdum point has already been made so it's best to now hit them where it hurts and starve them of the publicity (not going to happen, of course, even me sat here discussing it gives them the publicity! But you never know when people will get truly bored of this shite). postate 23:20, 12 January 2011 (UTC)
- The major problem is this is all about the publicity for WBC. The Arizona state legislature knows that this won't stand up in appeals, and the WBC, well staffed full of lawyers will just show up anyway, be arrested, and file a nice fat lawsuit for violating their free speech rights. WBC counts on this as it supplies their continued funding. They effectively put themselves in a no lose situation, as they always do. Unless the supreme court would actually back a free expression restriction, which they will never do, they know how to play the law precisely to their advantage. EternalCritic (talk) 04:31, 13 January 2011 (UTC)
- The biggest problem with the counter protest idea is that it doesn't stop the actual protest. By that I mean, for comic-con and such, no one really cared that they were there. The people who went there most likely know who the WBC were, what their deal was, and were ready to laugh at them. The counter-protesters were just having some fun (as well as doing some good), and the whole affair was just a decent laugh, frankly.
- But that only works when their targets are something silly, like a play or nondescript public event. When it's a funeral, the counter-protest doesn't really do much good. I mean, again, I like the idea of following them around with oiled up mostly naked guys who dance around them the whole time, but that doesn't do the friends and family of the deceased much good when there's this giant clusterfuck going on within sight of the coffin. That's not the counter-protesters fault, but it doesn't really address the main issue of them showing up at all. I really don't like the idea of banning any form of expression either, though.
- Eternal has it right, above me. They've put themselves in a no-lose situation, and they always do. Once you understand that they really, really just don't give a flying fuck what anyone else thinks about them, and that they're on pretty solid legal ground, there's not much you can do about them. X Stickman (talk) 10:17, 13 January 2011 (UTC)
I glanced at some news reports, but I can't do indepth googling from here at work, but did WBC actually show up yesterday at the girl's funeral? They do have a history of saying they'll show up, and not doing so, though we all wish they would use that tactic far far more... - Ravenhull (talk) 10:39, 13 January 2011 (UTC)
- And the ACLU would literally have no choice but to defend them on that... postate 13:01, 13 January 2011 (UTC)
- The Phelps family is largely made up of civil rights attorneys. They Neither want nor need the ACLU's help. EternalCritic (talk) 19:44, 13 January 2011 (UTC)
Portray Phelps as a Satanist. Not by conservative religious leaders. He's immune to their pablum. Hit 'em in a way that the WBC can't hit back.
Try this.
You know how he remakes pop songs. The same could be done to him.
Make a video. Have children sing.
"Fred Phelps hates the little children. All the children of the world.
Yellow, brown, black, and white.
They are god-damned in his sight.
Fred Phelps hates the little children of the world."
Or do a version of this video. Replace "bitch" with "wretch," and "Kyle's Mom" with "Fred Phelps."
FRED PHELPS HATES AMERICA
FRED PHELPS WOULDN'T LAST 5 MINUTES OUTSIDE OF AMERICA
FRED PHELPS IS A SELF-HATING FAGGOT
FRED PHELPS LOVES EVIL
FRED PHELPS WANTS YOU TO HATE JESUS
:-D Civic Cat (talk) 21:00, 14 January 2011 (UTC)
WTF?!?[edit]
I was checking here and Wikipedia on Raëlism.
Came across these.
Here's the pic (and here's the article).
While we're at it, is this music video ("GAYE BYKERS ON ACID GIT DOWN PROMO") racist?Civic Cat (talk) 20:41, 13 January 2011 (UTC)
- Ace? Is that you? --Kels (talk) 01:29, 14 January 2011 (UTC)
- I don't know. Kids these days. The Raelian pic looks worse, but then they seem like they are trying to honour Africans. I would say I don't think either one makes black people look bad. ~ Lumenos (talk) 01:43, 14 January 2011 (UTC)
- Fuck. I forgot I still have to finish reading the copy of Rael's book I own. Thanks for reminding me. Fuck. –SuspectedReplicant retire me 01:45, 14 January 2011 (UTC)
- Just to be clear: No, I didn't buy it. It was given to me (actually, I think he loaned it to me and I just never returned it) by one of the high-up people in the UK branch of the cult. –SuspectedReplicant retire me 01:48, 14 January 2011 (UTC)
Here's the only Ace I know of, save for Ace Freely. [2], [3], [4], [5], [6]. Them topless babes is cute though. :-D Civic Cat (talk) 20:40, 14 January 2011 (UTC)
What the hell has happened?[edit]
Alan Keyes is making sense for once (right up until the last line at least). - π 06:21, 14 January 2011 (UTC)
- Well, anyone who opens with a Sherlock Holmes quote has quite a bit to live up to. And yeah, it makes sense; although you can still see it's nothing more than an attack on liberals. Replacing "liberal" with "elite" as if he was referring to pundits and leaders on all sides of the spectrum but it's clearly meant to mean "liberal elitists". In the middle of a "national crisis" - that would probably mean health care reform and the fear the US is slipping into Communism (haha!). He's just taken an anti-left rant and dressed it up as an anti-partisan rant. Still, you're right, the meat of the point about liberty and danger is fairly true. postate 12:19, 14 January 2011 (UTC)
Floods[edit]
We all know about the terrible Oz floods, but there's also Brazil and SA. Given the snow storms up north and the rain down south, only a moron would continue to say that nothing's wrong. --Ψ GremlinPraat! 14:05, 14 January 2011 (UTC)
- You're right. We must scour our souls to find what we've done to offend god and appease him without delay. --JeevesMkII The gentleman's gentleman at the other site 14:55, 14 January 2011 (UTC)
- Sacrifice more virgins! Генгисevolving 14:56, 14 January 2011 (UTC)
- No! I'm not ready to die just yet. oh... wait... --Ψ GremlinParla! 15:02, 14 January 2011 (UTC)
- Don't worry, I think that only works for volcanoes anyway. I think floods call for the sacrifice of clean animals. --JeevesMkII The gentleman's gentleman at the other site 15:04, 14 January 2011 (UTC)
- In seriousness, though, we have a larger population and a wider ability to broadcast events like this, and of course the possibility of statistical clustering... so I'm not sure we can say something is going incredibly "wrong" as such. The environment is always going to have more confounding variables than you're aware of. postate 15:12, 14 January 2011 (UTC)
- Whilst there is much truth in what you say the increase in extreme weather conditions is outstripping the increase in, for example, volcanic eruptions - see this WP article. If it was all better information exchange you would expect the two to correlate. Jack Hughes (talk) 16:15, 14 January 2011 (UTC)
- I don't doubt that climate change leads to this sort of thing (having 20% of a degree directly relating to environmental science). But it's still worth remembering that reporting of events can easily exaggerate the severity of the increases. Flooding, for instance, can be affected by flood defence failures and where settlements are placed. If your flood defences work, then you have little or no flooding, but when they fail you definitely hear about it. While I don't think this explains 100% of the increases we see, it's something that should be remembered whenever the subject is broached. postate 17:17, 14 January 2011 (UTC)
- Of course, I have to sigh at the comments (just skip straight to "best" comments) in this recent Mail article. I'm not going to mince words, if you think a bit of ice confined to one country for one winter says that global temperature increases are false you're a fucking idiot. postate 18:23, 14 January 2011 (UTC)
Wikipedia vs Britannica[edit]
Okay, so this Nature study is now a few years old (mentioned in a recent BBC article). But I'm really not surprised by it. As much as WP's critics like to say "anyone can edit it and put in something wrong" that applies equally to "anyone can edit it and remove something wrong". And they can do it quick. The thing about science is that it is constantly being updated and improved - so WP's system I believe is far superior to any other for science. With humanities like history it may not be, as collaborative editing might dilute the structure and order of the wordy and convoluted essays expected of such subjects, but science is a different thing when it comes to both development and writing style. If you can't immediately correct a fact then you will have a problem keeping your information up to date, and the rate that science can be studied and improved means that this is a far bigger problem. A suitably out of date copy of Britiannica may well say that the atom is modelled like a plum pudding. postate 16:00, 14 January 2011 (UTC)
- I remember while working at my sister's school, doing networking BS, we found an old copy of some dictionary. Not that we did much with it, but thought it'd be funny to look up "computer". "one who computes". When I was at university, I had to take a "Senior Seminar", essentially a bunch of seniors discussing the implications of new technology. The instructor had a beef against wikipedia, and my counter was, "Yeah, that's why we don't use Linux, since anyone can just modify it, it's less secure and more prone to hackers! Oh wait.." (in the CS department, we only had linux machines) I tried to get across that even if you assume absurd statistics, like 25% of wikipedia editors are wandals, that still leave 3 people to instantaneously fix BS.. He didn't really get it. That was actually a common thing. I guess that's what happens when you have the oldest faculty member teach about "bleeding edge" topics. Quaruplague - You can't explain that! 16:20, 14 January 2011 (UTC)
- One of my lecturers wasn't against wikipedia as a whole (although it was against university policy to allow it as a source/reference for papers), but he did repeatedly tell people to be wary of it. His reasoning was that although mistakes can be reverted, you might catch the page between the wandalism and the correction. Some things have been left up there for a long time simply because the page isn't particularly popular (or because everyone who notices it thinks it's funny), and you might be unlucky and end up getting some lies when you're researching.
- That said, anyone doing any academic research at all shouldn't be using *any* encyclopaedia for anything more than simple definitions, really. Use it as a starting point and move on. That's where wikipedia is more useful than most people give it credit for; it's very easy to trace the sources it uses, which are better for research anyway. X Stickman (talk) 18:25, 14 January 2011 (UTC)
- Oh Gods, yes. Got to love the ref list, makes the visits to the library one hellava lot easier.
- Whenever I have a research paper to do, I go to Wikipedia first thing, mostly because almost all the articles have a handy reference finder right at the bottom to speed along my search for relevant material. It's also great in a non-research sense when I don't know something about a topic and need a quick idea of what it's all about. In that same line, CP is great when I don't know something about a topic and want to find out why all athiests are fat. SirChuckBThat is all 18:36, 14 January 2011 (UTC)
- I think most people are adopting the approach that "Wikipedia is fine as a first port of call, but it's NOT okay for it to be the last or only reference". And I think anyone who knows anything about Wikipedia would be hard pushed to disagree with that, it's very important that you know how to use it properly; i.e., read the summary, check the references and read those, check the talk page for disputes and then try somewhere other than WP to see if it agrees. So long as it's used like that (if you want to use it for academic learning), it's a fantastic thing to have available. In the undergraduate practical marking I'm doing now I can tell which students have been to Wikipedia because they all use the exact same two references and make (almost) the exact same summary of the references (well, they're 2nd years, they don't any better at this point). The more honest ones say "references compiled from Wikipedia". The better ones actually get off their arses and read those references and you can tell in their writing and what they say about them. Of course, the mark scheme says nothing about this in a formal sense so it really just determines whether I give them a smiley face or a snarkcastic comment. My favourite being the time when one student copied the first half of the relevant WP paragraph and his partner copied the second half of it! Well, maybe the derogatory comment I left for them was a little unprofessional but, frankly, they deserved it.
- It makes me feel quite old because when I did the exact same undergraduate practical only a few years ago WP was still this very "new" thing and no one really knew how to react to it properly. Indeed, it's only reasonably recently that academic tutors have started thinking properly about how to cite the internet in their references. Increasingly, things found on YouTube are being used and cited. Most progressive and non-technophobic lecturers seem to be for this, but no one's found a good, consistent attitude towards it yet. postate 19:50, 14 January 2011 (UTC)
- Are things found on youtube accepted as legitimate citations?
- I see Wikipedia as a grand experiment in epistemological sociology that has gone
very far from wrongpretty much right in many respects. Us oldphartz are still unaccustomed to the fluvial transience of its morphology, it seems. I am old enough to remember citing a paper encyclopedia being frowned upon. Sprocket J Cogswell (talk) 20:14, 14 January 2011 (UTC)
- Interviews and such. Pretty much the way we'd cite someone saying x, y or z. But as I said, it's mostly the very progressive humanities lecturers who are okay with it. You just have to be honest about where you get your information from, and if YouTube hosts your source, you have to cite YouTube. There are also some lectures posted online - MIT podcasts are apparently very good - that would make adequate citations for information. I don't see it as any different to merely citing an academic textbook, which are just collections of information that are, in principle, exactly the same as an encyclopaedia. postate 21:03, 14 January 2011 (UTC)
- Of course, the casual user (say, when did roller derby become popular again? Oh, WP said this...) doesn't really bother with the refs, but that's fine. Most stuff of that nature is pretty accurate and even if it's off marginally it's no big deal. That's a lot of what I've seen it used for, conversations of that nature, or simply wandering through and learning the broad strokes of various things for pleasure. --Kels (talk) 16:18, 15 January 2011 (UTC)
The politics of encyclopedias aside, I work in IT when someone wants to provide a 101 brief on some topic they invariably point to a wiki article. I agree that it isn't the best source for "serious" research but it is an excellent jumping off point for all of the reasons named above. Me!Sheesh!Mine! 17:06, 15 January 2011 (UTC)
Many kinds of awesome[edit]
Pumping tunes + drugs + video games - an unbeatable combo:
DogP (I buggered up my sig) (talk) 17:46, 14 January 2011 (UTC)
- Wow, another more or less pointless faux 8-bit video set to pretty average techno to toss on the pile. Thanks! --Kels (talk) 05:42, 16 January 2011 (UTC)
An interesting read:[edit]
Sen (talk) 12:37, 15 January 2011 (UTC)
Ship of Spies[edit]
Heard this interesting documentary about a security and intelligence themed lecture cruise on Radio 4. Although it sounds like the sort of thing that RobS might go on it's actually a bit scarey. As it's BBC radio non UK residents can also listen to it. Генгисevolving 14:59, 15 January 2011 (UTC)
Laptop help please...[edit]
Yo, I know there are many computer
geeks enthusiasts on here so forgive me if I may seek some assistance.
The Missus has got a Dell Inspiron 6000 laptop, running XP (it runs like an absolute piece of shit but she won't let me put Ubuntu on it for her). Anyway, in the last few days the left mouse button (on the touchpad) has stopped working, initially I thought it was a dodgey button, but tapping the touchpad itself doesn't work either. The right button is fine. To prove how shit windows is and show that the problem lie there, I fired up a USB install of Ubuntu, yet the problem is still the same; the left button doesn't register, and neither does tapping the pad. What the feck is going on? DeltaStarSenior SysopSpeciationspeed! 17:26, 13 January 2011 (UTC)
- The fact that tapping it on the pad doesn't work makes me think that the button is stuck. Do things become selected as you mouse over them? If not, it's probably fucking broken. If that was the case go to (in Ubuntu, I'm assuming there is some windows analog to this procedure which I wish you the best of luck in finding) System -> Preferences -> Mouse and change the mouse to be left handed. Then go to accessibility and enable the simulated secondary click. That way you can keep all the functionality using only your working right mouse button. Occasionaluse (talk) 18:33, 13 January 2011 (UTC)
- on my laptop the pad and button thing is called a symantic pointing device and there is an icon for it in the tray at the bottom right. If you dont have that icon try control panel, system, hardware tab, device manager , mice and pointing devices. That will get you to troubleshooting for the device and you can disable it from there too. A usb mouse should install ok if you have a spare. Hamster (talk) 19:41, 13 January 2011 (UTC)
- Ah, so Linux didn't magically fix a blatant hardware issue! If your drivers for the touch pad are up to date and in working order then the problem is clearly physical. You'll have a loose connection or a dodgy button. I.e., it's frakked. postate 20:12, 13 January 2011 (UTC)
- No, you've just turned the mousepad off, that's all. Most laptops come with an on/off facility built in so that you can deactivate it & just use a plug-in mouse. My old laptop had an obvious, though small, button next to the mousepad for switching it. My new one has a tiny pinhole at one corner of the pad. If I tap on the pinhole twice, the mousepad is disabled & a light comes on in the pinhole to show it's off. I can tap on it again to turn it back on. Check your mousepad & keyboard area & I'll bet you'll find there's some sort of switching device in there somewhere. ₩€₳$€£ΘĪÐMethinks it is a Weasel 20:40, 13 January 2011 (UTC)
- How to turn the touchpad on/off on a Dell Inspiron 6000 is outlined here. If this fails, it could be a driver problem, esp if you've installed a different OS or other software which could interfere. Basic advice & links here. WéáśéĺóíďMethinks it is a Weasel 00:28, 14 January 2011 (UTC)
- Also, if you can do without the laptop for a few days, take it to your local computer repair shop, one of the small outfits, not one of the big chains. They should be able to at least diagnose the problem for you and with any luck it'll be a simple fix. And for those wondering why I keep pushing the local computer repair shop, no I don't own one, I just know they can be bloody brilliant for things like that. For example, the local place I use will quite happily diagnose a problem and not charge you for it, reasoning, quite rightly, that you'll then use them for the repair. Better than somewhere like PC World who'll want a second mortgage out of you before they'll look at the damned thing.-- Spirit of the Cherry Blossom 21:51, 13 January 2011 (UTC)
- biggest problem is getting the key off to clean under it , if its stuck, without breaking it 67.72.98.45 (talk) 04:19, 14 January 2011 (UTC)
- I had a problem with the keyboard on a Dell Inspiron once, the key mapping went awry. I tried everything I could think of and eventually called Dell tech support as it was still within its 3-year warranty. The Indian support lady started going through all the usual stuff - disconnect from mains, remove battery... - I said that I had tried all that but she persisted. Having removed all power she told me to press and hold the power button for 30 seconds (maybe longer). When I connected everything back up it all worked perfectly. Obviously there was a known problem with an internal setting being corrupted but completely discharging the power reset it. There may be a similar thing with the touchpad. However a quick Google search shows that touchpad problems with the Inspiron 6000 series are quite common. One solution I found was to change a BIOS setting for pointing device from "PS/2-Touchpad" to just "PS/2". Personally I dislike touchpads on laptops as they invariably are activated when I type so I disable them and use a wireless mouse. Генгисevolving 10:20, 14 January 2011 (UTC)
- "Shibboleet!" postate 19:52, 14 January 2011 (UTC)
Fox News vs...Pedobear?[edit]
This was too good to pass up- | Talk! Scream! Share! 04:13, 14 January 2011 (UTC)
- Do these idiots ever spend more than five minutes researching something? 86.142.142.223 (talk) 07:56, 14 January 2011 (UTC)
- Please, why use facts, when you can get a good old scare going on. Fox has run out of terror/Obama is evil stories, but needs to populace to be scared. A scared population is a controllable population. And a dumb population does things like attack paediatricians. Of course, in the US, they'd shoot them, then realise they fail English comprehension forever. --Ψ Gremlin講話 11:19, 14 January 2011 (UTC)
- In defence of Fox, dressing up as Pedobear would be a great cover - who suspects the man dressed as Pedobear? As that cop said, it's no longer an internet prank or parody if someone has taken it a step too far. postate 12:22, 14 January 2011 (UTC)
- Actually, that was a local fox station, not the cable network Fox News. While the 24 hour network is insanely conservative/paranoid/conspiracy theory driving twits, the local stations don't really have a true editorial slant, they're mostly human interest stories and shit like that to scare people. Local ABC, CBS and NBC stations are just as bad. SirChuckBGentoo Penguins is the best kind of Penguin 16:07, 14 January 2011 (UTC)
- Thank goodness they can be recognised by the bear suit. Think how much harder they would be to spot if they just wore normal clothes.Please tell me that doesn't need joke tags.--BobSpring is sprung! 17:02, 14 January 2011 (UTC)
- No, it doesn't need joke tags. We can tell this because you followed it with "Please tell me that doesn't need joke tags.". :P postate 17:25, 14 January 2011 (UTC)
- OH SNAP- Oh Fox, when will you ever learn...Quackpack11! | Talk! Scream! Share! 20:18, 16 January 2011 (UTC)
- "Police say pedophiles will use the Pedobear figure as a decal on their car window to signify a connection with other pedophiles."
- Excuse the daylights out of me? There are some parts of my disbelief which flat out refuse to be suspended. Sure, someone is going to tag their own car with a symbol that any pre-teen who can click a mouse will recognize. Get out of town... Sprocket J Cogswell (talk) 20:27, 16 January 2011 (UTC) | https://rationalwiki.org/wiki/RationalWiki:Saloon_bar/Archive88 | CC-MAIN-2022-21 | refinedweb | 23,690 | 69.72 |
A Bit of Background
OpenCV is an open-source computer vision and machine learning library. It contains thousands of optimized algorithms, which provide a common toolkit for various computer vision applications. According to the project’s about page, OpenCV is being used in many applications, ranging from stitching Google’s Street View images to running interactive art shows.
OpenCV started out as a research project inside Intel in 1999. It has been in active development since then, and evolved to support modern technologies like OpenCL and OpenGL and platforms like iOS and Android.
In 1999, Half-Life was released and became extremely popular. The Intel Pentium III was the state-of-the-art CPU, and clock speeds of 400-500 MHz were considered fast. A typical desktop CPU in 2006, when OpenCV 1.0 was released, had about the same performance as the A6 chip in an iPhone 5. Even though computer vision is traditionally considered a computationally intensive application, our mobile devices have clearly passed the threshold of being able to perform useful computer vision tasks, and with their attached cameras they can serve as extremely versatile computer vision platforms.
In this article, I will provide an overview of OpenCV from an iOS developer's perspective and introduce a few fundamental classes and concepts. Additionally, I will cover how to integrate OpenCV into your iOS projects and share the basics of Objective-C++. Finally, we'll look at a demo project to see how OpenCV can be used on an iOS device to perform facial detection and recognition.
Overview of OpenCV
Concepts
OpenCV is a C++ API consisting of various modules containing a wide range of functions, from low-level image color space conversions to high-level machine learning tools.
Using C++ APIs for iOS development is not something most of us do daily. You need to use Objective-C++ for the files calling OpenCV methods, i.e. you cannot call OpenCV methods from Swift or Objective-C. The OpenCV iOS tutorials tell you to simply change the file extensions to .mm for all classes where you'd like to use OpenCV, including view controllers. While this might work, it is not a particularly good idea. The correct approach is to write Objective-C++ wrapper classes for all the OpenCV functionality you would like to use in your app. These Objective-C++ wrappers translate OpenCV's C++ APIs into safe Objective-C APIs and can be used transparently from all Objective-C classes. Going the wrapper route, you will be able to confine the C++ code in your project to the wrappers alone, and most likely save yourself lots of headaches further down the road resolving hard-to-track compile errors caused by a C++ header erroneously included in the wrong file.
OpenCV declares the cv namespace, such that classes are prefixed with cv::, like cv::Mat, cv::Algorithm, etc. It is possible to use using namespace cv in your .mm files in order to drop the cv:: prefixes for a lot of classes, but you will still need to write them out for classes like cv::Rect and cv::Point, due to collisions with Rect and Point defined in MacTypes.h. While it's a matter of personal preference, I prefer to use cv:: everywhere for the sake of consistency.
Modules
Below is a list of the most important modules as described in the official documentation.
- core: a compact module defining basic data structures and containing basic functions used by all other modules
- imgproc: an image-processing module that includes linear and non-linear image filtering, geometrical image transformations, color space conversion, histograms, and so on
- video: a video-analysis module that includes motion estimation, background subtraction, and object-tracking algorithms
- calib3d: basic multiple-view geometry algorithms, single and stereo camera calibration, object pose estimation, and elements of 3D reconstruction
- features2d: salient feature detectors, descriptors, and descriptor matchers
- objdetect: detection of objects and instances of predefined classes (for example: faces, eyes, mugs, people, cars)
- ml: various machine learning algorithms such as K-Means, Support Vector Machines, and Neural Networks
- highgui: an easy-to-use interface for video capturing, image and video codecs, and simple UI capabilities (only a subset available on iOS)
- gpu: GPU-accelerated algorithms from different OpenCV modules (unavailable on iOS)
- ocl: common algorithms implemented using OpenCL (unavailable on iOS)
- a few more helper modules such as Python bindings and user-contributed algorithms
Fundamental Classes and Operations
OpenCV contains hundreds of classes. Let’s limit ourselves to a few fundamental classes and operations in the interest of brevity, and refer to the full documentation for further reading. Going over these core classes should be enough to get a feel for the logic behind the library.
cv::Mat
cv::Mat is the core data structure representing any N-dimensional matrix in OpenCV. Since images are just a special case of 2D matrices, they are also represented by a cv::Mat, i.e. cv::Mat is the class you'll be working with the most in OpenCV.
An instance of cv::Mat acts as a header for the image data and contains information to specify the image format. The image data itself is only referenced and can be shared by multiple cv::Mat instances. OpenCV uses a reference counting method similar to ARC to make sure that the image data is deallocated when the last referencing cv::Mat is gone. Image data itself is an array of concatenated rows of the image (for N-dimensional matrices, the data consists of concatenated data arrays of the contained N-1 dimensional matrices). Using the values contained in the step[] array, each pixel of the image can be addressed using pointer arithmetic:
uchar *pixelPtr = cvMat.data + rowIndex * cvMat.step[0] + colIndex * cvMat.step[1]
The data format for each pixel is retrieved by the type() function. In addition to common grayscale (1-channel, CV_8UC1) and color (3-channel, CV_8UC3) images with 8-bit unsigned integers per channel, OpenCV supports many less frequent formats, such as CV_16SC3 (16-bit signed integer with 3 channels per pixel) or even CV_64FC4 (64-bit floating point with 4 channels per pixel).
cv::Algorithm
Algorithm is an abstract base class for many algorithms implemented in OpenCV, including the FaceRecognizer we will be using in the demo project. It provides an API not unlike CIFilter in Apple's Core Image framework, where you can create an Algorithm by calling Algorithm::create() with the name of the algorithm, and can set and get various parameters using the get() and set() methods, vaguely similar to key-value coding. Moreover, the Algorithm base provides functionality to save and load parameters to/from XML or YAML files.
Using OpenCV on iOS
Adding OpenCV to Your Project
You have three options to integrate OpenCV into your iOS project.
- Just use CocoaPods: pod "OpenCV".
- Download the official iOS framework release and add the framework to your project.
- Pull the sources from GitHub and build the library on your own according to the instructions here.
Objective-C++
As mentioned previously, OpenCV is a C++ API, and thus cannot be directly used in Swift and Objective-C code. It is, however, possible to use OpenCV in Objective-C++ files.
Objective-C++ is a mixture of Objective-C and C++, and allows you to use C++ objects in Objective-C classes. The clang compiler treats all files with the extension .mm as Objective-C++, and it mostly works as you would expect, but there are a few precautions you should take when using Objective-C++ in your project. Memory management is the biggest point where you should be extra careful, since ARC only works with Objective-C objects. When you use a C++ object as a property, the only valid attribute is assign. Therefore, your dealloc should ensure that the C++ object is properly deleted.

The second important point when using Objective-C++ in your iOS project is leaking the C++ dependencies: if you include C++ headers in your Objective-C++ header, any Objective-C class importing your Objective-C++ class would be including the C++ headers too, and thus needs to be declared as Objective-C++ itself. This can quickly spread like a forest fire through your project if you include C++ headers in your header files. Always wrap your C++ includes with #ifdef __cplusplus, and try to include C++ headers only in your .mm implementation files wherever possible.
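To make the wrapper pattern concrete, here is one possible shape for such a class; the class name, method, and Canny call are illustrative sketches, not code from the demo project. The header stays pure Objective-C, while all C++ lives in the .mm file:

```
// FJEdgeDetector.h — pure Objective-C; safe to import from any class
#import <UIKit/UIKit.h>

@interface FJEdgeDetector : NSObject
- (UIImage *)edgeImageFromImage:(UIImage *)image;
@end

// FJEdgeDetector.mm — Objective-C++; C++ headers never leak to clients
#import "FJEdgeDetector.h"
#import <opencv2/opencv.hpp>

@implementation FJEdgeDetector
- (UIImage *)edgeImageFromImage:(UIImage *)image {
    cv::Mat src, edges;
    // ... convert the UIImage to a cv::Mat (e.g. via a category method) ...
    cv::Canny(src, edges, 50.0, 150.0);   // any OpenCV call stays inside the .mm
    // ... convert edges back to a UIImage and return it ...
    return image; // placeholder for the converted result
}
@end
```

Because FJEdgeDetector.h contains no C++, any plain Objective-C view controller can import and use it without becoming Objective-C++ itself.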
For more details on how exactly C++ and Objective-C work together, have a look at this tutorial by Matt Galloway.
Demo: Facial Detection and Recognition
So, now that we have an overview of OpenCV and how to integrate it into our apps, let’s build a small demo app with it: an app that uses the video feed from the iPhone camera to continuously detect faces and draw them on screen. When the user taps on a face, our app will attempt to recognize the person. The user must then either tap “Correct” if our recognizer was right, or tap on the correct person to correct the prediction if it was wrong. Our face recognizer then learns from its mistakes and gets better over time:
The source code for the demo app is available on GitHub.
Live Video Capture
The highgui module in OpenCV comes with a class, CvVideoCamera, that abstracts the iPhone camera and provides our app with a video feed through a delegate method, - (void)processImage:(cv::Mat&)image. An instance of the CvVideoCamera can be set up like this:
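A typical setup, using the properties CvVideoCamera exposes in OpenCV's iOS API (the parent view and the delegate-implementing view controller are assumed to exist elsewhere):

```
CvVideoCamera *videoCamera = [[CvVideoCamera alloc] initWithParentView:imageView];
videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionFront;
videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset640x480;
videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
videoCamera.defaultFPS = 30;
videoCamera.delegate = self;
```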
Now that we have set up our camera with a 30-frames-per-second frame rate, our implementation of processImage: will be called 30 times per second. Since our app will detect faces continuously, we should perform our facial detection here. Please note that if the facial detection at each frame takes longer than 1/30 seconds, we will be dropping frames.
Face Detection
You don't actually need OpenCV for facial detection, since Core Image already provides the CIDetector class. This can perform pretty good facial detection, and it is optimized and very easy to use:

CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
NSArray *faces = [faceDetector featuresInImage:image];
The faces array contains a CIFaceFeature instance for each detected face in the image. Face features describe the location and the size of the face, in addition to optional eye and mouth positions.
OpenCV, on the other hand, provides an infrastructure for object detection, which can be trained to detect any object you desire. The library comes with multiple ready-to-use detector parameters for faces, eyes, mouths, bodies, upper bodies, lower bodies, and smiles. The detection engine consists of a cascade of very simple detectors (so-called Haar feature detectors) with different scales and weights. During the training phase, the decision tree is optimized with known positive and false images. Detailed information about the training and detection processes is available in the original paper. Once the cascade of correct features and their scales and weights have been determined in training, the parameters can be loaded to initialize a cascade classifier:
// Path to the training parameters for the frontal face detector
// (any of the frontal-face cascades shipped under data/haarcascades will do)
NSString *faceCascadePath = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_alt2" ofType:@"xml"];
faceDetector.load([faceCascadePath UTF8String]);
The parameter files can be found under the data/haarcascades folder inside the OpenCV distribution. After the face detector has been initialized with the desired parameters, it can be used to detect faces:
cv::Mat img;
vector<cv::Rect> faceRects;
double scalingFactor = 1.1;
int minNeighbors = 2;
int flags = 0;
cv::Size minimumSize(30, 30);
faceDetector.detectMultiScale(img, faceRects, scalingFactor, minNeighbors, flags, minimumSize);
During detection, the trained classifier is moved across all the pixels in the input image at different scales to be able to detect faces of different sizes. The scalingFactor parameter determines how much the classifier will be scaled up after each run. The minNeighbors parameter specifies how many positive neighbors a positive face rectangle should have to be considered a possible match; when a potential face rectangle is moved a pixel and does not trigger the classifier any more, it is most likely a false positive. Face rectangles with fewer positive neighbors than minNeighbors are rejected. When minNeighbors is set to zero, all potential face rectangles are returned. The flags parameter is a relic from the OpenCV 1.x API and should always be 0. And finally, minimumSize specifies the smallest face rectangle we're looking for. The faceRects vector will contain the frames of detected faces in img. The image for the face can then be extracted with the () operator on cv::Mat simply by calling cv::Mat faceImg = img(aFaceRect).
Once we have at least one face rectangle, either using a CIDetector or an OpenCV CascadeClassifier, we can try to identify the person in the image.
Facial Recognition
OpenCV comes with three algorithms for recognizing faces: Eigenfaces, Fisherfaces, and Local Binary Patterns Histograms (LBPH). Please read the very informative OpenCV documentation if you would like to know how they work and how they differ from each other.
For the purposes of our demo app, we will be using the LBPH algorithm, mostly because it can be updated with user input without requiring a complete re-training every time a new person is added or a wrong recognition is corrected.
In order to use the LBPH recognizer, let’s create an Objective-C++ wrapper for it, which exposes the following methods:
+ (FJFaceRecognizer *)faceRecognizerWithFile:(NSString *)path;
- (NSString *)predict:(UIImage*)img confidence:(double *)confidence;
- (void)updateWithFace:(UIImage *)img name:(NSString *)name;
Our factory method creates an LBPH instance like this:
+ (FJFaceRecognizer *)faceRecognizerWithFile:(NSString *)path {
    FJFaceRecognizer *fr = [FJFaceRecognizer new];
    fr->_faceClassifier = createLBPHFaceRecognizer();
    fr->_faceClassifier->load(path.UTF8String);
    return fr;
}
Prediction can be implemented as follows:
- (NSString *)predict:(UIImage *)img confidence:(double *)confidence {
    cv::Mat src = [img cvMatRepresentationGray];
    int label;
    self->_faceClassifier->predict(src, label, *confidence);
    return _labelsArray[label];
}
Please note that we had to convert from UIImage to cv::Mat through a category method. The conversion itself is quite straightforward and is achieved by creating a CGContextRef using CGBitmapContextCreate pointing to the data pointer of a cv::Mat. When we draw our UIImage on this bitmap context, the data pointer of our cv::Mat is filled with the correct data. What's more interesting is that we are able to create an Objective-C++ category on an Objective-C class, and it just works!
Additionally, the OpenCV face recognizer only supports integers as labels, but we would like to be able to use a person's name as a label, and have to implement a simple conversion between them through an NSArray property.
Once the recognizer predicts a label for us, we present this label to the user. Then it’s up to the user to give feedback to our recognizer. The user could either say, “Yes, that’s correct!” or “No, this is person Y, not person X.” In both cases, we can update our LBPH model to improve its performance in future predictions by updating our model with the face image and the correct label. Updating our facial recognizer with user feedback can be achieved by the following:
- (void)updateWithFace:(UIImage *)img name:(NSString *)name {
    cv::Mat src = [img cvMatRepresentationGray];
    NSInteger label = [_labelsArray indexOfObject:name];
    if (label == NSNotFound) {
        [_labelsArray addObject:name];
        label = [_labelsArray indexOfObject:name];
    }
    vector<cv::Mat> images = vector<cv::Mat>();
    images.push_back(src);
    vector<int> labels = vector<int>();
    labels.push_back((int)label);
    self->_faceClassifier->update(images, labels);
}
Here again we do the conversion from UIImage to cv::Mat and from int labels to NSString labels. We also have to put our parameters into std::vector instances, as expected by the OpenCV FaceRecognizer::update API.
This "predict, get feedback, update" cycle is known as supervised learning in the literature.
Conclusion
OpenCV is a very powerful and multi-faceted framework covering many fields which are still active research areas. Attempting to provide a fully detailed instruction manual in an article would be a fool’s errand. Therefore, this article is meant to be a very high-level overview of the OpenCV framework. I attempted to cover some practical tips to integrate OpenCV in your iOS project, and went through a facial recognition example to show how OpenCV can be used in a real project. If you think OpenCV could help you for your project, the official OpenCV documentation is mostly very well written and very detailed. Go ahead and create the next big hit app! | https://www.objc.io/issues/21-camera-and-photos/face-recognition-with-opencv/ | CC-MAIN-2017-30 | refinedweb | 2,618 | 50.57 |
Really early days:
- App is baked into iOS 10. Just install a beta to your favorite (newerish) iPad.
- Houdah’s Type2Phone is the best for working with iOS playgrounds. Really great for copy/pasting.
- Some (but not all) Emacs keybindings work, which makes editing much easier. However, ^A/^E don’t respect the onscreen folding, so in squish mode, they’ll jump to the start and end of each \n-delineated line, not the folded version.
- I can’t figure out how to get things onto and off of the iPad: tried emailing playgrounds, playground pages, tried going through iTunes, looked in Xcode, even used PhoneView (Ecamm) and iBrowser (my little hack app) to try to track this stuff down. No luck so far.
- XCPlayground seems to be PlaygroundSupport — took me forever to figure that out because it's still XCPlayground on the desktop. And of course, all the new ObjC name import stuff isn't working as expected, so you do have to hunt around. Live views are PlaygroundPage.current.liveView assignments.
- To see both code and live view (or just live view), slide from the right.
Lots still to explore!
3 Comments
Nice example, but in Swift 3 I was not able to reproduce changing the border color of the frame.
let view = UIView(frame: CGRect(x: 0, y: 0, width: 300, height: 200))
view.backgroundColor = UIColor.blue()
// there is no borderColor attribute
// there is no borderWidth attribute
// there is no cornerRadius attribute
import PlaygroundSupport
PlaygroundPage.current.liveView = view
borderColor, etc, are layer properties. I use a convenience function. This one reflects the latest version of Swift 3:
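One way to write such a convenience function in Swift 3 — this is an illustrative sketch of the approach (forwarding to the backing CALayer), not necessarily the author's original helper:

```swift
import UIKit
import PlaygroundSupport

// Border and corner attributes live on the backing CALayer, not on UIView,
// so the helper just forwards to view.layer.
func style(_ view: UIView, borderColor: UIColor = .red,
           borderWidth: CGFloat = 2, cornerRadius: CGFloat = 8) {
    view.layer.borderColor = borderColor.cgColor
    view.layer.borderWidth = borderWidth
    view.layer.cornerRadius = cornerRadius
}

let view = UIView(frame: CGRect(x: 0, y: 0, width: 300, height: 200))
view.backgroundColor = .blue
style(view)
PlaygroundPage.current.liveView = view
```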
nice. | https://ericasadun.com/2016/06/13/ios-playgrounds/ | CC-MAIN-2019-43 | refinedweb | 270 | 74.9 |
.Net devs that went to Java and won't go back?
I meet a *lot* of former Java devs doing .Net. Former as in "I'm not going back" - they have issues with the language, the IDEs (plural, and spoken with scorn), with broken frameworks, etc.
Obviously, in my line of work, those are the people I'm likely to meet. :-)
I'm curious - do we have people here who have delivered .Net apps, but left .Net for Java and now wouldn't go back if they were paid to? What are the reasons for disliking .Net?
Philo
Philo
Wednesday, July 14, 2004
I hear a lot of folks comment that .NET gets everything right that Java gets wrong, but I'm not yet knowledgeable enough in .NET to make a judgment. I'm still learning C#.
muppet
Wednesday, July 14, 2004
I've used both, and they're both tools. I think .NET is a better all around toolset for the work that I do compared to Java. I would be fine being paid to write either one. I'd prefer to be paid to write Ruby. ;)
Brad Wilson (dotnetguy.techieswithcats.com)
Wednesday, July 14, 2004
Make .NET run better on *NIX servers damn it. There is no cross platform story for .NET from Microsoft, although I give the mono guys a lot of credit.
As a developer I like .NET (except for value types and boxing (ack) ).
As an admin I hate that I am stuck on Windows and IIS. Why do you still have to do most of your admining through the UI? Plus I still don't trust its security. I've just been burned once too many times by IIS. Plus kernel mode HTTP -- what crack addict came up with that idea?
christopher (baus.net)
Wednesday, July 14, 2004
From 1997 up until last year I had been developing in Java. I really like the language from an OO perspective but in my current gig I'm developing in .Net (mixture of C# and VB.Net).
Would I go back to Java. Absolutely - I have barely any complaints with it.
Reasons for disliking .Net? Not so much language based, but more to do with the tools. Even with the Resharper plugin, Visual Studio 2003 just rankles me. I really miss the power of IntelliJ Idea.
As an aside, it amazes me the number of people who say "gee, isn't .Net amazing because it can do XYZ". I just shrug and tell them Java has had that functionality for ages... ;-)
TheGeezer
Wednesday, July 14, 2004
I do both. They're about equally good. .Net cleans up the language a bit, but there are a lot more good APIs available in Java. So, for my own projects I'd choose Java. But in general I'll do whatever pays. There's really very little difference.
Dan G
Wednesday, July 14, 2004
"Why do you still have to do most of your admining through the UI?"
Not everything is in here, but probably more than you're aware of:
(I have a copy on my desk)
Philo
Philo
Thursday, July 15, 2004
I meet a lot of developers who are moving away from .NET to Java, or who don't even consider .NET mature enough to be worth a second of their time. They are giving the .NET technology 5 more years to mature and waiting for all the numerous security holes to show up.
Ha
Thursday, July 15, 2004
Give me .NET anyday. Java is a mother in law to manage.
Java/JSP is pure hell, man. I downloaded Sun Forte once when I was in college, and did I suffer.

Everything is a damned struggle. There is this specific directory you need to put the classes12.zip in to connect to Oracle from Forte. No proper help at all. The IDE is so slow that it takes forever to load. One day I moved some JSP files to the server and the same application wasn't working there.

We got it to work of course, but it's hell for beginners. Now, experienced folks, don't flame me when I say Java sucks. It may be a good tool once you know it in depth, but just to get some work done quickly, Java is hell.

As a programming language, VB.NET still sucks, I think. But it's light years ahead in usability and ease of use.
I actually use another tool from Gupta corporation for coding.
Karthik
Thursday, July 15, 2004
you're trying to set up an IDE for Java, and you complain that you can't get things done quickly in it?
Text editors are your friend, son. I write all my java code in syn, which is little more than a glorified notepad.
muppet
Thursday, July 15, 2004
A while back Phil (call me Phillip) Greenspun wrote a note about how the students in his web development class got to chose the language they used during the semester.
The PHP and .Net/c# guys were doing ok, but the Java boys were failing and wanting to switch languages.
Billy Joel on Software
Thursday, July 15, 2004
If all I cared about was developing apps for Windows devices, I'd probably go with .NET too. It's got a lot going for it.
I mean, even outside of the fact that Java runs on many architectures (not even its biggest selling point for me), it just has a lot of nice stuff and makes a lot of things possible. .NET seems to take all this a step further...for Windows.
Crimson
Thursday, July 15, 2004
Most of my experience is with Java, but I've been working with .Net recently. I'd prefer Java any day, as I find its libraries easier to use and more logical. And unlike others, I just dislike the Visual Studio environment. Maybe we're doing something wrong, but it crashes quite frequently during debugging on all the machines at our office.
Karan
Thursday, July 15, 2004
I swing both ways baby, a bit of .NET and a bit of Java. In my view they are almost the same. Other than .NET being limited to Windows, I don't see much reason for using one over the other.
What gives Java the edge, as far as I am concerned, is the IDE I use. IntelliJ IDEA for Java is fantastic, and unfortunately after using it Visual Studio seems clunky.
Herr Herr
Thursday, July 15, 2004
Somewhat off topic -- but I've never talked to anyone who tried Python and didn't stay with it, despite (relative) lack of tool support compared to e.g. Java or .NET.
Me, I have mostly dislike for Java and no opinion of .NET yet.
Ori Berger
Thursday, July 15, 2004
I am using IDEAJ as environment and open source components as libraries. Once you have all the jakarta-commons packages and beanshell, well, it's a dream to work with (and you have the source of all libs so that learning becomes easier).
I started studying .NET and made some apps. Well, looks fine but the learning curve is not worth it for the apps I am doing
Phil
Thursday, July 15, 2004
"I've never talked to anyone who tried Python and didn't stay
with it"
Take a look at Ruby users. A lot of them, like me, moved to Ruby from Python.
More here:
Ged Byrne
Thursday, July 15, 2004
I really love .NET, but it has to be said that at least for now Java is more mature. There is just too much stuff in 1.1 that requires calling out to unmanaged code.
Just me (Sir to you)
Thursday, July 15, 2004
Also, the direction in which .NET is heading for the web — i.e. abstracting all functionality into re-usable components, all wired together with javascript, and sent differently to different browsers — is enough to make me not want to use it EVER for web based stuff. I like it for WinForms stuff, it's quick and easy and gets the job done well, but I just don't have enough control over its web output.
That's more than enough to stop me using it for web apps, which is a shame. Any chance of you passing a note to the planning team Philo? I could give them plenty of detail about why I don't like asp.net, and it would be constructive...
Andrew Cherry
Thursday, July 15, 2004
So, Andrew, you want a phone directly to God?
lol
Thursday, July 15, 2004
As a total language junkie, four (Algol, PL/I, Asm-360, Fortran) prior to graduating Junior High, and many, many since, I was quite struck that someone found Java, the framework, and/or the libraries, sensible or logical. I have always used structured/object oriented techniques here (despite the language blocks) but nothing in Java makes sense to me. After wading through several books on Java, including Thinking in Java and the Java Black Book, I still don't get it. I call Java 'C++ in a straight jacket and rubber room on 500 hits of LSD' and I mean it.
.NET? The languages (VB.NET/C#) aren't any better, IMNSHO. And if you want to protect yourself from the outside user/cracker-cretin, well, forget it with them. They will merrily allow you to create buffer overruns, underruns, and every other cardinal sin at the drop of a hat. C++ wrapped around .NET is pretty doable but hard, worthwhile work. What I do love about .NET is that the namespaces (libs) make structural/cognitive sense here. I've been drilling down around in there since I met it and I've yet to be surprised at all. I sure can't say that about Java! Actually, I rarely see .NET code spewing its guts all over my system, but Java sure likes to do it. In any case, no matter what, if you want to write secure code, you need to validate inputs from the user (especially potential cracker-cretins) and your outputs for each function/block/program. I don't see any of these making that easy. [NB: could someone add strong typing with inherent grep-like pattern matching to a real language?]
I'm at work on a .NET project here and will be giving it a full workout as well as Mono 1.0. It'll be interesting since I have most of the pre-.NET application equivalent worked out as well but it won't be any fun at all (exception management from hell folks).
Asides: where Mono shines is the Mono-GTK# lash-up which will only get better methinks. Ruby/Python, neat languages, already keeping an eye on them, more Ruby than Python. I'd also like to see Forth.NET grow and I am most definitely NOT a Forth programmer/evangelist (Amiga/C, yep, dead on!). It is a definite candidate for inline parameter validation.
Brian J. Bartlett
Thursday, July 15, 2004
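Brian's point about validating user input at every boundary can be sketched in Java with java.util.regex; the whitelist pattern and method names below are illustrative, not from any particular codebase:

```java
import java.util.regex.Pattern;

public class InputValidation {
    // Whitelist pattern: 1-32 word characters. Anything else is rejected
    // before it reaches the rest of the program.
    private static final Pattern USERNAME = Pattern.compile("\\w{1,32}");

    static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidUsername("alice42"));           // true
        System.out.println(isValidUsername("alice; DROP TABLE")); // false
        System.out.println(isValidUsername(null));                // false
    }
}
```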
I learnt Java, and learnt enough to work out "there needs to be something better". What I particularly didn't like at the time was the lack of a decent IDE, a formidable class structure and the performance. Perhaps these days you could write a high performance server-side app in Java but in the heady days of 2000 it was nt so easy.
.NET is the dogs bollocks. The IDE is mostly good. The class library is probably just as formidable but I was more incentivised to learn it.
Unfortunately it doesn't work on non-Windows. The Mono project seems to be attempting this but everytime I look at the classes available the ones I want to use have not yet been implemented. It only takes one or two to scupper your plans.
I was also never sure just how reliable / universal JDBC was, and as databases are the cornerstone of all my development this was a big worry.
Also, back in 2000 it was looking like Java wasn't going to be supported in Windows, and if it was then it would be very begrudgingly so... which doesn't imply a reliable / safe environment.
gwyn
Thursday, July 15, 2004
gwyn,
the inclusion of that joke of a JDBC/ODBC bridge driver by Sun was a serious mistake. It gave people a bad first impression of the platform. Then again, it seems a rather typical example of the nature of the Java effort by Sun. They seemed more concerned with preventing it from running well on Windows than with giving it a fighting chance.
The thing I liked about JAVA more than .NET is that while JAVA's code can look more complicated, its syntax seems to lend itself to cleaner implementations in the long run. Of course, I have a lot more experience with JAVA than .NET.
sir_flexalot
Thursday, July 15, 2004
I am a long time Java programmer and did a small project in .NET about six months ago. I really like it. I do think .NET overall is slightly better (my subjective observation), but definitely not to the degree that you couldn't pay me to do Java again. The competition should be good for Java. But since my employer uses UNIX, .NET really isn't a good solution.
Bill Rushmore
Thursday, July 15, 2004
"But since my employer uses UNIX .NET really isn't a good solution."
I think that summarizes the difference in a nutshell. Client side Windows app: .NET. Anything server side: Java, because you're being irresponsible to your customer/employer if you tie yourself to just one server platform.
I work with WebObjects, which is still the most powerful web application framework I've seen, once you understand its design and architecture. People who don't understand the design have a very hard time with it, but once you "get" it, you can really do a lot with very little code. And Enterprise Objects Framework is still da bomb; similar open source Object/Relational tools are out there, but none are nearly as easy to configure, get up and running, develop with, and deploy, from what I've seen.
And I say this even without many significant features added in the last few years. Other frameworks are still struggling to catch up to WO, even with WO standing still.
Jim Rankin
Thursday, July 15, 2004
If I was in a completely Windows environment with no intention of going/using things on any of the *nix's, then I might consider .Net exclusively.
Realistically though, I've always found a wealth of libraries and functionality in Java (and the Java community) that .Net just doesn't seem to have.
For example, I found an "Open Source" .Net committee a few months ago. I started going through their hosted projects and quickly found that *none* of their things were Open Source. They were highly overpriced libraries with dismal support, minimal documentation, and no source code available.
If I'm going to get dismal support and minimal documentation, there's no way I'm going to pay you for the compiled code.
KC
Thursday, July 15, 2004.
Clay Dowling
Thursday, July 15, 2004
I've been working with Java for 7 years and .NET (C# only) for 1 year (both professionally), and I much prefer Java. The libraries seem more complete, they're better-documented, and I like having the option of looking through the library sources for more detail on how stuff works.
I greatly prefer C# to VB, C, C++, etc., and .NET would be my tool of choice for a Windows app, but it seems to me like .NET still lags behind Java significantly. If you compare to Java 1.0, it might be a tossup, but .NET still doesn't come close to modern Java (IMHO).
Of course, I prefer inner classes to delegates, checked exceptions to unchecked (when the decision between checked/unchecked is made correctly according to the JLS), property-accessor methods to smart properties, no operator overloading, etc. - if my philosophies were more in sync with .NETs, my reaction might be different.
And .NET could have been much more compelling if DevStudio was more like Idea from the beginning. DevStudio was best-of-breed until Idea came along, and they're miserably behind in that race as well.
schmoe
Thursday, July 15, 2004
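For readers comparing the two styles schmoe mentions, the anonymous-inner-class idiom that plays the role of a C# delegate looks roughly like this in Java (the Callback interface and event names are invented for illustration):

```java
public class CallbackDemo {
    // The role a C# delegate plays is filled in Java by a one-method
    // interface plus an anonymous inner class implementing it.
    interface Callback {
        void handle(String message);
    }

    static void fireEvent(Callback cb) {
        cb.handle("button clicked");
    }

    public static void main(String[] args) {
        fireEvent(new Callback() {          // anonymous inner class
            public void handle(String message) {
                System.out.println("got: " + message);
            }
        });
    }
}
```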
actually, I'd like to see operator overloading in Java.
One bit of functionality: .NET properties! Not having to write crappy getter & setter functions makes life worth living!
anon
Thursday, July 15, 2004
why not just make them public members and be done with it, then?
How silly. The whole point of getter/setter methods is that you're not necessarily dealing directly with the property in question.
I think anon is saying that public field members are an advantage of .NET - that you don't need to write the get and set. Of course every best practice of .NET says that you should use a property (get/set) instead...
I really don't understand why they didn't just block public members, and added a keyword to expose a properly automatically for simple properties (where there are no rules). i.e.
private int _name;
public int Name property _name;
(or something like that)
Dennis Forbes
Thursday, July 15, 2004
Err.. I can have public members of a class in every language I code in, including PHP, Perl, Java....
how is it an advantage of .NET?
Why? Because many are confused into thinking that in .NET
public int age;
...is equal to...
public int Age
{
get
{
return _age;
}
set
{
_age = value;
}
}
While it isn't (the IL is entirely different, and it's a breaking change when you switch to a property from a public field member).
>> "why not just make them public members and be done with it, then?"
>> "I think anon is saying that public field members are an advantage of .NET - that you don't need to write the get and set. Of course every best practice of .NET says that you should use a property (get/set) instead..."
I would never think of making a data member public. What I meant to say was, I love that .NET lets the client feel like he's working with a property.
Thing.Height = 1200
saveHeight = Thing.Height
feels more natural than
Thing.setHeight(1200)
saveHeight = Thing.getHeight()
Also, when looking at the class methods, you simply look for the property and you can see if it's read-only without having to scroll down to the set* methods. Just a syntactical thing, but many people love it.
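For contrast, the conventional Java accessor style being compared here would read something like this (the Thing class and its height field are illustrative):

```java
public class Thing {
    private int height;

    // Whether a property is read-only is visible only by scanning for a
    // setter, which is the complaint about the accessor convention above.
    public int getHeight() { return height; }
    public void setHeight(int height) { this.height = height; }

    public static void main(String[] args) {
        Thing thing = new Thing();
        thing.setHeight(1200);
        int savedHeight = thing.getHeight();
        System.out.println(savedHeight);
    }
}
```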
Ah, sorry anon. I coincidentally had a discussion with a peer who confused Java best practices with what's possible in .NET, so I presumed the same.
Client transparency is nice, although there's the risk I mentioned in the prior post that it actually is creating get/set code in the class user (it's language transparency, but it isn't IL transparency), which means that a lot of people are surprised when they change a backend class from using a public field member (which is sadly very common) to using a property, and previously compiled components break.
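The same caveat has a Java analogue: a public field and an accessor method are distinct class members that callers bind to at compile time, so swapping one for the other also breaks previously compiled clients on the JVM. A small reflection sketch (class name invented):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class BindingDemo {
    public int age = 30;                  // clients compile to a field access
    public int getAge() { return age; }   // clients compile to a method call

    public static void main(String[] args) throws Exception {
        BindingDemo d = new BindingDemo();
        Field f = BindingDemo.class.getField("age");
        Method m = BindingDemo.class.getMethod("getAge");
        // Both reach the same value, but they are distinct members:
        // replacing the field with the method breaks compiled callers.
        System.out.println(f.getInt(d));
        System.out.println(m.invoke(d));
    }
}
```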
Woohoo, .Net turns your getters and setters into a property! Just like VB6, VB5, C++/COM, and almost every other programming environment on Earth does.
Even Flash (Actionscript 2.0) allows you to do this. Did you live under a rock previously?
Wayne
Thursday, July 15, 2004
My current employer has been dallying with .NET for 18 months now, and is starting to (finally!) put an application in production.
However, we just hired a new CTO, and one day he asked the fateful question: "Why aren't you guys using Java?"
So, we just got through with an evaluation of the two technologies. He's made a presentation to the CEO & the board, and they've decided that our problems *aren't* the result of mismanagement, churn, & indecision (kindly refer back to the 18 months it took to get an app into production), but the fault of the technology that was used.
So.... we're changing to J2EE, using BEA WebLogic.
[sigh]
I haven't used Java since the 1.0 JDK days (I'm seriously rusty!). I hated it then because the tools were so primitive, but things have gotten better (I could grow to like Eclipse). Anyone got a suggestion for an intro to EJB and J2EE technologies book or two?
I'll let you know in a few months if I want to go back to .NET.
example
Thursday, July 15, 2004
I've been impressed by everything I've ever bought from O'Reilly. Although I can't recommend a specific book offhand, you might check there.
In the Enterprise Architecture vein, I'd recommend Martin Fowler's book on the EA Design Patterns. It will be useful in any OOP scenario.
O'Reilly has been a crap shoot for me; between them and Wrox I've had good luck. I don't particularly like the O'Reilly Learning Java book: too much theory and not enough "Try This" sorts of examples, like Wrox has (which is the best way for me to learn)
I switched to Java pre-.NET because I wanted the advantages of OOP without having to learn C++ (having been a former VB/ASP3.0 developer). Then I switched to .NET because I constantly found myself in an uphill battle to integrate with the Windows network domain environment. Also, I found managing the IIS environment far easier than that of the J2EE app servers for a one-man department.
But I'd have no problem going back to Java again. They both have their trade-offs.
Joe
Thursday, July 15, 2004
example: Check out the Spring Framework and leave EJBs in the past:
Chris Winters
Thursday, July 15, 2004
> Language snobbery isn't a luxury I can afford.
What he said.
To me the *only* important questions about languages are the big-ticket ones:
1. Can the language get the job done when wielded by reasonably competent people who are trying to make it work? (Consider the Yahoo site written in Lisp as an example)
2. Does the language have the broad industry support (tools, developers, mindshare etc) that it needs to succeed?
IMO, the Yahoo site in Lisp fails by this metric, even though the underlying implementation undoubtably succeeded.
Similarly, Ruby and Python simply don't have the same mindshare as Java and C#, regardless of their merits, and thus simply aren't considered for many tasks, again regardless of their merits.
3. Does the language have the support it needs in the particular environment that it's intended for?
.NET is probably not a good fit for a Unix shop, and if your boss is convinced that PHP is bad, it may be best to simply find another tool.
Portabella
Thursday, July 15, 2004
I'm ok programming either, but I have a feeling a lot of business owners/decision makers will go with Java just because it's open and free. Personally, I think .NET's web stuff is way superior to Java's, but that's only a small part of development, and not enough to convince managers to spend thousands on a Microsoft-only solution.
vince
Thursday, July 15, 2004.
Also, MS solutions don't necessarily cost big ticket dollars. Windows Server 2003 Web Edition (although only available via OEM) is only like $500 +/-. And while there is a lot of OSS stuff for Java, it has plenty of pricey software in its camp too (like WebSphere).
"Perhaps these days you could write a high performance server-side app in Java but in the heady days of 2000 it was not so easy."
You mean like:
* eBay
* Google
* iTunes Music Store
???
Walter Rumsby
Thursday, July 15, 2004
Walter, in gwyn's defense, those are all projects with huge staff and budget allowances...to call it "easy," it shouldn't require a team of 50+ to accomplish :)
I'm not sure the implementation of (at least a rough cut of) eBay or iTMS would be that hard - it's the business model and marketing clout that are really the hard work in those two cases. eBay's architecture is a model of simplicity.
>> The thing I liked about JAVA more than .NET is that while JAVA's code can look more complicated, its syntax seems to lend itself to cleaner implementations in the long run....
Your criticism should be more appropriately aimed at certain .NET *developers*, not at .NET itself. You can code .NET just as neatly, if not more so, than Java, but you have to know what you are doing in both languages.
>>.
For clarification, discussions of .NET and ASP.NET should be separate. MS definitely made some design decisions with ASP.NET which many don't agree with, but ASP.NET was built on top of the .NET framework, not vice versa. Criticisms of ASP.NET should not be taken to apply to all of .NET, and definitely not to .NET languages like C# or VB.NET.
>> One bit of functionality: .NET properties! Not having to write crappy getter & setter functions make life worth living!
Agreed 100%!
>> why not just make them public members and be done with it, then? How silly. The whole point of getter/setter methods is that you're not necessarily dealing directly with the property in question.
It is not silly. 1.) Fields and Properties have different binary signatures (unfortunately), so if you code with one and later need to change, your clients will need to be recompiled too. 2.) get/set properties are there for exactly that reason, i.e. not to deal directly with the property in question (though I am mixing your terms with .NET's). Properties in .NET allow you to use normal syntax (i.e. obj.prop = value) to assign a property, but still use code in get/set routines if need be.
>> private int _name;
>> public int Name property _name;
I would LOVE it if they would do something like that.
>> While it isn't (the IL is entirely different, and it's a breaking changing when you switch to a property from a public field member).
EXACTLY (and it's a damn shame MS designed it that way, IMNSHO)
>> Woohoo, .Net turns your getters and setters into a property! Just like VB6, VB5, C++/COM, and almost every other programming environment on Earth does.
From my understanding neither Java nor C++ offer get/set properties. Am I wrong on this?
>> However, we just hired a new CTO, and one day he asked the fateful question: "Why aren't you guys using Java?"
You gotta love ideologists! :-0
>> To me the *only* important questions about languages are the big-ticket ones:
You forgot:
-- Can I find and/or afford to hire someone who has expertise in that technology?
-- What's its future?
>>.
A lot of business people tend to pick the thing that is most supported (see Geoffrey Moore's Crossing the Chasm). By support I mean community websites, message boards, and newsgroups as much as going to MS. One area where .NET leads recently is that MS has many of its core developers blogging about their languages, and I have learned more from reading those blogs than I could in a month of paid tech support calls. The Java community doesn't seem to offer the same level of support on their blogs (yet?)
>> "Perhaps these days you could write a high performance server-side app in Java but in the heady days of 2000 it was nt so easy."
>> You mean like: * eBay * Google * iTunes Music Store
Yup, those were easy apps to write and were turned out in no time in someone's garage. And without the benefit of any VC funding, I might add (not!) :-)
Mike Schinkel
Friday, July 16, 2004
"From my understanding neither Java nor C++ offer get/set properties. Am I wrong on this?"
I don't know about C++, but for Java, that properties-style functionality is a large part of what JavaBeans are all about, and they're getting a lot of play in various frameworks because of it.
Justin Johnson
Friday, July 16, 2004
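To make the JavaBeans convention Justin describes concrete, here is a minimal sketch (the class and property names are invented for the example): frameworks discover a "height" property purely from the getHeight()/setHeight() naming pattern, even though Java itself has no property syntax.

```java
// A minimal JavaBean. Tools and frameworks infer a read/write
// "height" property from the accessor names alone.
public class Thing {
    private float height;  // backing field, never exposed directly

    public float getHeight() {             // read accessor
        return height;
    }

    public void setHeight(float height) {  // write accessor
        this.height = height;
    }
}
```

Client code is `thing.setHeight(1200); float saved = thing.getHeight();` — the same two statements that read `Thing.Height = 1200; saveHeight = Thing.Height` in C#. Dropping the setter leaves a read-only property, which addresses the read-only question raised earlier in the thread.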
example: Check out Matt Raible's 'AppFuse' (which includes Spring), which should give you a jump-start into a working J2EE app using current technologies (Spring, Hibernate, Ant, XDoclet).
As far as a Java IDE, I'd suggest IntelliJ's IDEA. Only if you can't get that would I suggest checking out Eclipse & NetBeans (the latter /used/ to be very slow, but the current version is supposed to be very different)
Gwyn Evans
Friday, July 16, 2004
> You forgot:
No I didn't. These are all summed up in the phrase "broad industry support".
Portabella
Friday, July 16, 2004
"What I meant to say was, I love that .NET lets the client feel like he's working with a property.
Thing.Height = 1200
saveHeight = Thing.Height"
I think this is awful. What if the invisible getter does something like
Height = value;
DeleteImportantDataFile();
Silly example, yes. But there is no indication that this might do something other than assign a value. With the equivalent code in Java, you know for sure nothing other than the value being set is happening here.
This is the problem with operator overloading like things in general. What appears to be happening and what is really happening might be two very different things.
I'm a big fan of simplicity in programming languages. The characters saved by typing
saveHeight = Thing.Height
vs.
saveHeight = Thing.height()
aren't worth the increase in language complexity.
IMHO.
Jim Rankin
Friday, July 16, 2004
For what it's worth, a number of functional languages (Dylan and Cecil at least) consider 'x.y' to be syntactic sugar for 'y(x)'. In those languages, then, you know when you do 'x.y' you're *always* calling a function, so there's nothing being "hidden" there.
Phillip J. Eby
Friday, July 16, 2004
"saveHeight = Thing.Height
vs.
saveHeight = Thing.height()"
You also have to remember that with height(), you have to write the contents of the method in order to return the height -- so in addition to the two parentheses involved when calling it, you also have to write
public float height()
{
return this.height;
}
Still, I generally agree that the characters saved in typing ain't that big of a deal, especially when working with an IDE that can generate getters and setters. In programming the primary limiting factors are thinking time and reading time, not typing time.
T. Norman
Friday, July 16, 2004
I don't know about other IDEs, but Eclipse has a function to generate simple getters and setters automatically (including boolean getters/setters), so the complaint that you have to write the function is a non-starter. You only have to write the function body if you want more complicated functionality, in which case you'd be writing it anyway.
The air field simulates the effects of moving air. The affected objects will be accelerated or decelerated so that their velocities match that of the air. With the ‘-vco true’ flag thrown, only accelerations are applied. By parenting an air field to a moving part of an object (i.e., a foot of a character) and using ‘-i 1 -m 0 -s .5 -vco true’ flags, one can simulate the movement of air around the foot as it moves, since the TOTAL velocity vector of the field would be based only on the movement of the foot. This can be done while the character walks through leaves or dust on the ground.

For each listed object, the command creates a new field. The transform is the associated dependency node. Use connectDynamic to cause the field to affect a dynamic object.

If fields are created, this command returns the field names. If a field was queried, the results of the query are returned. If a field was edited, the field name is returned.

If the -pos flag is specified, a field is created at the position specified. If not, if object names are provided or the active selection list is non-empty, the command creates a field for every object in the list and calls addDynamic to add it to the object; otherwise the command defaults to -pos 0 0 0. Setting the -pos flag with objects named on the command line is an error.
Derived from mel command maya.cmds.air
Example:
import pymel.core as pm

pm.air( name='particle1', m=5.0, mxd=2.0 )
# Result: nt.AirField(u'particle1') #
# Creates an air field with magnitude 5.0 and maximum distance 2.0,
# and adds it to the list of fields particle1 owns.

pm.air( wakeSetup=True )
# Creates an air field with no velocity in and of itself (magnitude = 0).
# All of the air's velocity is derived from the motion of the objects
# that own the field.
EXP(3) BSD Programmer's Manual EXP(3)
NAME
     exp, expf, exp2, exp2f, expm1, expm1f - exponential functions
LIBRARY
     libm
SYNOPSIS
     #include <math.h>

     double
     exp(double x);

     float
     expf(float x);

     double
     exp2(double x);

     float
     exp2f(float x);

     double
     expm1(double x);

     float
     expm1f(float x);
DESCRIPTION
     The exp() and the expf() functions compute the base e exponential value
     of the given argument x.

     The exp2() and exp2f() functions compute the base 2 exponential of the
     given argument x.

     The expm1() and the expm1f() functions compute the value exp(x)-1
     accurately even for tiny argument x.
RETURN VALUES
     These functions will return the appropriate computation unless an error
     occurs or an argument is out of range.  The functions exp() and expm1()
     detect if the computed value will overflow, set the global variable
     errno to ERANGE and cause a reserved operand fault on a VAX.  The
     function [...] moderate, but increases as pow(x, y) approaches the
     over/underflow thresholds until almost as many bits could be lost as
     are occupied by the floating-point format's exponent field; that is 8
     bits for VAX D and 11 bits for IEEE 754 Double.  No such drastic loss
     has been exposed by testing [...].  Previous [...], independently of x.
SEE ALSO
     math(3)
STANDARDS
     The exp() functions conform to ANSI X3.159-1989 ("ANSI C").  The
     exp2(), exp2f(), expf(), expm1(), and expm1f() functions conform to
     ISO/IEC 9899:1999 ("ISO C99").
HISTORY
     The exp() functions appeared in Version 6 AT&T UNIX.  The expm1()
     function appeared in 4.3BSD.

MirOS BSD #10-current                                                February
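The accuracy note for expm1() in the man page above is easy to observe from Java, whose Math.expm1 provides the same function; this sketch (not part of the man page; the specific x is arbitrary) contrasts it with the naive exp(x) - 1 for a tiny argument.

```java
public class Expm1Demo {
    public static void main(String[] args) {
        double x = 1e-12;

        // Naive form: exp(x) is rounded to a double very close to 1,
        // so the subtraction cancels almost all significant digits.
        double naive = Math.exp(x) - 1.0;

        // expm1 computes exp(x)-1 directly, avoiding the cancellation.
        double accurate = Math.expm1(x);

        System.out.println("naive    = " + naive);
        System.out.println("accurate = " + accurate);
    }
}
```

On an IEEE 754 host the naive result agrees with x to only about four decimal digits, while expm1 is correct to essentially full precision.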
>> There are similar reliable tests for the other arithmetic operations.
>
> Is this documented somewhere? Is there a list of the standard ways?

CERT has something, here:

Although the principles in that memo are OK, the actual code is hard to read and its multiplication overflow checking is buggy. Here's something better, which I just now wrote. Also, please see Emacs Bug#8611 <>; its patch uses code like the following.

#include <limits.h>

int
add_overflow (int a, int b)
{
  return (b < 0 ? a < INT_MIN - b : INT_MAX - b < a);
}

int
subtract_overflow (int a, int b)
{
  return (b < 0 ? INT_MAX + b < a : a < INT_MIN + b);
}

int
unary_minus_overflow (int a)
{
  return a < -INT_MAX;
}

int
multiply_overflow (int a, int b)
{
  return (b < 0
          ? (a < 0 ? a < INT_MAX / b : b != -1 && INT_MIN / b < a)
          : (b != 0 && (a < 0 ? a < INT_MIN / b : INT_MAX / b < a)));
}

int
quotient_overflow (int a, int b)
{
  /* This does not check for division by zero.  Add that if you like.  */
  return a < -INT_MAX && b == -1;
}

int
remainder_overflow (int a, int b)
{
  /* Mathematically the remainder should never overflow, but on x86-like
     hosts INT_MIN % -1 traps, and the C standard permits this.  */
  return quotient_overflow (a, b);
}
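As a quick sanity check of the message's add_overflow idiom, the same predicate translates directly to Java, whose int has the same 32-bit two's-complement range; this sketch (mine, not part of the message) compares it against a 64-bit reference implementation.

```java
public class OverflowCheck {
    // True iff a + b would overflow a 32-bit signed int, using only
    // comparisons that cannot themselves overflow (same idiom as the C code).
    static boolean addOverflow(int a, int b) {
        return b < 0 ? a < Integer.MIN_VALUE - b
                     : Integer.MAX_VALUE - b < a;
    }

    // Reference implementation via 64-bit arithmetic.
    static boolean addOverflowRef(int a, int b) {
        long sum = (long) a + b;
        return sum < Integer.MIN_VALUE || sum > Integer.MAX_VALUE;
    }

    public static void main(String[] args) {
        System.out.println(addOverflow(Integer.MAX_VALUE, 1)); // true
        System.out.println(addOverflow(1, 2));                 // false
    }
}
```

The key point of the idiom is that INT_MIN - b (for negative b) and INT_MAX - b (for non-negative b) are always representable, so the test itself never triggers the undefined behavior it is guarding against.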
Talk:Dominant group
See w:Wikipedia:Articles for deletion/Dominant group.
Can we classify the field of this study? Linguistics? What? --Abd 00:08, 6 September 2011 (UTC)
moved from User talk:Abd --Abd 01:23, 8 September 2011 (UTC)
I would very much like to. So far it is part Linguistics and part History of Science. But, I am not sure how this is done. Is there some kind of template that is put on this discussion page?
In addition as the exploration progresses I may need to add one more, not sure of this though.
Also, the text on the page is getting lengthy and I would normally split off into separate topics, but I do not know if that is what's done here, or whether some of the text should be put in my user space (assuming I have one) here at wikiversity. Advice definitely welcome. Marshallsumter 01:17, 8 September 2011 (UTC)
- Yes, you have a user space. For example, I have a personal sandbox at User:Abd/Sandbox. You may use user space for just about anything that isn't against policy, you can draft articles there. I have a User:Abd/Playspace that I use for temporary storage of stuff that may not be appropriate for mainspace, but that may have some value for the user who created it, but that isn't spam, copyvio, etc.
- Wikiversity, unlike Wikipedia, allows subpages in mainspace. It's like Wikibooks. You can easily shove subtopics into subpages. In fact, please do! You can link to a subpage with a simple link that is relative to the page it's on. That's very useful on Wikibooks, they can rename a whole book, and the subpage links will still work. (Renaming an entire hierarchy of pages is a custodial privilege here, it can be done with a single Move.)
- If the page is a study in linguistics, then it might be linked from School:Linguistics or from some related page. It can be placed in a category, so that it will be found by looking at the Category page. See Category:Linguistics. A page may be, of course, placed in more than one category.... --Abd 01:32, 8 September 2011 (UTC)
Apart from my ethical concerns with this experiment, it currently violates the copyrights of several English Wikipedia editors. If proper attribution is not provided, per the terms of the CC-BY-SA license, the material will need to be deleted. Kaldari 19:12, 22 September 2011 (UTC)
- Thank you for noticing the incomplete citations/links. The citations have been completed, and I am happy to note their contributions. Marshallsumter 02:11, 23 September 2011 (UTC)
- Just as an additional point about attribution to wikipedia.)" Source: [1]. Marshallsumter 04:33, 23 September 2011 (UTC)
- Absent specific examples, I do not agree that copyright is being violated. Technically, there is no violation of copyright law, creating liability for the WMF, for anything on Wikipedia or Wikiversity unless the WMF fails to remove material on request, or can be shown to have deliberately encouraged violation of rights (as the WMF, not as independent individuals). In this case, as far as anything I've seen, there is no violation of copyright at all. Short quotations for the purpose of criticism or analysis don't violate copyright; they are de minimis. The core issue: could the editor who posts the material be successfully sued for copyright violation if, say, that editor sells the material elsewhere? Could someone copying the material from Wikiversity sell it according to the WV licensing? In fact, that could be a complex legal question, but my informed understanding is that, for every specific example I've looked at, the answer is, No, such a suit would not only be very unlikely to be filed, it would be very unlikely to succeed. I'm not an attorney, and would defer to a more knowledgeable opinion, but I'm not seeing that here. WMF counsel, I have some reason to think, has been informed about this situation and we are not seeing WMF instruction on this, only generalities.
- Unfortunately, Kaldari did not point to a specific example. With a specific example, a more specific opinion would be offered; my comments above are more general, having to do with de minimis quotation. It is common on the internet that entire original works are quoted for purpose of response or criticism, and this is mentioned in resources explaining copyright law; but that's a fair-use exemption. So even if there were copyright violation involved in short quotations, they would still be fair use, depending on the usage. If Marshall diffs the quotations, attribution is covered. They should, of course, be diffed; it's common courtesy and matches academic practice. --Abd 15:16, 29 September 2011 (UTC)
- Later comment: scholarly practice requires attribution of sources. It is enough for all material released under a WMF site license, that a link be given to the source, and the source will show attribution. My preference is to link to permanent versions, ideally as edited by the editor who is being quoted or whose action is being cited, but it is sufficient to link to the page. The edit history here will show the date of addition of the material, and this then will indicate the source version visible at that time.
- What if Wikipedia then deletes the page? We are not responsible for Wikipedia's failures in this respect. This commonly happens with Commons files. I have suggested that instead of fully deleting images, Commons should replace them with thumbnails, and perhaps revision-delete the original image. All the license information and edit history, then, could be intact, all except for showing the full image. But the dominant group is crazy, a quick summary, my dominant impression. That's the dominant thinking, and dominant thinking always believes it is right, because it is dominant. "Hey, we agree. Go away." Notice: the "we" is not inclusive, it is exclusive.
- This is, of course, not just on the WMF wikis! This is simply how most people think, and that is a clue to the meaning of "dominant" and "dominant group." --Abd (discuss • contribs) 14:56, 14 August 2015 (UTC)
Dominant group
I get a vague idea of what dominant group is. Is it a real definition? I'm just curious. The beginning of the article isn't clear. How do the subjects differ, or what makes this special, from an article that has its own namespace? Thanks Sidelight12 Talk 01:31, 28 December 2012 (UTC)
"I get a vague idea of what dominant group is." You and me both!
"Is it a real definition?" It has numerous definitions, some of which are field-specific.
"The beginning of the article isn't clear." Suggestions are welcome. The beginning of the article is about an original research project to answer some of the concerns you have written about. The first paragraph is a quote from an astrophysics article that uses dominant group implicitly defined as a scientific term meaning the largest group of active regions, yet doesn't directly state this. When you read the article you will realize the definition being implied.
"How do the subjects differ, or what makes this special, from an article that has its own namespace?" It's an original research project in article form as described under the Topic section. The subjects do not differ with respect to the use of the scientific term dominant group.
Hope this helps,
Thank you for commenting and your interest. --Marshallsumter (talk) 02:01, 28 December 2012 (UTC)
- I see your Dominant Group sections in articles you have worked on, often empty. You came across the phrase "dominant group" and seem to have treated that as if it were a discriminable entity, and wondered what it is. You might be overthinking the matter. Nevertheless, here I go.
There are some hidden assumptions, not made explicit, perhaps.
- The term appears to be used as ordinary language, without specific definitions. A dominant group is a group that is dominant. What's a "group"? When we have elements in a set, any combination of the elements of the set can be a group. We then need to define elements and set! That is, approaching experience, we conclude that there are separate objects, not just some single entity. We have five *fingers.* By naming things, we separate them from other things, and can manipulate the concept and combine it with others.
- (While there is certainly object discrimination and separation nonverbally, the human use of language is powerful and tends to dominate our conscious experience. There is that word, "dominate.")
- So, given that we have discriminable objects, that are somehow identified with each other, have some common characteristic such that we may treat them as a *set*, then there are subsets that can be formed. Groups. In fact, every set is a group within a larger set. Every subset has a common characteristic. It might only be some arbitrary membership. If I number the cards in my possession, I could then consider the even-numbered cards and the odd-numbered cards, as a quality of history, not intrinsic to the cards themselves.
- Now, when we have groups, we may compare them with each other by some standard. If we have a set of coins, we might define the dominant group in that set as the group with the most members. Say it is pennies. Or we might define it as the group with higher monetary face value. The group of pennies may or may not have higher value than the group of nickels.
- Or the group with the highest numismatic value. In all these there is a common thread: something is "higher." That is, something is "dominant." This is a reflection of our habit of assigning importance. The "dominant group" is "more important" in some sense.
- In my training, these distinctions are recognized as stories or interpretations, not facts. They are invented, not "true or false." (Or we could say that their "truth" is conditional on assumptions, but the point is that other assumptions would produce different "truths.")
- So if you are searching for a common meaning for "dominant group," there might not be one, beyond a simple confluence of two steps in an analytical process.
- The common thread I see is the assignment, first, of meaning to membership in an identified set, and then an assignment of importance to some subset, which implies that the complementary set of subsets is not important. "Importance" is a device which we use to blind ourselves to the rest of reality, like a horse's blinders, to keep the horse on track.
- Breakthroughs in understanding and achievement often occur when we remove the blinders and just see what's there.
- Now, this consideration leads me to a hypothesis:
- "Dominant group" is used to assign importance, to suggest that a topic that can be divided into groups may be best understood, or more quickly understood, or more efficiently understood, or some goal reached, by considering this group.
- The term is then used psychologically to focus attention. --Abd (discuss • contribs) 21:44, 26 June 2014 (UTC)
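The coin example in this thread can be made concrete: the same set of groups yields a different "dominant" member depending on the metric chosen, which is exactly the point that dominance is assigned, not intrinsic. A sketch (the coin counts are invented for illustration):

```java
import java.util.Map;

public class DominantGroup {
    // Return the group whose score, under whatever metric produced
    // the map, is largest.
    static String dominant(Map<String, Integer> scores) {
        return scores.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .get().getKey();
    }

    public static void main(String[] args) {
        // Same coins, two metrics: membership count vs. total face value.
        Map<String, Integer> counts = Map.of("penny", 40, "nickel", 10, "quarter", 7);
        Map<String, Integer> cents  = Map.of("penny", 40, "nickel", 50, "quarter", 175);

        System.out.println(dominant(counts)); // penny: the most numerous group
        System.out.println(dominant(cents));  // quarter: the most valuable group
    }
}
```

The data never changes between the two calls; only the scoring function does, and the "dominant group" changes with it.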
- The Importance hypothesis is the ninth one I'll be examining soon from the various sources available. So, I tend to agree with you. The oldest use of something synonymous with dominant group is the Latin "Classicus", or the patrician class of the early Roman republic via Romulus and Remus. A very historically important group according to some classics authors. Dominant group may be something inherent in our nature psychologically, my Primordial hypothesis. I am considering composing a short psychological questionnaire using the resource JTNeil has created for such things. Then, I'll just ask some of the authors to answer the questions and see what I get. What do you think? Until I started this project on wikipedia I'd never used the phrase for anything. It might be a language artifact. --Marshallsumter (discuss • contribs) 01:42, 27 June 2014 (UTC)
- Dominant groups do not exist in "nature," that is, in reality considered as distinct from our reactions to reality.
- Dominant groups are an artifact of how we process and analyze data, as mentioned above. They can both illuminate and obscure. The field here is w:Linguistics, w:Semantics, w:Semiotics, w:Epistemology or w:Ontology, I'm not yet prepared to classify it more closely. As the identification of dominant groups can be highly useful pedagogically and educationally, and as long as we understand that we create them, that is, we assign importance based on our history with a topic, this is not harmful, it's a functional device, important or even crucial educationally.
- That is, in setting up an educational process, there are two general approaches: first, present the raw data, develop familiarity with that data. In that approach, however, there is an issue of sequence. What data is presented first, and in what detail? A topic develops historically through this approach. This is how we come to know the unknown. There may be a mass of unorganized, uncorrelated data. Then correlations are found. The presentation of correlations is intermediate between the first and second approaches, because correlation requires a level of abstraction, of the distinction of identifying characteristics as being associated with some connection between collections of data.
- The second approach presents a developed organizing structure. This is often called "science," but should not be confused with the "scientific method." Science, in this case, refers to what we "know." It is especially shared knowledge, that is, you know "f = ma", and I know "f = ma", and our knowledge very likely matches. However, we each have very different bodies of data from which that knowledge is abstracted, if, indeed, we developed "f = ma" ourselves instead of merely memorizing it as a "scientific fact."
- As you know, F = ma is not "truth." It's an approximation, highly useful under some conditions, misleading under others. It's useful under a "dominant group" of conditions, and specifically those of ordinary life and what we encounter, the exceptions, the situations where this breaks down are, for us, rare and unknown until our experience broadened.
- These two approaches create a third, a hybrid approach, and this is also standard, pedagogically, where there is study and experiment. In experiment, a student is ideally trained to set aside their accumulated knowledge (accumulated through ordinary experience and belief, or accumulated through study), and return to observation, often with testing, that is, setting up conditions where the effect of controlling some variable can be distinguished.
- Now, to the relevance on Wikiversity: at some level, we, as a community, will develop this project more effectively as we understand these two approaches and how they may be fostered and presented.
- As you may know, I often study "wiki behavior." This study almost always starts because I see something odd, anomalous, maybe "wrong." However, instead of gathering "proof" of "wrongness," which is a common and even expected approach, I gather *data*. As I do this, then, hypotheses do develop, organizing concepts. They are not necessarily what I started with. I am now, having exposed myself to massive data -- this process can take weeks of study, I may have looked at hundreds of screens of data or more -- more informed about the topic.
- Unfortunately, this heightened level of knowledge (the first kind, data) can take me far outside of ordinary experience, and it can then be difficult to communicate. That is the problem I've faced for decades, the problem of how to connect what I find with our collective knowledge.
- Back to the topic here, you have placed Dominant group sections in many resources, where the function is obscure, I'd say. You often set up sections which give definitional information; however, "dominant group" is not established as having meaning for most people. Rather, every resource may eventually identify dominant groups. Some may not. That identification is not fact, it is interpretation or opinion. Linking Dominant group from many resources is dominating the project with a single research project.
- However, creating approaches to learning that involve identifying "dominant groups" can be highly useful. But we don't describe the Wikiversity educational approach on every page! Practices like placing a "notation" section in resources, as with [2], would be like having, on every page, a definition of the language used. What is in the notation section is simply common practice. We don't say in a resource, as a reductio ad absurdum, "this page is a design template, a pattern in a binary code, for a collection of displayed pixels that can be interpreted as language, the language consisting of symbols as interpreted by those who know English. Attempting to decode the page in other manners may produce interesting but likely meaningless information."
- The point is that our design for each page will advisedly be toward maximum utility in education. We can establish "universals," which can be linked the first time a term is used on the page, by someone intending for the term to be interpreted as a universal. Probably what is in the Notation section, though, is so normal that it's not necessary at all.
- The extra unnecessary content will tend to suppress reading with understanding: it's boring if understood, and confusing if not. When I try to explain stuff like this to my daughter (who will be 13 in a few days and is smart and self-expressed), she says, "Dad, I'm not listening." Most people won't tell you that, but they have stopped listening or reading. Yet they still have their reactions, as you experienced on Wikipedia, where your work was completely misunderstood (and where the work you were doing was, indeed, Original Research and inappropriate for Wikipedia while very appropriate here). --Abd (discuss • contribs) 13:47, 18 September 2014 (UTC)
- I agree already with at least two points you've made. Dominant group is an educational concept in that readers, students, may need to be aware that culturally these groups do exist and are usually putting glass ceilings in place. It was one of the points the proposal reviewers were getting at. This may not be done too well in those resources separate from the original research effort. Regarding Notations, Universals, Control groups, and Proof of concept sections, I am beginning to remove or replace these unless as in Positron astronomy mention is made within a primary source. These I believe are useful to readers and students. Having these sections in the beginning of the resource may no longer be necessary. Separate resources already exist for Control groups and Proof of concept. I will probably create Notations and Universals.
- I tend to agree that dominant groups may not occur in terms of evolution yet the concept exists in most theories about evolution. Whether it is an artifact of our thinking processes or something more sinister or something else entirely remains to be seen. I will probably include some of your notes here as anecdotal evidence with appropriate credit if that's okay.
- One of the common forms of dominant groups in nature is breeding pairs. Among meerkats, for example, they do everything they can to make the colony serve them to the point of killing offspring of other less dominant mating pairs.
- An example of a dominant group in human society was in place when I worked for the Navy. In my division, if a scientist lost his wife, divorce or death, he was out of a job within a year. The married men who ran the division saw to it. No joke and no subjectivity either. Each was replaced by a married man, usually much less capable. Many people of all types were leaving because of this behavior.
- As I go through resources, if the dominant group section seems less informative I will try to improve it or eliminate it. Its usefulness may no longer be needed. --Marshallsumter (discuss • contribs) 18:40, 18 September 2014 (UTC)
- In your response, it seems that you have some negative idea about "dominant group." It's very clear to me that the concept has its root in the necessities of efficient processing and analysis of data by the brain. We have a huge flow of information coming in at all times from the senses, and we learn to pay attention to only part of it, survival requires that. In my own training, the development of "story," i.e, interpretation, is sometimes, by newcomers, mistaken for there being something wrong with story. No, the training is to distinguish story from "what happened," because when we are caught in some locked state, where we seem to be unable to make progress, it is quite likely that we have developed a "limiting interpretation." That interpretation is neither right nor wrong, it is only useful or not useful, and usefulness depends on context. What was useful yesterday may not be useful today. Or may not be useful in certain contexts, i.e., the context where we are stopped.
- The story you told about the Navy, were I in management there, and I heard this story, I'd want to do a study. The dominant group in that story is "married men." Married men also tend to live longer. Married men may be more stable in a number of ways. Cause and effect may not be as simple as your story implies. Yet, if it's true that unmarried men are being replaced by less capable married men, damage is taking place. In my training, when we come across a situation like this, or that appears like this, and if we think it's "wrong," the emotional reaction to the "wrongness" can readily disable our ability to function clearly and in the much larger realm afforded by the cerebral cortex, we start to be run by the more primitive survival responses of the back-brain, to use a modern interpretation of this effect.
- This could be one reason why reform is so difficult: it often is motivated by a belief that something is bad or wrong. And the results can be even worse than the starting situation! Examples abound.
- Rather, as a manager, I'd be concerned about the effect of losing a spouse on my employees. Do they need additional support or accommodation? If they suffered some illness or disability, we would often provide that. It would all start with understanding what actually is happening, which is not a matter of right or wrong, good or bad.
- So what does this have to do with "dominant group"? Dominant group showed up in your *explanation* of a phenomenon observed or suspected by you. That phenomenon may even be the subject of what we call a "conspiracy," i.e, people may agree about it, tsk, tsk, it's a shame! And that can happen if the phenomenon does not exist, i.e., if there is, in fact, no pattern. Humans do this: we simplify the recall and processing of data by forming conclusions and remembering the conclusions instead of the far, far more complex primary data. --Abd (discuss • contribs) 19:35, 18 September 2014 (UTC)
- As to the Navy situation, a number of lawsuits resulted with some voluntary retirements included. All of the managers involved are now gone. As far as I know I wasn't involved in this matter. The sociological, psychological or medical definition of dominant group seems to have applied. The damage apparent or otherwise may never have been fixed. Several sciences have produced theoretical, scientific, or working definitions for dominant group, usually overlapping about the control over resources, glass ceilings, and power (military force, or something similar). But, these may be cultural in origin perhaps related to the concept of hierarchy. If dominant group is only an artifact of our thinking a whole bunch of sociologists may be very upset. More than likely multiple definitions are involved that may be generalized by the metadefinition I created. But, I've been wrong before. --Marshallsumter (discuss • contribs) 20:07, 18 September 2014 (UTC)
- "Dominant group" can be defined in a particular field such as to refer to an objective reality. However, that's not how it's ordinarily used, and the definition will vary with the field. The most general definition would refer to how the brain assigns importance to phenomena, or, more accurately, to memory of phenomena. So, there is some social situation where some group appears to be dominant and then there is a revolution, and the "dominant group" is isolated, killed, or at least disempowered. Were they a dominant group or not? Obviously, the history would matter. So, at any given time are they dominant? What measures would be used? If some hidden phenomenon can reverse the apparent dominance, was it real or simply an illusion?
- Yet it seems you raise the possibility of dismissing "dominant group" as an "only an artifact of our thinking." Yes, I'm asserting that it is an artifact of our thinking. That is ontologically obvious. But I wouldn't say "only." Our entire process of developing analysis and interpretation is an "artifact of our thinking."
- It seems that underlying your comment is the question of whether or not "dominant group" is *real*. I.e, if we say that the married managers were a dominant group, is this "true" or not. The answer may depend on the measure; that is, "dominance" will be a function of what definitions and analytical tools we employ and what data we choose to feed it.
- Yet the concept of dominance can be *useful.* If I want to get something done, involving a group of people, I may wish to approach the "dominant group." Or should I? In fact, the apparent dominant group is dependent upon supporters; if the supporters decided to withdraw support, the dominance would vanish immediately. One of the signs of an artifact of thinking is that it cannot be cut with a knife. It cannot be touched, weighed, measured. It is dependent on a complex of interpretations and conditions.
- And then, when we study how to actually create transformation in the world, one of the approaches is to pull the rug out from under all these "established concepts." Mahatma Gandhi dropped the idea that the English were the dominant group, and stood for something else, and British domination vanished. It was an illusion, but maintained as such for a long time. Since they were not dominant, he did not need to fight them! He actually took charge, first of himself, and then of his people, through new interpretations that he created that were more powerful. A new dominant group, we could say.
- Yet is it dominant? It's an endless regression. It is far, far simpler to recognize that "true" and "false" do not apply to these interpretations, they are useful for prediction or they are not. No interpretation predicts everything, they always fall short, so all are in some way false. Any reasonably useful interpretation will predict *something* that happens, at least some of the time, so they are in that way true. In the end, they are what they are, interpretations, in my training, patterns of patterns of neurons firing.
- I assert that there is a reality that is not merely patterns of neurons firing, but I don't expect to ever find a proof for it. Great minds have tried and failed on this one. Or, more accurately, that would be a proof that we can follow and understand and recognize as flawless. I'm suspecting that this ineffability is intrinsic to existence, to finite understanding.
- So.... the concept of dominance is equivalent to the concept of importance. If we have an educational resource on a subject that is more than some random effort to explore a topic, that is actually designed for efficient education, we are not just going to dump a disorganized pile of facts on the student. Making fact available is an important part of our mission, but fact <> education. Or else we would all have become geniuses as soon as we had the internet. Google (etc.) made the internet accessible, and has done a lot of work with search engine algorithms that attempt to predict importance. However, we can create hierarchies of knowledge here, organized around topic, that make our resources accessible and approachable and understandable. There are Wikipedia articles that are probably completely accurate, there is one user I have in mind, a mathematician, who was highly contentious, he managed to get other mathematicians banned. He insisted on the articles being "correct," in terms of the language used, but the articles were unintelligible to non-mathematicians. He didn't care. His goal wasn't learning, it was "truth."
- When I want to quickly learn about a topic, I go to Wikipedia and *usually* the articles are quite understandable. Sometimes they are practically unintelligible, they are not written for general access. And that's true even when I have some knowledge of a field. If I already knew the field, I might think they were completely accurate, or, very possibly, that there was something wrong with them, something that only an expert would recognize. It can get quite gnarly, because reliable sources can require expert interpretation, that's why primary sources are deprecated.
- My point here is that we do want to present what is "important," first. Yet presenting only high-level abstractions first -- that is what important usually means -- leads to pedagogical failure, the reader disappears. So a hybrid approach is used, which presents fact (data) and interpretation (theory) intertwined. In good educational writing, mysteries are created which may or may not be resolved. There were mysteries in the development of every science. How were they resolved?
- Of course, my Favorite Topic is a present-day mystery, confirmed experimental evidence with no satisfactory explanation. Underneath that lurks a Nobel Prize for somebody.... not a slam-dunk, Nobelists have already worked on the problem and came up with lousy theories that don't work. Something is missing from our knowledge of the solid state..... and that makes some of the "dominant group" uncomfortable, because they believe that their dominance is based on superior knowledge, and here comes this kid who says the emperor has no clothes.... --Abd (discuss • contribs) 22:58, 18 September 2014 (UTC) | https://en.wikiversity.org/wiki/Talk:Dominant_group | CC-MAIN-2020-24 | refinedweb | 5,073 | 55.03 |
I was making a code that uses the raw_input() function, but then the EOFError showed up. I have another code that uses raw_input(), but the error didn’t show. What did I do wrong?
Here is my code:
from fractions import *

print "The Leaning Tower of Lire Block Counter"

def stack(d):
    a = 1
    n = 1
    a1 = Fraction(a, n)
    a2 = Fraction(a, n * 2)
    while a1 < d:
        if a1 < d:
            x = (a1 + a2)
            a1 = x
            n = n + 1
            a2 = Fraction(a, n * 2)
            continue
        else:
            break
    return n

d = raw_input("Enter desired tower length: ")
print stack(d)
End of Code | https://discuss.codecademy.com/t/help-eoferror-eof-when-reading-a-line-i-tried-using-raw-input-but-then-the-error-showed-up/509789/2 | CC-MAIN-2020-29 | refinedweb | 101 | 73 |
FYI... as I suspected, this is a bug in Axiom. I've filed a jira ticket.
Until the issue is fixed, one way to get around the problem is to set
the xml:lang attribute manually rather than using the setLanguage
method. The downside of this is that the feed.getLanguage() method will
not work on the feed but if all you're doing is creating a feed for
serialization, that shouldn't be a problem.
Feed feed = abdera.newFeed();
//feed.setLanguage("en-US");
feed.setAttributeValue("xml:lang", "en-US");
- James
Kiran Subbaraman wrote:
> I created a feed using Abdera:
> Factory factory = Abdera.getNewFactory();
> Feed feed = factory.newFeed();
> feed.setLanguage("en-US");
> feed.setBaseUri("");
> .....
>
> When I deploy this WAR file, and view the generated feed from within
> Firefox, I get to see the feed xml displayed in the Firefox.
> With Internet Explorer, I get the error that this xml cannot be displayed.
> Specifically, I see this in IE7:
> The namespace prefix is not allowed to start with the reserved string "xml".
> Line: 1 Character: 104
>
> <?xml version="1.0" encoding="UTF-8"?><feed xml: xml: xmlns:xml="" .......
>
> Whereas, when I comment out the lines:
> /* feed.setLanguage("en-US");
> feed.setBaseUri(""); */
> Both IE and Firefox display the generated feed correctly.
> What's going on? Am I missing something in Abdera?
> Thanks,
> Kiran.
> | http://mail-archives.apache.org/mod_mbox/abdera-user/200711.mbox/%3C473C8666.5010806@gmail.com%3E | CC-MAIN-2018-34 | refinedweb | 221 | 60.11 |
I am creating a tool window using the code below. Surprisingly, when there is an exception in my user control, visual studio crashes. (I am using resharper 5 eap beta)
Why is my exception not caught by ReSharper and displayed with ReSharper's default exception dialog? How can I change my code so that my exceptions are caught by ReSharper?
And another question is that with ReSharper 4.5 I could add a 'form' as a control to a ToolWindowContent. With ReSharper 5 beta I get an exception. Is this by design or will this be fixed?
best,
Joe
[assembly: ToolWindowDescriptor(ToolWindowVisibilityPersistenceScope = ToolWindowVisibilityPersistenceScope.SOLUTION, Guid = "8CC73FC8-312C-4f1c-837A-2ED56D5D59F4", Text = "Foo text...", Id = "myTestWindow")]
[ActionHandler("MyPlugin.TestAction")]
public class MyActionHandler : IActionHandler {
    public bool Update(IDataContext context, ActionPresentation presentation, DelegateUpdate nextUpdate) {
        return true;
    }

    public MyActionHandler() {
    }

    private MyControl control = null;

    public void Execute(IDataContext context, DelegateExecute nextExecute) {
        IToolWindowFrame frame = WindowManager.Instance.GetToolWindowFrame("myTestWindow");
        if (control == null) {
            control = new MyControl();
            IToolWindowContent tw = new ToolWindowContent(control, "Tool Window Test title");
            frame.Content = tw;
            tw.Control.BackColor = SystemColors.Control;
            DaemonStage.Control = control;
        }
        frame.Show();
    }
}

public partial class MyControl : UserControl {
    private void toolStripButton2_Click(object sender, EventArgs e) {
        throw new Exception("Test exception. This should be caught by ReSharper!!!");
    }
}
Hello,
ReSharper ignores any exceptions that do not have ReSharper code in their
stack trace (otherwise we'd get lots of false positives for any foreign managed
code in VS). Any WinForms code calling into your handlers is likely to have
only WinAPI wrappers and your code on the stack traces, and thus would not
pass the filter.
Add try-catch, use the Logger class to spawn the dialog for the caught exception
(LogException method in most cases).
What do you mean by a "form"? Is this the System.Windows.Forms.Form class?
If yes, I do not think it should be allowed, at a glance. What's the code
you're using from the base Form class that you need in a tool window?
—
Serge Baltic
JetBrains, Inc —
“Develop with pleasure!”
It's good to know how ReSharper handles exceptions, thanks for the good information about the stack trace.
Sorry, yes I mean 'System.Windows.Forms.Form'. Of course it does not give any benefits, it was just that by accident my test was using this class in 4.5. (which worked fine, but failed in 5.0), and after all Form inherits from Control. But of course in a real plugin I wouldn't use such a class.
Thanks for your help.
Best,
Joe | https://resharper-support.jetbrains.com/hc/en-us/community/posts/207046505-Exception-handlig-with-own-plugins-?page=1 | CC-MAIN-2020-34 | refinedweb | 424 | 50.84 |
Only a few years ago, computers with multiple CPUs were considered exotic. Today, multi-core systems are routine. The challenge faced by programmers has been to take advantage of these systems. Apple’s ground-breaking Grand Central Dispatch offers a new approach to programming for modern computer architectures.
The obvious way to take advantage of multiple cores is to write programs that do several tasks at once. Broadly, this is known as "concurrent programming". The traditional approach to concurrent programming is to use multiple threads of control. Threads are a generalization of the original programming model – well suited to single CPU systems – where a single thread of control determined what the computer would do, step by step. Most multi-threaded programs are structured like several independent programs running together. They share a memory address space and common resources like file descriptors, but each thread has its own step-by-step flow of control.
Multi-threaded programming solves the problem of making use of a computer with two or more cores, but computer hardware designers keep adding twists. Computers have ever more cores, so a multi-threaded program that takes the best advantage of a system with two cores may not perform as well as it might on a system with eight. All those cores come with a cost as well. Each core uses power and generates heat. To manage energy consumption and heat production, some computers will vary the number of cores that are active at any given time. It would be enough of a burden to try to write programs that work well on systems that have 2, 4, or 8 cores. It's an even tougher challenge to write multi-threaded programs that make the best use of hardware when the number of cores changes as the program runs.
This is where GCD steps in to lighten the programmer’s load. GCD works with the Operating System to keep track of the number of cores available and the current load on the system. This allows a program to take the best advantage of the system’s hardware resources, while allowing the operating system to balance the load of all the programs currently running along with considerations like heating and battery life.
Programming with GCD means leaving behind the details about threads and focusing on the essential work that needs to be done. The program assigns work items to GCD, identifying which items must be done sequentially, which may be done in parallel, and what synchronization is required between them. GCD dispatches the work to the system’s available cores.
If you are just starting out with concurrent programming, you can safely skip this section. If you have experience using threads, read on! It will help you understand the differences between writing multi-threaded code and using GCD.
Programming with GCD is different than multi-threaded programming. When you program with threads, you think about what each thread will be doing at all times. Each thread is a mini-program with its own flow of control. You decide when to start and end threads, and you establish ways for them to interact – typically using locks or semaphores to protect shared resources where access by multiple threads must be limited.
When you switch to GCD, you still need to identify shared resources and critical sections of code that might need protection, but you don’t need to think about threads very much. Remember that GCD will create and destroy threads for you behind the scenes. Your task is to define the program as a set of independent units of work that needs to be done. These work units are expressed in your code as either functions or blocks. Blocks are a newly supported feature of the C language families on macOS. There’s more information about blocks in the next section, but for now just think of a block as a snippet of code that can be passed around and executed like a function where it is required.
You’ll find that code written for GCD tends to have an event-driven style. GCD defines an object type called a dispatch queue, which acts as a runloop as well as a work queue. You can tell a GCD dispatch queue to watch for events – like a timer firing, a message received on a network socket, text typed by a user, and so on – and have it trigger a function or block of code whenever the event occurs. The function or block responds by performing some of its own computations and possibly telling GCD to execute some further functions or blocks by assigning them to dispatch queues.
Before examining the GCD library routines and data types, let’s take a moment to look at the basic units of code you will be defining. GCD works equally with functions and blocks as the work units that you give to GCD, which in turn dispatches their execution on various threads. We expect that you are already familiar with functions, so we’ll leave them aside and focus on blocks.
Blocks are a newly-supported feature of the C language family on macOS. A block is a segment of code that looks like this:
^{ printf("Hello World\n"); }
In some ways, a block is like a function definition. It can have arguments as part of its definition. It has read-only (by default) access to local variables. Blocks can be assigned to variables, and then invoked using the variable name. In this example, a block is assigned to the variable speak. The block has an argument x, but it also uses the variable greeting.
const char *greeting = "Hello";
void (^speak)(char *) = ^(char *x){ printf("%s %s\n", greeting, x); };
speak("World");
The result of running this example would be “Hello World”. Take a moment to create a simple C program that does this! Experiment with some variations like taking command-line arguments.
We think you’ll like blocks. They work well with GCD and you’ll see lots of examples in the sections ahead. If you want to learn more about blocks, see Blocks Programming Topics at Apple Developer Connection.
Your main point of contact with GCD is in the libdispatch library. It’s included in the macOS system library (libSystem), automatically linked in when you build your program with Xcode or cc and ld. You can use the following directive to include all the interface definitions.
#include <dispatch/dispatch.h>
The on-line manual pages, starting with the dispatch(3) manual, provide details on the various library routines, data types and structures, and constants. The Grand Central Dispatch (GCD) Reference is available at Apple Developer Connection. If you are happier reading header files, you’ll find them in /usr/include/dispatch. The library is divided into several subcomponents, each with its own header file.
As we noted above, libdispatch provides parallel versions of its routines, one for blocks and one for functions. For example, parallel to dispatch_async is dispatch_async_f, which takes a function pointer and a data pointer. The data pointer is passed to the function as a single argument when the function is dequeued and executed. In the examples in this article, we use the block versions of the libdispatch routines. Please refer to the GCD documentation for information on using functions.
Dispatch queues (dispatch_queue_t types) are the workhorses of GCD. A queue is a pipeline for executing blocks and functions. At times you will use them as an execution pipeline, and at times you’ll use queues in combination with other GCD types to create flexible runloops. We’ll explore all the essentials below.
There are two types of queues. Serial queues execute work units in First-In-First-Out (FIFO) order, and execute them one at a time. Concurrent queues also execute work units in FIFO order, but they don’t wait for one unit to finish executing before starting the next.
This is one of the few places where threads programmers will recognize a connection between GCD and threads. Concurrent queues will use as many threads as they have available to chew through their workload. However, don’t be fooled into thinking that queues, work units, and threads are tightly connected. GCD will use threads from a thread pool, create new threads, and destroy threads in inscrutable ways! A thread that was – a moment ago – being used to execute work from a serial queue might suddenly get reused for work on a concurrent queue, and then may get put to work for some other purpose. It’s best to think of queues as little workhorse engines that will get your work done for you, either one item at a time, or as fast that they can by doing work concurrently.
GCD provides four pre-made queues for you to use: one serial queue called the “main” queue, and three concurrent queues. The three concurrent queues have execution priority levels: low, normal or default, and high priority. If you have background work that doesn’t need to be done quickly, just submit it to the low-priority queue. If you need really fast response, use the high priority concurrent queue.
GCD provide you with one serial queue “out of the box”, but serial queues are easy to create, and they are cheap, consuming relatively small amounts of memory and processing time by themselves. So while a serial queue might seem plodding – doing only one unit of work at a time – your code can create many of them. Each serial queue will run in parallel with all the other queues, once again providing a way for your program to take advantage of the system’s processing power. Serial queues also have another useful trick that we’ll see a bit later.
There’s one more thing to consider before we get to a code example. When you submit a work unit (we’ll be using blocks for our examples) to a queue, do you want to just add it to the execution pipeline and go on with other tasks? Or do you want to wait for all the work that’s currently enqueued to be processed by the FIFO pipeline, and then for your new work unit to be dequeued and finish executing before you go on? If you just want to submit the work and go on, use dispatch_async. If you need to know that the work (and everything enqueued in front of it) has completed, submit it using dispatch_sync.
#include <stdio.h>
#include <stdlib.h>
#include <dispatch/dispatch.h>

/*
 * An example of executing a set of blocks on the main dispatch queue.
 * Usage: hello name ...
 */
int main(int argc, char *argv[])
{
    int i;

    /*
     * Get the main serial queue.
     * It doesn't start processing until we call dispatch_main()
     */
    dispatch_queue_t main_q = dispatch_get_main_queue();

    for (i = 1; i < argc; i++) {
        /* Add some work to the main queue. */
        dispatch_async(main_q, ^{ printf("Hello %s!\n", argv[i]); });
    }

    /* Add a last item to the main queue. */
    dispatch_async(main_q, ^{ printf("Goodbye!\n"); });

    /* Start the main queue */
    dispatch_main();

    /* NOTREACHED */
    return 0;
}
Try compiling and running this program. You’ll notice that the program never exits. The reason is that the main queue continues waiting for more work to do. You can fix that by adding a call to exit in the last block.
/* Add a last item to the main queue. */
dispatch_async(main_q, ^{
    printf("Goodbye!\n");
    exit(0);
});
Now let’s try a variation that looks nearly identical to the first version of this program. Instead of using the main queue, we create a new serial queue. Since the main queue doesn’t have any work to do, the dispatch_main call has been removed, so the program will exit at the final return statement. This seems to be an alternative way to make the program exit when it finishes printing, and it also illustrates how easy it is to create a new serial queue.
Unlike the main queue, serial queues produced by dispatch_queue_create are active as soon as they are created. Try compiling and running this example!
#include <stdio.h>
#include <stdlib.h>
#include <dispatch/dispatch.h>

/*
 * An example of executing a set of blocks on a serial dispatch queue.
 * Usage: hello [name]...
 */
int main(int argc, char *argv[])
{
    int i;

    /* Create a serial queue. */
    dispatch_queue_t greeter = dispatch_queue_create("Greeter", NULL);

    for (i = 1; i < argc; i++) {
        /* Add some work to the queue. */
        dispatch_async(greeter, ^{ printf("Hello %s!\n", argv[i]); });
    }

    /* Add a last item to the queue. */
    dispatch_async(greeter, ^{ printf("Goodbye!\n"); });

    return 0;
}
Ooops! When you run this program, you will see a problem. Some or even all of the output is missing!
Yes, this is a contrived example, but it illustrates something interesting! The reason that the output is missing is that the program exits – at the return(0) statement – before the “Greeter” serial queue can do all of its work. After all, you told it to execute all the blocks asynchronously.
The problem can be fixed by submitting the last block using dispatch_sync instead of dispatch_async. That call returns after the block has been dequeued and executed. Once the final work has been done, it’s safe to return and exit the program.
/* Add a last item to the queue and wait for it to complete. */
dispatch_sync(greeter, ^{ printf("Goodbye!\n"); });
A call to dispatch_sync blocks until the queue has dequeued all the items already in the pipeline, and until the item you just submitted has completed. That’s exactly what you need in a situation like the example above, but it’s good to be a little bit cautious when using dispatch_sync. More than one programmer has inadvertently created a deadlock in their program logic caused by making a synchronous call in the wrong place!
Also note that calling dispatch_sync doesn’t guarantee that the queue will be empty when the call returns. Other parts of your program may submit work items to a queue after a blocking call to dispatch_sync. So while you know that everything that was ahead of your call in the queue has been dispatched, new items may have been added behind it.
We’ve seen that serial queues are execution engines, and that they do their work in FIFO order, one item at a time. What some people miss on first reading is that “one item at a time” is a very useful control mechanism. Thread programmers are familiar with the notion of “critical sections” of code. These are parts of a multi-threaded program that only one thread at a time should execute. This is often done to protect the integrity of some data while it is being modified. If two threads accessed the data at the same time, it might not be updated correctly, or a thread might get an incorrect value. No doubt you have probably rushed ahead and exclaimed “AHA! I could use a serial queue for that!” That’s correct! A serial queue can also be used as a device that provides mutual exclusion. After all, it only executes one item at a time.
Let’s say you have a data structure
foo that gets updated by the two functions
increase(foo) and
decrease(foo). Although you might have many concurrently executing blocks, you can’t allow more than one thread at a time to increase or decrease the foo structure. The solution is easy! Create a serial queue for the special purpose of adjusting foo.
dispatch_queue_t foo_adjust = dispatch_queue_create("Adjust foo", NULL);
Now, whenever you need to change foo, use the
foo_adjust serial queue to do the work. You’ll be guaranteed that only one block at a time will make a change.
dispatch_sync(foo_adjust, ^{
    increase(foo);
});
...
dispatch_sync(foo_adjust, ^{
    decrease(foo);
});
If you need the call to complete before going on with your work, then use
dispatch_sync. If you only need to ensure that the increase or decrease is done without interference, then you can use
dispatch_async.
A special-purpose serial queue like this is handy for printing output from your program. You can submit a block to execute on the printing queue and be sure that all the lines you print in that block will appear together and without interference from some other code that might be trying to print at the same time.
dispatch_async(printer, ^{
    uint32_t i;

    printf("Processing Gadget Serial Number %u Color %s",
           gadget_serial_number(g), gadget_color(g));
    printf(" Using Widgets:");
    for (i = 0; i < gadget_widget_count(g); i++)
        printf(" %u", gadget_get_widget(g, i));
    printf("\n");
});
In some cases you might want to limit access to some resource to at most some fixed number of threads. If that limit is one, then you can use a serial queue to give one thread at a time exclusive access. However, you can also use a counting semaphore. They are created with
dispatch_semaphore_create(n), where n is the number of simultaneous accesses the semaphore will allow. Code that wants to access a limited resource calls
dispatch_semaphore_wait before using the resource, and then calls
dispatch_semaphore_signal to relinquish its claim on the resource. Here’s a code example which allows up to three threads to simultaneously access a “paint bucket”.
#define PAINTERS 3
...
dispatch_semaphore_t paint_bucket = dispatch_semaphore_create(PAINTERS);
...
dispatch_semaphore_wait(paint_bucket, DISPATCH_TIME_FOREVER);
use_paint_color();
dispatch_semaphore_signal(paint_bucket);
...
The second argument of
dispatch_semaphore_wait is a timer. If you don’t want to wait for the semaphore in case it is unavailable, you can call
dispatch_semaphore_wait(paint_bucket, DISPATCH_TIME_NOW);. If you are willing to have your code wait for a few seconds, you can create an appropriate time value using
dispatch_time or
dispatch_walltime. More information about these routines is in the Keeping time section below.
A simple lock, which allows only one thread at a time, can be created using
dispatch_semaphore_create(1). While you can use a simple lock in place of the serial queue mechanism we explored above, a serial queue is less error prone. A lock requires you to remember to unlock when you are done with it. That may sound pretty basic when you first write some code that uses a lock. However, after many edits and possibly many different programmers, it’s not uncommon for someone to forget to unlock when they return from a function call or re-organize the code. On the other hand, submitting a block or a function call to a serial queue will automatically lock and unlock for each item of work.
You can often improve the performance of a program by delaying initialization code until it is required. This is called “lazy” or “just-in-time” initialization. When your program has many items executing concurrently, you need a way to specify that certain operations are never done more than once. libdispatch provides an easy way to do this with
dispatch_once or
dispatch_once_f.
static dispatch_once_t initialize_once;
...
dispatch_once(&initialize_once, ^{
    /* Initialization code */
    ...
});
The
static declaration of
initialize_once ensures that the compiler will make its value start off as zero. libdispatch will ensure that the initialization occurs at most once.
Programs that react to events generated by users, network sources, timers, and other inputs require a runloop: code that waits for an event, responds by taking some action, and then returns to wait for another event. Building a runloop with GCD just requires a dispatch queue and a dispatch source object, of type
dispatch_source_t.
A dispatch source is paired with a queue, and is assigned the task of watching for a specific event. When the event occurs, the source submits a block or a function to the queue. GCD provides sources that will respond to any of the following events:
- a timer firing
- a UNIX signal being delivered
- data becoming available to read on a file descriptor
- a file descriptor becoming ready for writing
- changes to a file-system object (vnode)
- state changes in another process, such as exiting or calling fork or exec
- a message arriving on a Mach port
- custom events triggered by your own code
If you’ve been programming long enough to have written your own runloops that handle all these sorts of events, you’ll quickly start to appreciate how much easier it is with GCD. As we’ve seen above, it only takes a single libdispatch call to get one of the four pre-defined dispatch queues, or to create a new serial queue. Creating a dispatch source to submit a block or function to a queue when an event occurs is almost as easy. For example, say you want to run an input processing routine to trigger on the main queue every time data arrives on a file descriptor.
input_src = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ, my_file, 0,
                                   dispatch_get_main_queue());

dispatch_source_set_event_handler(input_src, ^{
    process_input(my_file);
});

dispatch_source_set_cancel_handler(input_src, ^{
    close(my_file);
});

dispatch_resume(input_src);
The first line creates a
dispatch_source_t object that watches for data on the
my_file file descriptor, and pairs the new source with the main dispatch queue. When a dispatch source is created, it is in a “suspended” state, allowing you to configure it before it goes to work. In this example, we assign an event handler and a cancellation handler. Then we activate the source with
dispatch_resume.
The event handler is a block of code we want to be executed when the source is triggered – in this case by data arriving at a file descriptor. The cancellation handler is a block that is executed exactly once when a source has been cancelled. The asynchronous call
dispatch_source_cancel starts the process of shutting down a dispatch source. The call is not pre-emptive, so there may be event handler blocks still executing after a call to
dispatch_source_cancel. The cancellation handler is executed once after any active event handlers have completed. At that point, it’s safe to close file descriptors, mach ports, or clean up any other resources that might have been in use to handle source events.
You can cause a dispatch source to suspend anytime using
dispatch_suspend, and reactivate it with
dispatch_resume. This just causes a pause, and doesn’t involve the cancellation handler. You might suspend and resume if you wanted to change the event handler, or if you simply wanted part of your program to go quiet for a while.
Let’s try an example program that creates some timer sources. Each timer has a duration (the interval between successive timer firings) and a count (how many times to fire). When all the timers have finished, the program exits. This example program uses a couple of serial queues for printing and updating a counter, as we saw in the examples in the section on serial queues.
The program also uses some aspects of block programming that we haven’t seen before. It creates dispatch source objects that are local in the scope of a for loop, but the variables seem to be used outside that loop – in the event handler block for each source – once the program invokes
dispatch_main. The reason this works is that the entire block is copied by the dispatch source. A block includes not only program instructions, but data (variables) as well, so the copy of the block includes copies of the duration and src variables that it uses. These variables default to being “read only”, and can’t be modified in the block. The count variable is also used in the block, but it is modified. To make it “read/write”, it is declared using the
__block qualifier. See Blocks Programming Topics for all the details about programming with blocks.
/*
 * usage: source_timer [duration count]...
 *
 * example: source_timer 1 10 2 7 5 5
 *
 * Creates a 1 second timer that counts down 10 times (10 second total),
 * a 2 second timer that counts down 7 times (14 second total),
 * and a 5 second timer that counts down 5 times (25 second total).
 */
#include <stdio.h>
#include <stdlib.h>
#include <dispatch/dispatch.h>

/*
 * We keep track of how many timers are active with ntimers.
 * The ntimer_serial queue is used to update the variable.
 */
static uint32_t ntimers = 0;
static dispatch_queue_t ntimer_serial = NULL;

/*
 * The print_serial queue is used to serialize printing.
 */
static dispatch_queue_t print_serial = NULL;

int main(int argc, char *argv[])
{
    dispatch_queue_t main_q;
    int i;

    /*
     * Create our serial queues and get the main queue.
     */
    print_serial = dispatch_queue_create("Print Queue", NULL);
    ntimer_serial = dispatch_queue_create("N Timers Queue", NULL);
    main_q = dispatch_get_main_queue();

    /*
     * Pick up arguments two at a time.
     */
    for (i = 1; i < argc; i += 2)
    {
        uint64_t duration;
        dispatch_source_t src;

        /*
         * The count variable is declared with __block to make it read/write.
         */
        __block uint32_t count;

        /*
         * Timers are in nanoseconds. NSEC_PER_SEC is defined by libdispatch.
         */
        duration = atoll(argv[i]) * NSEC_PER_SEC;
        count = atoi(argv[i + 1]);

        /*
         * Create a dispatch source for a timer, and pair it with the main queue.
         */
        src = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, main_q);

        /*
         * Set the timer's duration (in nanoseconds).
         */
        dispatch_source_set_timer(src, 0, duration, 0);

        /*
         * Set an event handler block for the timer source.
         * This is the block of code that will be executed when the timer fires.
         */
        dispatch_source_set_event_handler(src, ^{
            /* Count down to zero */
            count--;
            dispatch_async(print_serial, ^{
                printf("%s second timer count %u\n", argv[i], count);
            });

            if (count == 0)
            {
                /*
                 * When the counter hits zero, we cancel and release the
                 * timer source, and decrement the count of active timers.
                 */
                dispatch_source_cancel(src);
                dispatch_release(src);
                dispatch_sync(ntimer_serial, ^{
                    ntimers--;
                });
            }

            if (ntimers == 0)
            {
                /*
                 * This was the last active timer. Say goodbye and exit.
                 */
                dispatch_sync(print_serial, ^{
                    printf("All timers finished. Goodbye\n");
                });
                exit(0);
            }
        });

        /*
         * Increment the count of active timers, and activate the timer source.
         */
        dispatch_sync(ntimer_serial, ^{
            ntimers++;
        });
        dispatch_resume(src);
    }

    /*
     * When we reach this point, all the timers have been created and are active.
     * Start the main queue to process the event handlers.
     */
    dispatch_main();

    return 0;
}
Other source types are as easy to use as the examples above. Let’s look at the code that a GCD program would need to use to receive and process UNIX signals in a runloop.
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>
#include <dispatch/dispatch.h>

int main(int argc, char *argv[])
{
    dispatch_source_t sig_src;

    printf("My PID is %u\n", getpid());

    signal(SIGHUP, SIG_IGN);

    sig_src = dispatch_source_create(DISPATCH_SOURCE_TYPE_SIGNAL, SIGHUP, 0,
                                     dispatch_get_main_queue());
    dispatch_source_set_event_handler(sig_src, ^{
        printf("Caught SIGHUP\n");
    });
    dispatch_resume(sig_src);

    dispatch_main();

    return 0;
}
Try running this program in one shell, and send it HUP signals from another shell. The only wrinkle in using signals is that you need to disable the default signal handling with signal(SIGHUP, SIG_IGN) so that GCD can “catch” the signals.
If you are an old hand at UNIX programming, stop here for a second to appreciate how much easier it is to handle signals using a dispatch source than using a signal handler! There are no special rules for what you can safely do while handling the signal. There are no strange mechanisms required to tell your runloop that a signal was caught. With
DISPATCH_SOURCE_TYPE_SIGNAL sources, signals are just events like data on a file descriptor or a timer firing.
Before ending this section, we’ll examine the
DISPATCH_SOURCE_TYPE_DATA_ADD and
DISPATCH_SOURCE_TYPE_DATA_OR types. These are useful when one part of a program needs to communicate with another. The communication is achieved through an unsigned long value. The section of code that wants to trigger the data source just calls
dispatch_source_merge_data(data_src, value);
The event handler function or block for the data source can fetch the data value using
data = dispatch_source_get_data(data_src);
The difference between the ADD type and the OR type has to do with how the data gets coalesced if there are multiple calls to
dispatch_source_merge_data before the dispatch data source’s event handler gets executed.
dispatch_source_merge_data will either add or logically OR the values with the dispatch source’s data buffer. The data buffer is reset for each invocation of the data source’s event handler.
Other dispatch source types also provide information using
dispatch_source_get_data. For example, a
DISPATCH_SOURCE_TYPE_PROC source can be used to monitor a process. The mask parameter passed to
dispatch_source_create for this type specifies the events you wish to monitor, such as the process exiting, calling
fork, calling
exec, and others. When the source is triggered by some event, the value returned by
dispatch_source_get_data will have bits set corresponding to the mask bits, reporting which events took place. Similarly, sources of type
DISPATCH_SOURCE_TYPE_VNODE report which changes occurred using bits set in the value returned by
dispatch_source_get_data. See the on-line manual page for
dispatch_source_create for details.
Try writing a few of these simple programs with some other dispatch source types!
GCD helps you gain performance when you can split large programming tasks into smaller operations that can run independently on multiple cores. It’s often necessary to know when several independent operations have completed. That’s the purpose of dispatch groups. Conceptually, a group keeps track of a set of work items. Your code can either wait for them all to complete, or you can set up the group to submit a block or function to a queue when they all complete.
There are several ways to use dispatch groups. Here’s a simple example that dispatches three concurrent operations using blocks. It sets up a group to dispatch a block to the main queue when they complete.
dispatch_group_t grp = dispatch_group_create();
dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

/*
 * Three independent operations must be done...
 */
dispatch_group_async(grp, q, ^{
    /* Do some operation */
    ...
});
dispatch_group_async(grp, q, ^{
    /* Do another operation */
    ...
});
dispatch_group_async(grp, q, ^{
    /* Do yet another operation */
    ...
});

/*
 * Tell the group to invoke a block to deal with
 * the data after all the operations are done.
 */
dispatch_group_notify(grp, dispatch_get_main_queue(), ^{
    data_processing_done(ptr);
});

dispatch_release(grp);
The three operations will be dispatched on the default priority concurrent queue. When all three items complete, the group will dispatch the last block. Note that
dispatch_group_notify just tells the group what to do when all the items it is tracking complete. The call doesn’t block, and your code can go on to do other things.
Internally, a dispatch group simply maintains a counter that has special rules when the value is zero. You can have the group submit a block or function to a queue whenever the counter is zero, or you can use
dispatch_group_wait to block (with a timeout) for the count to be zero.
dispatch_group_wait(grp, DISPATCH_TIME_FOREVER);
You can add more work items to a group after the counter reaches zero to track another set of work items. Each time the internal counter becomes zero, the group will dispatch the notification block or function, or you can call
dispatch_group_wait again.
This brings up an interesting edge case. In the example above,
dispatch_group_notify is called after the calls to
dispatch_group_async. If you set up a group notification before any work is added, the group may dispatch the notification immediately since the internal counter is zero to start. In practice this is non-deterministic.
dispatch_group_async – and similarly
dispatch_group_async_f – are really just convenience routines that behave like this:
void
dispatch_group_async(dispatch_group_t group, dispatch_queue_t queue,
                     dispatch_block_t block)
{
    dispatch_retain(group);
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        block();
        dispatch_group_leave(group);
        dispatch_release(group);
    });
}
Note that the group is retained until after the call to
dispatch_group_leave. In the simple example above, the blocks submitted to the concurrent queue may still be waiting to execute when the code calls
dispatch_release(grp). If
dispatch_group_async did not retain the group, it may be freed before being used.
The internal counter is incremented by
dispatch_group_enter, and decremented by
dispatch_group_leave. You could do this in your own code, but using
dispatch_group_async ensures that the counter is always incremented and decremented correctly.
Dispatch groups will keep track of any number of work items and will dispatch a completion callback when they finish. If you only need to dispatch a callback when a single asynchronous item completes, you can simply use nested
dispatch_async calls.
dispatch_retain(callback_queue);
dispatch_async(some_queue, ^{
    /* work to be done */
    ...
    dispatch_async(callback_queue, ^{
        /* code to execute when work has completed */
        ...
    });
    dispatch_release(callback_queue);
});
For the same reason that
dispatch_group_async retains and then releases its group, code that uses nested
dispatch_async calls must retain and release object references.
Two time-related constants were introduced in the section on using dispatch semaphores:
DISPATCH_TIME_NOW and
DISPATCH_TIME_FOREVER. These two values are the endpoints of a time scale represented by
dispatch_time_t type variables. When used with
dispatch_semaphore_wait or
dispatch_group_wait routines,
DISPATCH_TIME_NOW means that you are not willing to wait at all. Those routines will return an error status if the semaphore is unavailable or the group counter is non-zero.
DISPATCH_TIME_FOREVER simply means you are willing to wait as long as required.
In addition to the two “wait” routines, libdispatch uses a
dispatch_time_t variable in
dispatch_after and
dispatch_after_f. These routines submit a block or function to a queue (using
dispatch_async or
dispatch_async_f) at some time in the future. Calling with a time value of
DISPATCH_TIME_NOW will cause the work to be dispatched immediately. Using
DISPATCH_TIME_FOREVER clearly doesn’t make a lot of sense, but you can be assured that it truly will be forever before the work is submitted to a queue!
Values between “now” and “forever” can be of two forms: n nanoseconds from now in “system” time, or at a specific “wall clock” time. System time uses an internal clock that just keeps track of how many seconds the system has been active. If you put your computer to sleep, the system clock will sleep as well. Many “wall clock” hours might pass with zero or very little system clock time passing.
dispatch_time_t values are generated using two very similar routines.
dispatch_time_t dispatch_time(dispatch_time_t base, int64_t offset);
dispatch_time_t dispatch_walltime(struct timespec *base, int64_t offset);
dispatch_walltime always produces a wall clock time value. The base parameter may point to a filled-out
struct timespec indicating a specific time, or NULL indicating the current wall-clock time. The offset parameter represents a value in nanoseconds which is added to the base value. To produce a time value 5 minutes from the current time:
dispatch_time_t in_five_minutes = dispatch_walltime(NULL, 5 * 60 * NSEC_PER_SEC);
dispatch_time adds an offset (in nanoseconds) to a base time. The base time can be a wall clock time that was previously produced by
dispatch_walltime, or it can be DISPATCH_TIME_NOW, which will produce a system time value.
Note: At the time of writing of this article,
dispatch_group_wait and
dispatch_semaphore_wait treat timeouts strictly as system clock nanoseconds. This may change in a future update. Also note that a dispatch source of
DISPATCH_SOURCE_TYPE_TIMER that uses a wall clock time may be delayed in triggering following a system wake from sleep. This is also the case for a work item waiting for dispatch at a wall clock time in
dispatch_after or
dispatch_after_f. These timers will trigger as soon as there is any activity in your program. This behavior may change in a future update.
We hope that the discussion and examples provided here will help you on your way to becoming a proficient GCD programmer. However, the best way to learn is to jump in and write some code! Try writing a few small programs that explore just some part of the libdispatch API. Experiment! Try some variations! We think you’ll like programming with GCD, and that you’ll appreciate how it makes it possible to take advantage of modern multi-core computer systems.
A complete library of resource information is available on the Web at Apple Developer Connection, including detailed documentation on programming with blocks and more information on GCD.
(Last updated: 2009-11-04) | https://apple.github.io/swift-corelibs-libdispatch/tutorial/ | CC-MAIN-2018-39 | refinedweb | 5,797 | 63.09 |
Opened 9 years ago
Closed 9 years ago
#8361 closed (invalid)
instance.content.url should return empty string if no file associated
Description
Hello everybody.
Due to the File storage refactoring, urls of ImageFields can now be retrieved not by using
instance.get_content_url(),
but rather by
instance.content.url
At the moment, if there is no file associated with the FileField, an exception is thrown.
In my opinion, this is not optimal.
Previously, you could do something like this in the template:
{% if instance.get_content_url %}
<img src="/userimages/{{ instance.get_content_url }}">
{% else %}
<img src="/userimages/default.jpg">
{% endif %}
If you try to do this with instance.content.url, you get an exception.
I think it would be better, if just an empty string would be returned.
$DJANGOPATH\db\models\fields\files.py line 55
self._require_file()
should become something like that:
try:
    self._require_file()
except:
    return ""
This probably isn't the most elegant way to do this, but it seems to work...
Attachments (1)
Change History (2)
Changed 9 years ago by
comment:1 Changed 9 years ago by
Apparently I didn't have enough sleep the last days! :)
This works just fine:
{% if instance %}
<img src="/userimages/{{ instance.get_content_url }}">
{% else %}
<img src="/userimages/default.jpg">
{% endif %}
So please just ignore this ticket.
Sorry for the false report.
diff for /django/db/models/fields/files.py | https://code.djangoproject.com/ticket/8361 | CC-MAIN-2017-09 | refinedweb | 225 | 53.88 |
Functional programming seems to be gaining popularity recently!
The functional programming language for the .NET framework is F#. However, although C# is an object-oriented language at its core, it also has a lot of features that can be used with functional programming techniques.
You might already be writing some functional code without realizing it.
Functional programming is an alternative programming paradigm to the currently more popular and common object-oriented programming.
There are several key concepts that differentiate it from the other programming paradigms. Let’s start by providing definitions for the most common ones, so that we will recognize them when we see them applied throughout the article.
The basic building blocks of functional programs are pure functions. They are defined by the following two properties:
- Their result depends only on the arguments passed to them: given the same arguments, a pure function always returns the same result.
- They have no side effects: they do not modify any state outside the function or otherwise affect the rest of the program.
Because of these properties, a function call can be safely replaced with its result, e.g. to cache the results of computationally intensive functions for each combination of its arguments (technique known as memoization).
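To make the memoization idea concrete, here is a small C# sketch. The Memoize helper and the SlowSquare function are made up for this example; they are not part of the .NET framework. The callCount field exists only to demonstrate that the cached call never reaches the underlying function a second time.

```csharp
using System;
using System.Collections.Generic;

class MemoizationDemo
{
    // Wraps a function with a cache keyed by its argument.
    // This is only safe because the wrapped function is pure.
    static Func<T, TResult> Memoize<T, TResult>(Func<T, TResult> fn)
    {
        var cache = new Dictionary<T, TResult>();
        return arg =>
        {
            TResult result;
            if (!cache.TryGetValue(arg, out result))
            {
                result = fn(arg);
                cache[arg] = result;
            }
            return result;
        };
    }

    static int callCount = 0;

    static int SlowSquare(int x)
    {
        callCount++; // only here to demonstrate the caching
        return x * x;
    }

    static void Main()
    {
        var square = Memoize<int, int>(SlowSquare);
        Console.WriteLine(square(4)); // computed: 16
        Console.WriteLine(square(4)); // served from the cache: 16
        Console.WriteLine(callCount); // SlowSquare ran only once: 1
    }
}
```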
Pure functions lend themselves well to function composition.
This is a process of combining two or more functions into a new function, which returns the same result as if all its composing functions were called in a sequence. If ComposedFn is a function composition of Fn1 and Fn2, then the following assertion will always pass:
Assert.That(ComposedFn(x), Is.EqualTo(Fn2(Fn1(x))));
Composition is an important part of making functions reusable.
Having functions as arguments to other functions can further increase their reusability. Such higher-order functions can act as generic helpers, which apply another function passed as argument multiple times, e.g. on all items of an array:
Array.Exists(persons, IsMinor);
In the above code, IsMinor is a function, defined elsewhere. For this to work, the language must support first-class functions, i.e. allow functions to be used as first-class language constructs just like value literals.
Data is always represented with immutable objects, i.e. objects that cannot change their state after they have been initially created. Whenever a value changes, a new object must be created instead of modifying the existing one. Because all objects are guaranteed to not change, they are inherently thread-safe, i.e. they can be safely used in multithreaded programs with no threat of race conditions.
As a direct consequence of functions being pure and objects being immutable, there is no shared state in functional programs.
Functions can act only based on their arguments, which they cannot change and therewith, affect other functions receiving the same arguments. The only way they can affect the rest of the program is through the result they return, which will be passed on as arguments to other functions.
This prevents any kind of hidden cross-interaction between the functions, making them safe to run in any order or even in parallel, unless one function directly depends on the result of the other.
With these basic building blocks, functional programs end up being more declarative than imperative, i.e. instead of describing how to calculate the result, the programmer rather describes what to calculate.
The following two functions, which convert an array of strings to lower case, clearly demonstrate the difference between the two approaches:
string[] Imperative(string[] words)
{
var lowerCaseWords = new string[words.Length];
for (int i = 0; i < words.Length; i++)
{
lowerCaseWords[i] = words[i].ToLower();
}
return lowerCaseWords;
}
string[] Declarative(string[] words)
{
return words.Select(word => word.ToLower()).ToArray();
}
Although you will hear about many other functional concepts, such as monads, functors, currying, referential transparency and others, these building blocks should suffice to give you a basic idea of what functional programming is and how it differs from object-oriented programming.
You can implement many functional programming concepts in C#.
Since the language is primarily object-oriented, the defaults don’t always guide you towards such code, but with intent and enough self-discipline, your code can become much more functional.
You most probably are used to writing mutable types in C#, but with very little effort, they can be made immutable:
public class Person
{
public string FirstName { get; private set; }
public string LastName { get; private set; }
public Person(string firstName, string lastName)
{
FirstName = firstName;
LastName = lastName;
}
}
Private property setters make it impossible to assign them a different value after the object has been initially created. For the object to be truly immutable, all the properties must also be of immutable types. Otherwise their values can be changed by mutating the properties, instead of assigning a new value to them.
The Person type above is immutable, because string is also an immutable type, i.e. its value cannot be changed as all its instance methods, return a new string instance. However this is an exception to the rule and most .NET framework classes are mutable.
If you want your type to be immutable, you should not use any built-in types other than the primitive types and strings for its public properties.
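As a hypothetical example of why this matters, a private setter does not make the Team type below immutable, because its property type (List<string>) is itself mutable:

```csharp
using System;
using System.Collections.Generic;

class Team
{
    // The reference cannot be reassigned from the outside,
    // but the list it points to can still be mutated.
    public List<string> Members { get; private set; }

    public Team(List<string> members)
    {
        Members = members;
    }
}

class MutabilityTrap
{
    static void Main()
    {
        var team = new Team(new List<string> { "Ada" });
        team.Members.Add("Grace"); // compiles, and mutates the "immutable" object
        Console.WriteLine(team.Members.Count); // 2
    }
}
```

An immutable collection type, or a defensive read-only wrapper, would close this hole.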
To change a property of the object, e.g. to change the person’s first name, a new object needs to be created:
public static Person Rename(Person person, string firstName)
{
return new Person(firstName, person.LastName);
}
When a type has many properties, writing such functions can become quite tedious. Therefore, it is a good practice for immutable types to implement a With helper function for such scenarios:
public Person With(string firstName = null, string lastName = null)
{
return new Person(firstName ?? this.FirstName, lastName ?? this.LastName);
}
This function creates a copy of the object with any number of properties modified. Our Rename function can now simply call this helper to create the modified person:
public static Person Rename(Person person, string firstName)
{
return person.With(firstName: firstName);
}
The advantages might not be obvious with only two properties, but no matter how many properties the type consists of, this syntax allows us to only list the properties we want to modify as named arguments.
Making functions ‘pure’ requires even more discipline than making objects immutable.
There are no language features available to help the programmer ensure that a particular function is pure. It is your own responsibility to not use any kind of internal or external state, to not cause side effects and to not call any other functions that are not pure.
Of course, there is also nothing stopping you from only using the function arguments and calling other pure functions, thus making the function pure. The Rename function above is an example of a pure function: it does not call any non-pure functions or use any other data than the arguments passed to it.
Multiple functions can be composed into one by defining a new function, which calls all the composed functions in its body (let us ignore the fact that there is no need to ever call Rename multiple times in a row):
public static Person MultiRename(Person person)
{
return Rename(Rename(person, "Jane"), "Jack");
}
The signature of the Rename method forces us to nest the calls, which can become difficult to read and comprehend as the number of function calls increases. If we use the With method instead, our intent becomes clearer:
public static Person MultiRename(Person person)
{
return person.With(firstName: "Jane").With(firstName: "Jack");
}
To make the code even more readable, we can break the chain of calls into multiple lines, keeping it manageable, no matter how many functions we compose into one:
public static Person MultiRename(Person person)
{
return person
.With(firstName: "Jane")
.With(firstName: "Jack");
}
There is no good way to split lines with Rename-like nested calls. Of course, the With method allows the chaining syntax because it is an instance method.
However, in functional programming, functions should be declared separately from the data they act upon, like the Rename function is.
While functional languages have a pipeline operator (|> in F#) to allow chaining of such functions, we can take advantage of extension methods in C# instead:
public static class PersonExtensions
{
public static Person Rename(this Person person, string firstName)
{
return person.With(firstName: firstName);
}
}
This allows us to chain non-instance method calls, the same way as we can instance method calls:
public static Person MultiRename(Person person)
{
return person.Rename("Jane").Rename("Jack");
}
To have a taste of functional programming in C#, you don’t need to write all the objects and functions yourself.
There are some readily available functional APIs in .NET framework for you to utilize.
As we have already mentioned, string and the primitive types are immutable types in the .NET framework.
However, there is also a selection of immutable collection types available. Technically, they are not really a part of the .NET framework, since they are distributed out-of-band as a stand-alone NuGet package System.Collections.Immutable.
On the other hand, they are an integral part of .NET Core, the new open-source cross-platform .NET runtime.
The namespace includes all the commonly used collection types: array, lists, sets, dictionaries, queue and stack.
As the name implies, all of them are immutable, i.e. they cannot be changed after they are created. Instead a new instance is created for every change. This makes the immutable collections completely thread-safe in a different way than the concurrent collections, which are also included in the .NET framework base class library.
With concurrent collections, multiple threads cannot modify the data simultaneously but they still have access to the modifications. With immutable collections, any changes are only visible to the thread that made them, as the original collection remains unmodified.
To keep the collections performant in spite of creating a new instance for every mutable operation, their implementation takes advantage of structural sharing.
This means that in the new modified instance of the collection, the unmodified parts from the previous instance are reused as much as possible, thus requiring less memory allocation and causing less work for the garbage collector.
This common technique in functional programming is made possible by the fact that objects cannot change and can therefore be safely reused.
The biggest difference between using immutable collections and regular collections, is in their creation.
Since a new instance is created on every change, you want to create the collection with all the initial items already in it. As a result, immutable collections don’t have public constructors, but offer three alternative ways of creating them:
- Factory method Create accepts 0 or more items to initialize the collection with:
var list = ImmutableList.Create(1, 2, 3, 4);
- Builder is an efficient mutable collection that can be easily converted to its immutable counterpart:
var builder = ImmutableList.CreateBuilder(); builder.Add(1); builder.AddRange(new[] { 2, 3, 4 }); var list = builder.ToImmutable();
- Extension methods can be used to create immutable collections from an IEnumerable:
var list = new[] { 1, 2, 3, 4 }.ToImmutableList();
Mutable operations of immutable collections are similar to the ones in regular collections, however they all return a new instance of the collection, representing the result of applying the operation to the original instance.
This new instance has to be used thereafter if you don’t want to lose the changes:
var modifiedList = list.Add(5);
After executing the above statement, the value of the list will still be { 1, 2, 3, 4 }. The resulting modifiedList will have the value of { 1, 2, 3, 4, 5 }.
No matter how unusual the immutable collections may seem to a non-functional programmer, they are a very important building block in writing functional code for .NET framework. Creating your own immutable collection types would be a significant effort.
A much better known functional API in .NET framework is LINQ.
Although it has never been advertised as being functional, it manifests many previously introduced functional properties.
If we take a closer look at LINQ extension methods, it quickly becomes obvious that almost all of them are declarative in nature: they allow us to specify what we want to achieve, not how.
var result = persons
.Where(p => p.FirstName == "John")
.Select(p => p.LastName)
.OrderBy(s => s.ToLower())
.ToList();
The above query returns an ordered list of last names of people named John. Instead of providing a detailed sequence of operations to perform, we only described the desired result. The available extension methods are also easy to compose using the chaining syntax.
Although LINQ functions are not acting on immutable types, they are still pure functions, unless abused by passing mutating functions as arguments.
They are implemented to act on IEnumerable collections which is a read-only interface. They don’t modify the items in the collection.
Their result only depends on the input arguments and they don’t create any global side effects, as long as the functions passed as arguments are also pure. In the example we just saw, neither the persons collection, nor any of the items in it will be modified.
Many LINQ functions are higher-order functions: they accept other functions as arguments. In the sample code above, lambda expressions are passed in as function arguments, but they could easily be defined elsewhere and passed in, instead of created inline:
public bool FirstNameIsJohn(Person p)
{
return p.FirstName == "John";
}
public string PersonLastName(Person p)
{
return p.LastName;
}
public string StringToLower(string s)
{
return s.ToLower();
}
var result = persons
.Where(FirstNameIsJohn)
.Select(PersonLastName)
.OrderBy(StringToLower)
.ToList();
When function arguments are as simple as in our case, the code will usually be easier to comprehend with inline lambda expressions instead of separate functions. However, as the implemented logic becomes more complex and reusable, having them defined as standalone functions, starts to make more sense.
Functional programming paradigm certainly has some advantages, which has contributed to its increased popularity recently.
With no shared state, parallelizing and multithreading has become much easier, because we don’t have to deal with synchronization issues and race conditions. Pure functions and immutability can make code easier to comprehend.
Since functions only depend on their explicitly listed arguments, we can more easily recognize when one function requires a result of another function and when the two functions are independent and can therefore run in parallel. Individual pure functions are also easier to unit test, as all the test cases can be covered by passing different input arguments and validating return values. There are no other external dependencies to mock and inspect.
If all of these make you want to try out functional programming for yourself, doing it first in C# might be an easier option than learning a new language at the same time. You can start out slow by utilizing existing functional APIs more and continue by writing your code in a more declarative fashion.
If you see enough benefits to it, you can learn F# and go all in later, when you already become more familiar with the concepts.! | https://www.dotnetcurry.com/csharp/1384/functional-programming-fsharp-for-csharp-developers | CC-MAIN-2022-05 | refinedweb | 2,455 | 52.9 |
On Fri, Aug 27, 2010 at 03:30:39PM -0700, John Stultz wrote:> On Fri, 2010-08-27 at 14:38 +0200, Richard Cochran wrote:> > We have not introduced new PPS interface. We use existing PPS subsystem.> >> that can support it.The PPS subsystem offers no way to disable PPS interrupts. > > > Same for the timestamps and periodic output (ie: and how do they differ> > > from reading or setting a timer on CLOCK_PTP?)> > > > The posix timer calls won't work:> > > > I have a PTP hardware clocks with multiple external timestamp> > channels. Using timer_gettime, how can I specify (or decode) the> > channel of interest to me?> > I guess I'm not following you here. Again, I'm not super familiar with> the hardware involved. Could you clarify a bit more? What do you mean by> external timestamp? Is this what is used on the packet timestamping? No, the packet timestamp occurs in the PHY, MAC, or on the MII bus andis an essential feature to support the PTP.An external timestamp is just a wire going into the clock and is anoptional feature to make the clock more useful. The clock can latchthe current time value when an edge is dectected on the wire. Usingexternal timestamps, you correlate real world events with the absolutetime in the clock.Typically, a clock offers two or more such input channels (wires), buttimer_gettime does not offer a way to differentiate between them, andthus is not suitable.> The posix clock id interface is frustrating because the flat static> enumeration is really limiting.> > I wonder if a dynamic enumeration for posix clocks would be possibly a> way to go?I am perfectly happy with this.> In other words, a driver registers a clock with the system, and the> system reserves a clock_id for it from the designated available pool and> assigns it back. Then userland could query the driver via something like> sysfs to get the clockid for the hardware. > > Would that maybe work? I have now posted a sample implementation of this idea. 
Do you like it?> > The sysfs will include one class device for each PTP clock. Each clock> > has a sysfs attribute with the corresponding clock id.> > Do you mean the clock_id # in the posix clocks namespace?Yes.> > I would also be happy with the character device idea already> > posted. Just pick one of the two, and I'll resubmit the patch set...> > Personally, with regard to exposing the ptp clock to userland I'm more> comfortable via the chardev. However, the posix clocks/timer api is> really close to what you need, so I'm not totally set against it. And> maybe the dynamic enumeration would resolve my main problems with it?Okay, I have posted a draft of the dynamic idea. Can you support it?> That said, with the chardev method, I don't like the idea of duplicating> the existing time apis via a systemtime device. Dropping that from your> earlier patch would ease the majority of my objections.Well, the clock interface needs to offer basic services:1. Set time2. Get time3. Jump offset4. Adjust frequencyThis is similar to what the posix clock and ntp API offer. Using achardev, should I make the ioctls really different, just for thepurpose of being different?To me, it makes more sense to offer a familiar interface.I was perfectly happy with the chardev idea. In fact, that is the wayI first implemented it. Now, I have also gone ahead and implementedthe dynamic posix clock idea, too.> > At this point I would just like to go forward with one of the two> > proposed APIs. I had modelled the character device on the posix clock> > calls in order to make it immediately familar, and I think it is a> > viable approach. After the lkml discussion, I think it is even cleaner> > and nicer to just offer a new clock id.I would like to repeat the sentiment in this last paragraph! I alreadyimplemented and would be content with either form for the new clockcontrol API:1. Character device2. 
POSIX clock with dynamic idsPlease, just take your pick ;^)Thanks,Richard | https://lkml.org/lkml/2010/9/6/31 | CC-MAIN-2015-27 | refinedweb | 683 | 75.1 |
What’s the optimal layout for your Django applications, settings files, and various other associated directories?
When Django 1.4 was released it included an updated project layout which went a long way to improving the default Django project’s layout, but here are some tips for making it even better.
This is a question we get asked all of the time so I wanted to take a bit of time and write down exactly how we feel about this subject so we can easily refer clients to this document. Note that this was written using Django version 1.7.1, but can be applied to any Django version after 1.4 easily.
Why this layout is better
The project layout we’re recommending here has several advantages namely:
- Allows you to pick up, repackage, and reuse individual Django applications for use in other projects. Often it isn’t clear as you are building an app whether or not it is even a candidate for reuse. Building it this way from the start makes it much easier if that time comes.
- Encourages designing applications for reuse
- Environment specific settings. No more
if DEBUG==Truenonsense in a single monolithic settings file. This allows to easily see which settings are shared and what is overridden on a per environment basis.
- Environment specific PIP requirements
- Project level templates and static files that can, if necessary, override app level defaults.
- Small more specific test files which are easier to read and understand.
Assuming you have two apps blog and users and 2 environments dev and prod your project layout should be structured like this:
myproject/ manage.py myproject/ __init__.py urls.py wsgi.py settings/ __init__.py base.py dev.py prod.py blog/ __init__.py models.py managers.py views.py urls.py templates/ blog/ base.html list.html detail.html static/ … tests/ __init__.py test_models.py test_managers.py test_views.py users/ __init__.py models.py views.py urls.py templates/ users/ base.html list.html detail.html static/ … tests/ __init__.py test_models.py test_views.py static/ css/ … js/ … templates/ base.html index.html requirements/ base.txt dev.txt test.txt prod.txt
The rest of this article explains how to move a project to this layout and why this layout is better.
Current Default Layout
We’re going to call our example project foo , yes I realize it’s a very creative name. We’re assuming here that we’re going to be launching foo.com but while we like to have our project names reflect the ultimate domain(s) the project will live on this isn’t by any means required.
If you kick off your project using
django-admin.py startproject foo you get a directory structure like this:
foo/ manage.py foo/ __init__.py settings.py urls.py wsgi.py
This layout is a great starting place, we have a top level directory foo which contains our manage.py the project directory foo/foo/ inside it. This is the directory you would check into your source control system such as git.
You should think of this foo/foo/ subdirectory as being the project where everything else is either a Django application or ancillary files related to the project.
Fixing Settings
We’re on a mission to fix your bad settings files here. We show this layout to new clients and I’m constantly surprised how few people know this is even possible to do. I blame the fact that while everyone knows that settings are just Python code, they don’t think about them as Python code.
So let’s fix up our settings. For our foo project we’re going to have 4 environments: dev, stage, jenkins, and production. So let’s give each it’s own file. The process to do this is:
- In foo/foo/ make a settings directory and create an empty
__init__.pyfile inside it.
- Move foo/foo/settings.py into foo/foo/settings/base.py
- Create the individual dev.py , stage.py , jenkins.py , and prod.py files in foo/foo/settings/. Each of these 4 environment specific files should simply contain the following:
from base import *
So why is this important? Well for local development you want
DEBUG=True, but it’s pretty easy to accidentally push out production code with it on, so just open up foo/foo/settings/prod.py and after the initial import from base just add
DEBUG=False. Now if your production site is safe from that silly mistake.
What else can you customize? Well it should be pretty obvious you’ll likely have staging, jenkins, and production all pointing at different databases, likely even on different hosts. So adjust those settings in each environment file.
Using these settings
Using these settings is easy, no matter which method you typically use. To use the OS’s environment you just do:
export DJANGO_SETTINGS_MODULE=“foo.settings.jenkins”
And boom, you’re now using the jenkins configuration.
Or maybe you prefer to pass them in as a commandline option like this:
./manage.py migrate —settings=foo.settings.production
Same if you’re using gunicorn:
gunicorn -w 4 -b 127.0.0.1:8001 —settings=foo.settings.dev
What else should be customized about settings?
Another useful tip with Django settings is to change several of the default settings collections from being tuples to being lists. For example
INSTALLED_APPS, by changing it from:
INSTALLED_APPS = ( … )
to:
INSTALLED_APPS = [ … ] prerequisites and another for your actual project applications. So like this:
PREREQ_APPS = [ ‘django.contrib.auth’, ‘django.contrib.contenttypes’, … ‘debug_toolbar’, ‘imagekit’, ‘haystack’, ] PROJECT_APPS = [ ‘homepage’, ‘users’, ‘blog’, ] INSTALLED_APPS = PREREQ_APPS + PROJECT_APPS
Why is this useful? For one it helps better distinguish between Django core apps, third party apps, and your own internal project specific applications. However,
PROJECT_APPS often comes in handy as a list of your specific apps for things like testing and code coverage. You have a list of your apps, so you can easily and automagically make sure their tests are run and coverage is recorded just for them, not including any third party apps, without having to maintain the list in two separate places.
Fixing requirements
Most projects have a single
requirements.txt file that is installed like this:
pip install -r requirements.txt
This is sufficient for small simple projects, but a little known feature of requirements files is that you can use the
-r flag to include other files. So we can have a base.txt of all the common requirements and then if we need to be able to run tests have a specific requirements/test.txt that looks like this:
-r base.txt pytest==2.5.2 coverage==3.7.1
I’ll admit this is not a HUGE benefit, but it does help separate out what is a requirement in which environment. And for the truly performance conscience it reduces your pip install time in production a touch by not installing a bunch of things that won’t actually be used in production.
Test Files
Why did we separate out the tests files so much? One main reason, if you’re writing enough tests a single tests.py file per application will end up being one huge honking file. This is bad for readability, but also just for the simple fact you have to spend time scrolling around a lot in your editor.
You’ll also end up with less merge conflicts when working with other developers which is a nice side benefit. Small files are your friends.
URLs
For small projects it’s tempting to put all of your url definitions in foo/urls.py to keep them all in one place. However, if your goal is clarity and reusability you want to define your urls in each app and include them into your main project. So instead of:
urlpatterns = patterns(‘’, url(r’^$’, HomePageView.as_view(), name=‘home’), url(r’^blog/$’, BlogList.as_view(), name=‘blog_list’), url(r’^blog/(?P<pk>\d+)/$’, BlogDetail.as_view(), name=‘blog_detail’), … url(r’^user/list/$’, UserList.as_view(), name=‘user_list’), url(r’^user/(?P<username>\w+)/$’, UserDetail.as_view(), name=‘user_detail’), )
you should do this:
urlpatterns = patterns(‘’, url(r’^$’, HomePageView.as_view(), name=‘home’), url(r’^blog/‘, include(‘blog.urls’)), url(r’^user/‘, include(‘user.urls’)), )
Templates and static media
Having per app templates/ and static/ directories gives us the ability to reuse an application basically as is in another project.
We get the default templates the app provides and any associated static media like special Javascript for that one cool feature all in one package.
However, it also gives us the ability to override those templates on a per project basis in the main foo/templates/ directory. By adding a
templates/blog/detail.html template we override, or mask, the default
blog/templates/blog/detail.html template.
Reusing a Django application
So assuming you’ve been using this layout for awhile, one day you’ll realize that your new project needs a blog and the one from your foo project would be perfect for it. So you copy and paste the files in… pssst WRONG!. Now you have two copies of the application out there. Bug fixes or new features in one have to manually be moved between the projects, and that assumes you even remember to do that.
Instead, make a new repo for your blog and put the foo/blog/ directory in it. And adjust both your existing foo project and your new project to pip install it.
They can still both track different versions of the app, if necessary, or keep up to date and get all of your bug fixes and new features as they develop. You still can override the templates and static media as you need to on a per project basis, so there really isn’t any real issues doing this.
Additional Resources
Our friends Danny and Audrey over at CartWheel Web reminded us about Cookie Cutter and specifically Danny's cookiecutter-django as useful tools for making your initial project creations easier and repeatable.
Also, if you're looking for all around great Django tips and best practices, you can't go wrong with their book Two Scoops of Django: Best Practices For Django 1.6
which we recommend to all of our clients.
Feedback
We hope you find this improved project layout useful. If you find any bugs, have a suggestion, or just want to chat feel free to reach out to us. Thanks for reading!
programming django Featured Posts | https://www.revsys.com/tidbits/recommended-django-project-layout/ | CC-MAIN-2020-10 | refinedweb | 1,741 | 66.13 |
kdeui
KCModule Class ReferenceThe base class for control center modules. More...
#include <kcmodule.h>
Detailed DescriptionThe base class for control center modules.
Starting from KDE 2.0, control center modules are realized as shared libraries that are loaded into the control center at runtime. at one factory function like this:
#include <kgenericfactory.h> typedef KGenericFactory<YourKCModule, QWidget> YourKCModuleFactory; K_EXPORT_COMPONENT_FACTORY( yourLibName, YourKCModuleFactory("name_of_the_po_file") );
The parameter "name_of_the_po_file" has to correspond with the messages target that you created in your Makefile.am.
See for more detailed documentation.
Definition at line 69 of file kcmodule.h.
Member Enumeration Documentation
An enumeration type for the buttons used by this module.
You should only use Help, Default and Apply. The rest is obsolete.
- See also:
- KCModule::buttons
- Enumerator:
-
Definition at line 81 of file kcmodule.h.
Constructor & Destructor Documentation
Definition at line 62 of file kcmodule.cpp.
Definition at line 77 of file kcmodule.cpp.
Definition at line 106 of file kcmodule.cpp.
Member Function Documentation
This is generally only called for the KBugReport.
If you override you should have it return a pointer to a constant.
- Returns:
- the KAboutData for this module
Definition at line 159 of file kcmodule.cpp.
Adds a KConfigskeleton
config to watch the widget
widget.
This function is useful if you need to handle multiple configuration files.
- Since:
- 3.3
- Returns:
- a pointer to the KConfigDialogManager in use
- Parameters:
-
Definition at line 98 of file kcmodule.cpp.
Indicate which buttons will be used.
The return value is a value or'ed together from the Button enumeration type.
- See also:
- KCModule::setButtons
Definition at line 203 of file kcmodule.h.
Calling this slot is equivalent to emitting changed(true).
- Since:
- 3.3
Definition at line 190 of file kcmodule.cpp.
Indicate that the state of the modules contents has changed.
This signal is emitted whenever the state of the configuration shown in the module changes. It allows the control center to keep track of unsaved changes.
- Returns:
- a list of KConfigDialogManager's in use, if any.
- Since:
- 3.4
Definition at line 212 129 of file kcmodule.cpp.
Definition at line 195.
If you use KConfigXT, loading is taken care of automatically and you do not need to do it manually. However, if you for some reason reimplement it and also are using KConfigXT, you must call this function otherwise the loading of KConfigXT options will not work.
Definition at line 114 of file kcmodule.cpp.
Returns the changed state of automatically managed widgets in this dialog.
- Since:
- 3.5
Definition at line 141 206Msg
Definition at line 175 121 of file kcmodule.cpp.
This sets the KAboutData returned by aboutData().
- Since:
- 3.3
Definition at line 164 309 of file kcmodule.h.
Sets the quick help.
- Since:
- 3.3
Definition at line 200 of file kcmodule.cpp.
Sets the RootOnly message.
This message will be shown at the top of the module of the corresponding desktop file contains the line X-KDE-RootOnly=true. If no message is set, a default one will be used.
- See also:
- KCModule::rootOnlyMsg
Definition at line 170 of file kcmodule.cpp.
Change whether or not the RootOnly message should be shown.
Following the value of
on, the RootOnly message will be shown or not.
- See also:
- KCModule::useRootOnlyMsg
Definition at line 180 of file kcmodule.cpp.
Set the configuration to system default values.
This method is called when the user clicks the "System-Default" button. It should set the display to the system default values.
- Note:
- The default behavior is to call defaults().
Definition at line 166 of file kcmodule.h.
Call this method when your manually managed widgets change state between changed and not changed.
- Since:
- 3.5
Definition at line 153Msg
Definition at line 185 of file kcmodule.cpp.
Definition at line 217 of file kcmodule.cpp.
A managed widget was changed, the widget settings and the current settings are compared and a corresponding changed() signal is emitted.
- Since:
- 3.4
Definition at line 136 of file kcmodule.cpp.
The documentation for this class was generated from the following files: | http://api.kde.org/3.5-api/kdelibs-apidocs/kdeui/html/classKCModule.html | crawl-002 | refinedweb | 677 | 52.15 |
Factor analysis is a dimensionality reduction technique commonly used in statistics. FA is similar to principal component analysis. The difference are highly technical but include the fact the FA does not have an orthogonal decomposition and FA assumes that there are latent variables and that are influencing the observed variables in the model. For FA the goal is the explanation of the covariance among the observed variables present.
Our purpose here will be to use the BioChemist dataset from the pydataset module and perform a FA that creates two components. This dataset has data on the people completing PhDs and their mentors. We will also create a visual of our two-factor solution. Below is some initial code.
import pandas as pd from pydataset import data from sklearn.decomposition import FactorAnalysis import matplotlib.pyplot as plt
We now need to prepare the dataset. The code is below
df = data('bioChemists') df=df.iloc[1:250] X=df[['art','kid5','phd','ment']]
In the code above, we did the following
- The first line creates our dataframe called “df” and is made up of the dataset bioChemist
- The second line reduces the df to 250 rows. This is done for the visual that we will make. To take the whole dataset and graph it would make a giant blob of color that would be hard to interpret.
- The last line pulls the variables we want to use for our analysis. The meaning of these variables can be found by typing data(“bioChemists”,show_doc=True)
In the code below we need to set the number of factors we want and then run the model.
fact_2c=FactorAnalysis(n_components=2) X_factor=fact_2c.fit_transform(X)
The first line tells Python how many factors we want. The second line takes this information along with or revised dataset X to create the actual factors that we want. We can now make our visualization
To make the visualization requires several steps. We want to identify how well the two components separate students who are married from students who are not married. First, we need to make a dictionary that can be used to convert the single or married status to a number. Below is the code.
thisdict = { "Single": "1", "Married": "2",}
Now we are ready to make our plot. The code is below. Pay close attention to the ‘c’ argument as it uses our dictionary.
plt.scatter(X_factor[:,0],X_factor[:,1],c=df.mar.map(thisdict),alpha=.8,edgecolors='none')
You can perhaps tell why we created the dictionary now. By mapping the dictionary to the mar variable it automatically changed every single and married entry in the df dataset to a 1 or 2. The c argument needs a number in order to set a color and this is what the dictionary was able to supply it with.
You can see that two factors do not do a good job of separating the people by their marital status. Additional factors may be useful but after two factors it becomes impossible to visualize them.
Conclusion
This post provided an example of factor analysis in Python. Here the focus was primarily on visualization but there are so many other ways in which factor analysis can be deployed.
Thank you | https://educationalresearchtechniques.com/2018/10/24/factor-analysis-in-python/ | CC-MAIN-2020-10 | refinedweb | 539 | 64.51 |
Otto 1.2
WSGI-compliant HTTP publisher.
Overview
Otto is an HTTP publisher which uses a routes-like syntax to map URLs to code. It supports object traversal.
You can use the publisher to write web applications. It was designed with both small and large applications in mind. We have tried to incorporate elements of existing publishers to allow diverse and flexible application patterns while still being in concordance with the Zen Of Python.
Here's a variation of a familiar theme:
import otto import webob import wsgiref.simple_server app = otto.Application() @app.connect("/*path/:name") def hello_world(request, path=None, name=u'world'): return webob.Response(u"An %d-deep hello %s!" % (len(path), name)) wsgiref.simple_server.make_server('', 8080, app).serve_forever()
This release is compatible with Python 2.4+.
See the documentation for this release.
Changes
1.2 (2009-11-16)
Features
- Route matches that come before object mapping are passed on to the mapper on instantiation; these matches are then not passed to the controller.
Backwards incompatibilities
- The object mapper takes the place of the traverser; on instantiation it gets the part of the match dictionary that comes before the asterisk.
- The empty asterisk is now mapped to the empty string. This does not change the high-level interface.
1.1 (2009-11-12)
Features
- The leading slash is now optional in a route path definition.
- The Route class now provides the match method.
Backwards incompatibilities
- The Publisher.route method was renamed to connect. This method now takes a route object. This change was also applied for the Router class.
1.0 (2009-11-12)
- Initial public release.
- Author: Malthe Borch
- Keywords: wsgi publisher router
- License: BSD
- Categories
- Package Index Owner: malthe
- DOAP record: Otto-1.2.xml | http://pypi.python.org/pypi/Otto/1.2 | crawl-003 | refinedweb | 290 | 53.07 |
j WHEN WILL repaint() of CustomItem will call automatically
J2ME
J2ME how to create table using four field in RMS - MobileApplications
J2me Hi, I would like to know how to send orders linux to a servlet which renvoit answers to a midlet. thank you
;|
J2ME Servlet...
Map | Business Software
Services India
J2ME Tutorial Section
Java
Platform Micro Edition |
MIDlet Lifecycle J2ME
|
jad and properties file
J2ME code - MobileApplications
...
user enter name and pwd to J2ME code...j2me cl the servlet..and servlet...J2ME code hi...
i'm facing problem while connecting J2ME code to servlet..
i want to know how to request servlet and how to get the response from
j2me application
j2me application code for mobile tracking system using j2me app with servlets
J2me app with servlets Can we send and receive message from our servlet website to mobile? if yes,then how..
without using any router..code plz??
Please visit the following link:
j2me solution - MobileApplications
j2me solution Hi friends,
In one of my mobile application i am... those values in mysql database via a servlet.
i am using double lat... =87.555657,when i tried display this value in servlet,it is showing 1.506276030234...E-308
j2me pgrm run
j2me pgrm run How to run a j2me program
Please visit the following link:
J2ME Tutorials
How to access (MySQL)database from J2ME?
How to access (MySQL)database from J2ME? I am new to J2ME. I am using NetBeans.
Can anyone help me?
How to access (MySQL)database from J2ME?
( I search a lot I found that there is need to access database through servlet
how to connect j2me program with mysql using servlet?
how to connect j2me program with mysql using servlet? my program of j2me
import java.io.*;
import java.util.*;
import javax.microedition.midlet.... the response from the servlet page.
DataInputStream - MobileApplications
j2me code how to write a j2me calendar using alert form on the special day Hi Friend,
Please visit the following link:
Hope that it will be helpful for you
j2me mysql connectivity
j2me mysql connectivity I do a project on reservation using j2me. I need a connectivity to a MYSQL database.
How do I do this
Thanks and regards
Karthiga
Emulator for j2me in eclipse
Emulator for j2me in eclipse I want to run J2me application in eclipse. For that i need an emulator..but i can not get it any how. | http://roseindia.net/tutorialhelp/comment/85653 | CC-MAIN-2014-10 | refinedweb | 401 | 75.1 |
Input Device: Keyboard, mouse. Allows you to input data and translate to machine language. Output Device: Printer, Monitor. Allows the output of information and translate to human language. Storage Device: Flash Drive, etc. All hard disks have tracks and sectors where information is stored. When something is formatted all content is erased, tracks and sectors are remade, and a new file location table is made. The root directory is where you start from, such as F:\. Program (Software): A set of instructions to tell a computer to perform a particular task. Data: Unprocessed items. Like raw chicken that is uncooked (doesnt mean you cant eat it). Data Types: There are different types of data such. You need to know the different types so you don't do something dumb like add two names (roboraiderrobotix! Raiderspamsquad!) One type of data would be a bool, which can be true or false. Information: generated from data. Information and data are two different things! Categories of Software: System software (OS), Application software (Word, Excel, Autocad blah blah HYDROLAZERZ), Programming language packages. Operating System: Manages hardware, applications+programming language packages, and users. With hardware, things such as a printer getting backed up and alerting the user to it or saying NOT SUFFICIENT MEMORY when a flash drive is full. Users are managed through user interfaces (DOS; GUI: Graphical User Interface; Dialogue Boxes; Check boxes, etc.). Processing: Equation or a formula. JPOS Cycle: Input Processing Output Storage Cycle. Analog/Digital Signal: Analog signals are made of continuous waves. Sound is analog (hey didnt we do this in physics?). Digital is on or off only - but precise. Algorithm: a sequence of instructions to perform a specific task is a program so this is the equation and the procedure. Languages: Machine Language: Binary, duh. Assembly Language: Includes using symbols like + for addition instead of binary. 
An Assembler translates the symbols into the machine language. High-Level Language: C++, C#, Java, etc. You can write words and not suffer through binary.
C++ Programming: Text editor: lets you type the code. Compiler: will translate the high level language to machine language robot. Can check for errors. Source code: programming language instructions written in a text editor. After writing the code you compile it to check for syntax errors. The compiler will list the type of error and the line number. Fix any errors, recompile, and it will be translated to machine language to become object code. Simple program:
#include <iostream> //a preprocessor directive using namespace std; //This is a statement. Statements end in a semicolon. int main () //main() is a function { cout << "Hello World!"; return 0; }
Key words in c++ must be written in lowercase! Preprocessor directives: these all start with the pound (#) sign. For example, #include <iostream> is a preprocessor directive that has all the input output stuff. These are written above main (). *There is only one main ()! Comments: use // to comment out one line of text or code Use /*To comment out block of text or code*/ Comments are written to describe what is going on. Always write the objective of your program at the top. int: an integer, which is a type of variable. Input/Output: << >> cin cout output operator input operator character in. For when you want some information to be put in. character out. For the information you want to display.
If you misspelled cout or something that would be a syntax error. When < and > are written they are called angled brackets. Escape sequences: /n newline *you can also use <<endl; instead of /n /t tab | https://ru.scribd.com/document/160106274/MCC-CSC-133-Notes-7-9-12 | CC-MAIN-2019-35 | refinedweb | 591 | 60.61 |
22 February 2012 17:46 [Source: ICIS news]
LONDON (ICIS)--?xml:namespace>
December sales in housing and construction rose by 24.6% year on year, to €9.4bn ($12.5bn), the Statistisches Bundesamt said.
For the full 12 months of 2011, housing and construction sector sales rose by 12.5% year on year to €93.4bn.
In a separate update report, the country’s central bank warned of a possible bubble in parts of the market.
The Bundesbank said that
In towns with populations of more than 500,000 people, prices for town houses and apartments rose 7% year on year in 2011, compared with 3.25% in 2010 from 2009.
The Bundesbank warned that with
Investors needed to take a close look at this risk, the bank added.
As in the
Analysts have repeatedly noted that because the eurozone sovereign debt crisis, as well as persistently low interest rates and potential inflation risks, German investors are putting more and more money into housing.
Germany’s housing pricing had been relatively subdued over the past 10 to15 years, compared with countries such as the US, the UK or Spain where prices soared, followed by a sharp correction in the wake of the 2008/2009 global economic and financial | http://www.icis.com/Articles/2012/02/22/9535031/germany-construction-sector-soars-but-central-bank-warns-investors.html | CC-MAIN-2014-52 | refinedweb | 208 | 71.85 |
This is your resource to discuss support topics with your peers, and learn from each other.
05-28-2011 03:39 PM
Is the AuthorId required for Signed apps ?
If not, do I leave it as <authorId></authorId> ?
This is in the blackberry-tablet.xml
05-28-2011 03:47 PM
hey DachFlach,
if your app is signed, the authorId tags are not required in the blackberry-tablet.xml file as the signed app would have that detail already in it via the signing process. so you can go ahead and omit it. the authorId is only required when you are doing debugging on the device using the tokens. hope that helps. good luck!
05-28-2011 03:51 PM
Thanks for the quick reply.
I am using this advice to create a blackberry-tablet.xml
When I go to run the example provided there, I get a namespace missing error.
I have added
<?xml version="1.0" encoding="utf-8" standalone="no"?><application xmlns="">
to the top but now get another error: Structure must start and end within the same entity
Here is a sample of what I got:
<?xml version="1.0" encoding="utf-8" standalone="no"?><application xmlns=""><qnx> <icon> <image>icon.png</image> </icon> <author>AuthorName</author> <category>core.games</category> <splashscreen>splash_landscape.jpg:splash_portrait
I eliminated authorId and used a dummy authorName for this post.
Do you have a working sample?
05-28-2011 03:57 PM
hey,
this is what my blackberry-tablet.xml looks like for one of my apps:
<qnx> <initialWindow> <systemChrome>none</systemChrome> <transparent>false</transparent> </initialWindow> <author>Joynal Rab</author> <category>core.prod</category> <icon> <image>ontrack_icon.png</image> </icon> <action>access_internet</action> <action>read_device_identifying_information</actio
n> </qnx>n> </qnx>
also one other note -- make sure there are no spaces or blank lines before the first <qnx> tag -- it has to be the absolute first thing in that file. good luck!
05-28-2011 04:10 PM
I get namespace is missing with that example.
I even tried using yours Exactly and got the same.
Command line right?
Here is what I use:
blackberry-airpackager -package app.bar blackberry-tablet.xml app.swf
05-28-2011 04:15 PM
05-28-2011 04:58 PM
Ok, progress.
I was able to package the bar.
Had it signed by the signing authority.
Signed it again using code signing certificate ( is this really required?)
But am back to getting Authentication Failed.There have been 3 of 5 failed attempts.
What is the best command line to install it in the simulator?
I have seen many suggestions.
Is the following enough to install it in the sim:
blackberry-airpackager app.bar -installApp -launchApp -device 000.000.000.000
ip is different.
05-28-2011 05:09 PM
05-28-2011 05:16 PM
Excellent. Success!
Thanks to both of you.
One more off topic but you guys are so on the ball.
What is the stage size required?
1024x600 or 1024x768
I have seen both recommended.
05-28-2011 06:30 PM
The screen resolution of the PlayBook is 1024x600 | https://supportforums.blackberry.com/t5/Adobe-AIR-Development/AuthorId-required-for-Signed-apps/m-p/1115931/highlight/true | CC-MAIN-2016-40 | refinedweb | 517 | 70.29 |
Given two positive integer n and k. n can be represented as the sum of 1s and 2s in many ways, using multiple numbers of terms. The task is to find the minimum number of terms of 1s and 2s use to make the sum n and also number of terms must be multiple of k. Print “-1”, if no such number of terms exists.
Examples:
Input : n = 10, k = 2 Output : 6 10 can be represented as 2 + 2 + 2 + 2 + 1 + 1. Number of terms used are 6 which is multiple of 2. Input : n = 11, k = 4 Output : 8 10 can be represented as 2 + 2 + 2 + 1 + 1 + 1 + 1 + 1 Number of terms used are 8 which is multiple of 4.
Observe, the maximum number of terms used to represent n as the sum of 1s and 2s is n, when 1 are added n number of times. Also, the minimum number of terms will be n/2 times of 2s and n%2 times 1s are added. So, iterate from minimum number of terms to maximum number of terms and check if there is any multiple of k.
// C++ program to find minimum multiple of k // terms used to make sum n using 1s and 2s. #include<bits/stdc++.h> using namespace std; // Return minimum multiple of k terms used to // make sum n using 1s and 2s. int minMultipleK(int n, int k) { // Minimum number of terms required to make // sum n using 1s and 2s. int min = (n / 2) + (n % 2); // Iterate from Minimum to maximum to find // multiple of k. Maximum number of terns is // n (Sum of all 1s) for (int i = min; i <= n; i++) if (i % k == 0) return i; return -1; } // Driven Program int main() { int n = 10, k = 2; cout << minMultipleK(n, k) << endl; return 0; }
Output:. | http://www.geeksforgeeks.org/make-n-using-1s-2s-minimum-number-terms-multiple-k/ | CC-MAIN-2017-13 | refinedweb | 311 | 76.25 |
Glossary Item Box
This topic is describes how to use the Reverse Mapping Wizard in a task-oriented approach.
The Reverse Mapping Wizard reads a schema from the database and helps you to map tables from this schema to classes. You can generate the source code in either VB or C#.
During the task the following operations will be demonstrated:
Prerequisites:
This walkthrough uses the Northwind database on SQL Server 2005, to demonstrate the above-mentioned operations. To complete this walkthrough you will need access to the Northwind database on any of the OpenAccess supported database servers.
Preparing the project to use OpenAccess
From the list of Visual C# projects, select the Class Library project template, type NorthwindMapping in the Name box and then click OK.The project name, NorthwindMapping, is also assigned to the root namespace by default. The root namespace is used to qualify the names of classes in the assembly. This means that all classes generated by the Reverse Engineering wizard will use this namespace by default. You can change the namespace for individual or all the classes using the wizard.
Complete the Enable Project wizard by specifying Northwind as the database and appropriate Server Name. Accept the default values for rest of the options.
Reading the Schema and adjusting the table and field mapping
The Reverse Engineering wizard uses the information specified in the Enable Project wizard to retrieve schema information.
Expand the class node that the reference will represent. For e.g. if you need to create a 1:n collection of orderDetails in the Order class, expand the "Order Details" node and select the "order" child-node. Then check the Create one-to-many list check box and type the desired name of the collection in the Inverse Field Name textbox(or use the default value).This will create an orderDetails collection reference in the Order class.
Select the table that is to be treated as a ‘join table’. For e.g., In the Northwind database, the EmployeeTerritories is a join table. In the "Simple View" tab you can notice that "EmployeeTerritories" is mapped as a collection unlike other tables that ar mapped as classes. That means that a one-to-many collection of type "Territory" to the Employee class will be generated.
In the "Advanced View" tab select the EmployeeTerritories tree node and check the Many-to-many check box. That will create a m:n relation between the Employee and Territory classes. That means that besides the Employee having a collection of the related "Territory" objects, but a collection of type "Employee" will also be generated as part of the Territory class.
In the "Simple View" tab select the rows for the classes you want to change the namespace for. Then just type the name of the new namespace in the Change Namespace For The Selected Classes textbox.
Generating the source code
When finished with defining the various settings in the Reverse Mapping Wizard just click "Generate & Save Config" and OpenAccess will create the appropriate classes and fields and also set all the required configuration settings in the App.config and reversemapping.config files. | https://docs.telerik.com/help/openaccess-classic/openaccess-tasks-howto-reverse-mapping-wizard.html | CC-MAIN-2020-24 | refinedweb | 522 | 53.21 |
This Developer Guide contains Java code snippets and ready-to-run programs. You can find these code samples in the following sections:
Note
The Amazon DynamoDB Getting Started Guide contains additional Java sample programs.
You can get started quickly by using Eclipse with the AWS Toolkit for Eclipse. In addition to a full-featured IDE, you also get the AWS SDK for Java with automatic updates, and preconfigured templates for building AWS applications.
To Run the Java Code Samples (using Eclipse)
Download and install the Eclipse IDE.
Download and install the AWS Toolkit for Eclipse.
Start Eclipse and from the Eclipse menu, choose File, New, and then Other.
In Select a wizard, choose AWS, choose AWS Java Project, and then chooseNext.
In Create an AWS Java, do the following:
In Project name, type, type a name for your class in Name (use the same name as the code sample that you want to run), and then choose Finish to create the class.
Copy the code sample from the documentation page you are reading into the Eclipse editor.
To run the code, choose Run in the Eclipse menu.
The SDK for Java provides thread-safe clients for working with DynamoDB. As a best practice, your applications should create one client and reuse the client between threads.
For more information, see the AWS SDK for Java.
Note
The code samples in this Developer samples in this Developer samples access DynamoDB in the US West (Oregon) region.
You can change the region by modifying the
AmazonDynamoDBClient
properties.
The following code snippet instantiates a new
AmazonDynamoDBClient.
The client is then modified so that the code will run against DynamoDB in a different
region.
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient; import com.amazonaws.regions.Regions; ... // This client will default to US West (Oregon) AmazonDynamoDBClient client = new AmazonDynamoDBClient(); // Modify the client so that it accesses a different region. client.withRegion(Regions.US_EAST_1);
You can use the
withRegion method to run your code against Amazon DynamoDB
in any region where it is available. For a complete list, see AWS Regions and Endpoints in the
Amazon Web Services General Reference.
If you want to run the code samples using DynamoDB locally on your computer, you need to set the endpoint, as shown following:
AmazonDynamoDBClient client = new AmazonDynamoDBClient(); // Set the endpoint URL client.withEndpoint(""); | http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/CodeSamples.Java.html | CC-MAIN-2016-50 | refinedweb | 385 | 55.13 |
()
Ashish Jaiman)
TimothyA Vanover(1)
Shripad Kulkarni(1)
Resources
No resource found.
Creating#.
A Simple Calculator Class
Nov 28, 2000.
Code sample shows how to create a simple calculator class and call it from Main.
View Relational Data in a DataGrid
Mar 15, 2001.
This article explains you how easy is reading a database and displaying in a grid using DataSet.
Write XML in C#
Mar 27, 2001.
In this article, I will show you how to use XmlTextWriter class to create an XML document and write data to the document. II
Apr 16, 2001.
In this part, I will discuss both Private and Shared assemblies and how to create a "Shared Assembly"..
Using .NET Framework Multithreading and GDI+ to Enrich the user experience
May 04, 2001.
This tutorial shows you how to create, send, and received messages using MSMQ from the .NET base class library (System.Messaging) and C#.
Digital Watch In C#
Jun 04, 2001.
GDI+ sample example shows you how to create a Digital Watch..
Simplest way to Create and Deploy Web Services
Aug 20, 2001.
This article explains how to create and deploy a simple Web Service.#.
Web Browser in C# and VB.NET
Sep 11, 2001.
This article explains how to add and web browser ActiveX to your project to developer your own customized web browser.
Using Attributes in C#
Sep 14, 2001.
This article shows how to create custom attribute classes, use them in code, and query them.. more.
Hangman Program Using C#
Jan 19, 2002.
Hangman was created to illustrate several features of C# including GDI+, string manipulation, array processing, using properties, as well as simple creation of objects.
Creating an Excel Spreadsheet Programmatically
Jan 22, 2002.
The Interoperability services make it very easy to work with COM Capable Applications such as Word and Excel. This article reveals using Excel from a managed application. Excel is the spreadsheet component of Microsoft Office 2000.
Drag and Drop for Board Games
Jan 22, 2002.
This application shows how the drag and drop features in C# could be used to create a simple board game or whatever....
N-Tier Development with Microsoft .NET : Part III
Feb 25, 2002.
The last installment in this series detailed more on the middle tier – business – façade and how to create a Web Service Export Proxy to have a physically separated middle tier.
Sorting Object Using IComparer and IComparable Interfaces
Mar 01, 2002.
The System.Collections namespace contains interfaces and classes that define various...
Authenticate Web Service Client - Windows Application
Mar 04, 2002.
This client application shows you how to access Authenticate Web service from a Windows.
About how-to-create-captcha. | http://www.c-sharpcorner.com/tags/how-to-create-captcha | CC-MAIN-2016-36 | refinedweb | 441 | 59.6 |
Running HealthShare XSLTs from Terminal
HealthShare uses a lot of XSLTs. These are used to convert IHE medical documents to SDA (internal HealthShare format) and back to IHE formats, to create summary reports, and to deal with IHE profiles (e.g., patient information query, document provide and register). Customers may customize the XSLTs to customize reports or for other reasons.
For debugging and development, it is very convenient to be able to run an XSLT from Terminal.
The Class Method
Following is a class that contains a class method that lets you do this on Windows. Create the class in HSREGISTRY or another HealthShare namespace (don't use HSLIB or VIEWERLIB) and compile it.
Class Local.XsltTransformer Extends %RegisteredObject { ClassMethod Transform(XslDirectory As %String, XslBaseFilename As %String, Directory As %String, InputFilename As %String, OutputFilename As %String, byref Parameters = "") { // Run the XSLT transform with the base filename XslBaseFilename (i.e., without the .xsl // extension) that is in the XslDirectory. Run it on the input file with name InputFilename // and put the output in the file with name OutputFilename. The input file must be in the // directory Directory, and the output will be put in the same directory. The Parameters // argument may be used to pass parameters to the transform (rarely needed). This class // method should be run from Terminal in a HealthShare namespace other than HSLIB or // VIEWERLIB. The method will write out the path and name of the transform and any error // messages. set In = ##class(%Stream.FileCharacter).%New() set In.Filename = Directory _ "\" _ InputFilename set Out = ##class(%Stream.FileCharacter).%New() set Out.Filename = Directory _ "\" _ OutputFilename set Transformer = ##class(HS.Util.XSLTTransformer).%New() set Transformer.XSLTCacheMode = "N" set Transformer.XSLTDirectory = XslDirectory write !, XslDirectory _ "\" _ XslBaseFilename _ ".xsl" set Status = Transformer.Transform( .In, XslBaseFilename _ ".xsl", .Out, .Parameters ) if $system.Status.IsOK( Status ) { set Status = Out.%Save() } if $system.Status.IsError( Status ) { write $system.Status.GetErrorText( Status ) } } }
Following are what the parameters to the Transform class method are: XslDirectory is the directory where the XSLT file is; the class will first try to append \Custom to the directory when looking for the transform, then try it without. XslBaseFilename is the filename of of the XSLT transform without the .xsl extension. Directory is the directory where you have the input document and where you want the transform to write the output of the transform. InputFilename is the name of the input document file including extension. OutputFilename is the name of the output file including extension. Parameters can be used to pass parameters to the transform, but this is rarely needed.
Running the Transform in Terminal
To run an XSLT transform, open Terminal, and change to the namespace and run the Transform method. For example, let's run the CCDA-to-SDA transform that comes with HealthShare. I have HealthShare installed in C:\InterSystems\HealthShare2016.1.1. I'll put a CCDA named "Sample_CCDA.xml" in my c:\Junk folder.
USER>zn "hsregistry" HSREGISTRY>do ##class(Local.XsltTransformer).Transform( "C:\InterSystems\HealthShare2016.1.1\CSP\xslt\SDA3", "CCDA-to-SDA", "c:\Junk", "Sample_CCDA.xml", "SDA_Out.xml" ) C:\InterSystems\HealthShare2016.1.1\CSP\xslt\SDA3\CCDA-to-SDA.xsl HSREGISTRY>
The SDA_Out.xml output file is now in c:\Junk.
Note that you should not use the transforms that are in the CSP\xslt\SDA directory. Use the ones in the CSP\xslt\SDA3 directory. The ones in the SDA directory are for an old version of SDA (version 2).
Studio XSL Transform Wizard
An alternative to running your transforms from Terminal is to use the Studio XSL Transform Wizard. In Studio, select Tools > Add-Ins > XSL Transform Wizard. Enter your input file in the "XML File" box and the XSLT transform in the "XSL File" box. For "XSLT Helper Class", select "HSREGISTRY" and "HS.Util.XSLTHelper". Click "Finish". The output is displayed in the dialog box (you can copy and paste it):
Debugging
When debugging XSLTs, one approach is to add <xsl:comment> elements to the XSLTs so that you can see various items in the output. Here are some examples:
<xsl:comment>useFirstTranslation <xsl:value-of. referenceValue <xsl:value-of. displayName <xsl:value-of. originalText <xsl:value-of. descriptionValue <xsl:value-of. </xsl:comment> <xsl:comment>Context node:</xsl:comment> <xsl:copy-of <xsl:comment>End of context node.</xsl:comment>
Several of the HealthShare XSLTs apply a "Canonicalize" template. For example, CCDA-to-SDA.xsl does this where you see the comment "Canonicalize the SDA output". Canonicalizing will remove comments from the output, so you may wish to comment it out while debugging.
Documentation
For more information on XSLTs in HealthShare, see the following chapters in the documentation:
- HealthShare 2020.1 > InterSystems Clinical Data Models > CDA Interoperability with SDA > CDA Documents and XSL Transforms
- HealthShare 2020.1 > InterSystems Clinical Data Models > CDA Interoperability with SDA > Customizing CDA XSL Transformations
In particular, the "Customizing CDA XSL Transformations" chapter has a few suggestions for debugging. Also see the following chapter:
- HealthShare 2020.1 > Running a HealthShare Unified Care Record System > Unified Care Record Registries > Managing XML Summary Report Types | https://community.intersystems.com/post/running-healthshare-xslts-terminal | CC-MAIN-2022-21 | refinedweb | 850 | 51.04 |
In addition, if you want to avoid the dependency on the Rails open_id_authentication plugin or you need greater flexibility, you can access the ruby-openid library directly. Listing 2 shows an alternative (and longer) login controller that uses the ruby-openid function calls and public classes directly. To differentiate it even further, it uses the filesystem (instead of the database used up to this point) to store the temporary data required throughout the handshaking between your application and the OpenID provider.
The advantages of the OpenID service are clear: less code duplication, complexity, and maintenance overhead for the developer and a more coherent web experience for the user. However, this article went through only the very superficial layers of the OpenID universe, leaving a lot of other features open to exploration. These include the new OpenID 2.0 specification, the service discovery and delegation mechanisms, the Extensible Resource Descriptor Sequence (XRDS) format, stateless relying parties, and many others.
Also, many parts of the ruby-openid library were not explored. For example, it also supports setting up a Rails OpenID provider, instead of just the client library for a relying party as described in the article. Thanks to the openness of the OpenID initiative, you can further explore the service in as much details as you may need.
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
Your name/nickname
Your email
WebSite
Subject
(Maximum characters: 1200). You have 1200 characters left. | http://www.devx.com/opensource/Article/37692/0/page/4 | CC-MAIN-2015-11 | refinedweb | 247 | 53.92 |
I have made myself a small activity model like this =>
== Schema Information
Schema version: 20110519174324
Table name: activities
id :integer not null, primary key
account_id :integer
action :string(255)
item_id :integer
item_type :string(255)
created_at :datetime
updated_at :datetime
belongs_to :account
belongs_to :item, :polymorphic => true
from this i can get a nice timeline/actions/userfeed or what we want
to call it from =>
def self.get_items(account)
item_list = []
items = where(:account_id => account.id).order(“created_at
DESC”).limit(30)
items.each do |i|
item_list << i.item_type.constantize.find(i.item_id)
end
item_list
end
it works like a charm for objects not refering to others - like if a
user uploads a new image i can
get the image nice and tidy. But for comments => then i want to get
the original post info, can i get
that in get_items directly ? Someone got tips for me ?
/Niklas | https://www.ruby-forum.com/t/polymorphic-reference-deeper/207219 | CC-MAIN-2021-49 | refinedweb | 143 | 50.36 |
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.
Hi, I'd like to ask, why libstdc++ for GCC 4.0.3 declares isfinite() in std:: namespaces? I see in cmath headers, there is following definition: isfinite(_Tp __f) { return __gnu_cxx::__capture_isfinite(__f); } There is also a comment in the file saying: (...) remove possible C99-injected names from the global namespace, and sequester them in the __gnu_cxx extension namespace.(...) Does is mean __gnu_cxx extensions are pushed into std:: namespace? I'd be thankful for some enlightenment. Also, what's the best practice to avoid C99 extensions from GCC/libstdc++, when compiling C++ portable code / C++ standard compliant. Unfortunately, following compilation options do not prevent from using isfinite() function, a C99-based extension as I understand. g++ -Wall -pedantic -ansi -std=c++98 isinfinite.cpp Cheers -- Mateusz Loskot | http://gcc.gnu.org/ml/libstdc++/2006-09/msg00285.html | crawl-003 | refinedweb | 145 | 52.15 |
Python Data Analysis With Sqlite And Pandas
As a step in me learning Data Analysis With Python I wanted to
- set up a database
- write values to it (to fake statistics from production)
- read values from the database into pandas
- do some filtering with pandas
- make a plot with matplotlib.
So this text describes these steps.
Set up the environment
To spice things up I wanted this to run on a raspberry pi (see Dagbok 20151215). I started with the Raspbian Lite image from the official Raspberry pi downloads page (see [1]).
This was a fun but painfully slow way to set up the environment. I should probably have spend twice as much on the micro-SD card to get it faster if I had known this. I also first used Wifi instead of a wired ethernet connection.
After running sudo raspi-config to make use of the entire storage I made an update and installed my favorite desktop environment (Xfce), a nice editor (Gnu Emacs) and the python packages I needed:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install emacs-nox
sudo apt-get install xfce4 xfce4-goodies xfce4-screenshooter
sudo apt-get install sqlite3
sudo apt-get install python-scipy
sudo apt-get install python-pandas
sudo apt-get install ...
Set up the database
I wanted to use some form of SQL database, and sqlite is perfect for the job. Since I want to do this programmatically I go through python. In this short example I connect to a (new) database and create a table called sensor.
import sqlite3

FILENAME = './database.data'  # the database file used throughout this example
conn = sqlite3.connect(FILENAME)
cur = conn.cursor()
sql = """
create table sensor (
    sid integer primary key not null,
    name text,
    notes text
);"""
_ = cur.execute(sql)
I fill this and the other tables with some values. In fact I do this in a very complicated way just for fun and it turned out to be very, very slow. If you feel like getting the details scroll down and read the code in the full example.
sql = "insert into sensor(sid, name, notes) values (%d, '%s', '%s');"
for (uid, name, notes) in [(201, 'Alpha', 'Sensor for weight'),
                           (202, 'Beta', 'Sensor for conductivity'),
                           (203, 'Gamma', 'Sensor for surface oxides'),
                           (204, 'Delta', 'Sensor for length'),
                           (205, 'Epsilon', 'Sensor for x-ray'),
                           (206, 'Zeta', 'Color checker 9000'),
                           (207, 'Eta', 'Ultra-violet detector'),
                           ]:
    cur.execute(sql % (uid, name, notes))
The full example builds this table and a few others, and adds some 700 thousand faked sensor readings to the database. On my Raspberry Pi 2 this requires almost 6 minutes, but that's OK since it is intended to fake 7 years of sensor readings:
$ time python build.py
create new file /tmp/database.data
insert values from line 1
[...]

real    5m42.281s
user    5m6.020s
sys     0m11.460s
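Part of why the build is so slow is that it does one %-formatted cur.execute() per row. If the build step ever needs to be faster, sqlite3 can do the same work with a parameterized executemany wrapped in a single transaction, which lets SQLite prepare the statement once and sync to disk once. A small sketch against a throwaway in-memory database, reusing the sensor table definition from above:

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # throwaway database, just for the sketch
cur = conn.cursor()
cur.execute("""
create table sensor (
    sid integer primary key not null,
    name text,
    notes text
);""")

rows = [(201, 'Alpha', 'Sensor for weight'),
        (202, 'Beta', 'Sensor for conductivity'),
        (203, 'Gamma', 'Sensor for surface oxides')]

# '?' placeholders let sqlite3 prepare the statement once and also avoid
# quoting problems; 'with conn:' commits everything as one transaction.
with conn:
    cur.executemany("insert into sensor(sid, name, notes) values (?, ?, ?);",
                    rows)

print(cur.execute("select count(*) from sensor").fetchone()[0])  # -> 3
```

On a 700-thousand-row build, this kind of batching typically cuts the runtime by a large factor, since SQLite syncs to disk once per transaction rather than once per statement.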
Read values from the database
We want to read the values and I experimented with sqlite default settings in my .sqliterc file. I tried this:
$ cat ~/.sqliterc
.mode column
.headers on
Anyway, I first try to do some database queries with the command line tool. If you have never used these before, I can only urge you to learn hand-crafting sql queries. It really speeds up debugging and experimentation to have a command line session running in parallel with the code being written. Here is a typical small session:
$ sqlite3 database.data
sqlite> select * from line;
lid         name        notes
----------  ----------  -----------------------------
101         L1          refurbished soviet line, 1956
102         L2          multi-purpose line from 1999
103         L3          mil-spec line, primary
104         L4          mil-spec line, seconday
As we saw above, when we created the values, communicating through python is super-easy, so now we want these values to go into pandas for data-analysis. As it turns out: this was also very easy once you figure out how. The tricky part was to figure out that the command I needed was pandas.read_sql(query, conn). This example works fine using IPython (see Ipython First Look), to use the syntax completion features, but it also works in a regular python session, or as a script:
import pandas import matplotlib.pyplot as plt import sqlite3 conn = sqlite3.connect('./database.data') limit = 1000000 query = """ select reading.rid, reading.timestamp, product.name as product, serial, line.name as line, sensor.name sensor, verdict from reading, product, line, sensor where reading.sid = sensor.sid and reading.pid = product.pid and reading.lid = line.lid limit %s; """ % limit data = pandas.read_sql(query, conn)
We now have very many values in the data structure called data. My poor raspberry pi leaped from 225 MB of used memory to 465 MB, after peaking at more than 500 MB. Remember that this poor computer only has about 925 MB after giving some of it to the GPU.
Let's try to take a look at it by counting the values based on what line and product they represent:
print data.groupby(['line', 'product']).size() line product L1 PA 183364 L2 PA 47247 PB 57258 PC 375084 L3 PB 7971 PC 13311 L4 PD 1389
For someone who has not studied my toy example this means that on for example Line 3 we have recorder 7971 sensor readings on product of type PB and 13311 readings on products of type PC. These values are of course totally irrelevant, but imagine them being real values from a real raspberry pi project in a production site where you are responsible for improving the quality of the physical entities being produced. Then these values might mean that Line 4 is not living up to expectation and could be scrapped, or that product PB on Line L3 should instead be produced on line 4.
Make a plot
I made a bar-chart. But am not too happy with this example, I think the code is too verbose and bulky for a minimal example. Perhaps you can make it prettier. This nicely illustrates the power of scipy.
fig, ax = plt.subplots() new_set = data.groupby('verdict').size() width = 0.8 ax.bar([1,2,3], new_set, width=width) plt.ylabel('Total') plt.title('Number of sensor readings per outcome') plt.xticks([1 + width/2, 2+ width/2,3+ width/2], ('OK', 'FAIL', 'No read')) plt.tight_layout() plt.savefig('python-data-analysis-sqlite-pandas.png')
And here is the plot, as created on a Raspberry Pi:
Summary
Data Analysis With Python is extremely powerful and can be done, with some pain, even on a raspberry pi. Download the full example from here: [2]
My next step is to pretend that the database solution does not scale to the new needs (all the new lines), so a front-end for presenting sensor readings and manually commenting on bad verdicts should be possible. We let this database be a legacy database and use Django: Python Data Presentation With Django And Legacy Sqlite Database
This page belongs in Kategori Programmering
See also Data Analysis With Python | http://pererikstrandberg.se/blog/index.cgi?page=PythonDataAnalysisWithSqliteAndPandas | CC-MAIN-2017-39 | refinedweb | 1,150 | 62.38 |
C# Generics, Part 2/4: Constraints, Members, Operators
The following article is excerpted from the book Practical .NET2 and C#2. The first part of this article is located here.
Type parameter constraints
C#2 allows you to impose constraints on the parameter type of a generic type. Without this feature, the generics in C#2 would be hard to exploit. In fact, it is hard to do almost anything on a parameter type of which we know nothing. We do not even know if it can be instantiated (as it can take the form of an interface or abstract case). In addition, we cannot call a specific method on an instance of such a type, we cannot compare the instances of such a type...
To be able to use a parameter type within a generic type, you can impose one or several constraints amongst the following:
- The constraint of having a default constructor.
- The constraint of implementing a certain interface or (non-exclusive) of deriving from a certain type.
- The constraint of being a value type or (exclusive) being a reference type.
Note for C++ coders: The template mechanism of C++ has no need for constraints to use type parameters since these types are resolved during compilation. In this case, all attempts to use a missing member will be detected by the compiler.
Default constructor constraint
If you need to be able to instantiate an object of the parameter type within a generic, you do not have a choice but impose the default constructor constraint. Here is an example which illustrates this syntax:
Example 6
class Factory<U> where U : new() { public static U GetNew() { return new U(); } } class Program { static void Main(){ int i = Factory<int>.GetNew(); object obj = Factory<object>.GetNew(); // Here, 'i' is equal to 0 and 'obj' references // an instance of the class 'object'. } }
Derivation constraint
If you wish to use certain members of the instances of a parameter type in a generic, you must apply a derivation constraint. Here is an example which illustrates the syntax:
Example 7
interface ICustomInterface { int Fct(); } class C<U> where U : ICustomInterface { public int AnotherFct(U u) { return u.Fct(); } }
You can apply several interface implementation constraints and one base class inheritance constraint on a same type parameter. In this case, the base class must appear in the list of types. You can also use this constraint conjointly with the default constructor constraint. In this case, the default constructor constraint must appear last:
Example 8
interface ICustomInterface1 { int Fct1(); } interface ICustomInterface2 { string Fct2(); } class BaseClass{} class C<U> where U : BaseClass, ICustomInterface1, ICustomInterface2, new() { public string Fct(U u) { return u.Fct2(); } }
You cannot use a sealed class or a one of the System.Object, System.Array, System.Delegate, System.Enum or System.ValueType class as the base class of a type parameter.
You also cannot use the static members of T like this:
Example 9
class BaseClass { public static void Fct(){} } class C<T> where T : BaseClass { void F(){ // Compilation Error: 'T' is a 'type parameter', // which is not valid in the given context. T.Fct(); // Here is the right syntax to call Fct(). BaseClass.Fct(); } }
A type used in a derivation constraint can be an open or closed generic type. Let's illustrate this using the System.IComparable<T> interface. Remember that the types which implement this interface can see their instances compared to an instance of type T.
Example 10
using System; class C1<U> where U : IComparable<int> { public int Compare( U u, int i ) { return u.CompareTo( i ); } } class C2<U> where U : IComparable<U> { public int Compare( U u1, U u2 ) { return u1.CompareTo( u2 ); } } class C3<U,V> where U : IComparable<V> { public int Compare( U u, V v ) { return u.CompareTo( v ); } } class C4<U,V> where U : IComparable<V>, IComparable<int> { public int Compare( U u, int i ) { return u.CompareTo( i ); } }
Note that a type used in a derivation constraint must have a visibility greater or equal to the one of the generic type which contains this parameter type. For example:
Example 11
internal class BaseClass{} // Compilation Error: Inconsistent accessibility: // constraint type 'BaseClass' is less accessible than 'C<T>' public class C<T> where T : BaseClass{}
To be used in a generic type, certain functionalities can force you to impose certain derivation constraints. For example, if you wish to use a T type parameter in a catch clause, you must constrain T to derive from System.Exception or of one of its derived classes. Also, if you wish to use the using keyword to automatically dispose of an instance of the type parameter, it must be constraint to use the System.IDisposable interface. Finally, if you wish to use the
Take note that in the special case where T is constrained to implement an interface and T is a value type, the call to a member of the interface on an instance of T will not cause a boxing operation. The following example puts this into evidence:
Example 12
interface ICounter{ void Increment(); int Val{get;} } struct Counter : ICounter { private int i; public void Increment() { i++; } public int Val { get { return i; } } } class C<T> where T : ICounter, new() { public void Fct(){ T t = new T(); System.Console.WriteLine( t.Val.ToString() ); t.Increment(); // Modify the state of 't'. System.Console.WriteLine( t.Val.ToString() ); // Modify the state of a boxed copy of 't'. (t as ICounter).Increment(); System.Console.WriteLine( t.Val.ToString() ); } } class Program { static void Main() { C<Counter> c = new C<Counter>(); c.Fct(); } }
This program displays:
0
1
1
There are no comments yet. Be the first to comment! | http://www.codeguru.com/csharp/sample_chapter/article.php/c11673/C-Generics-Part-24-Constraints-Members-Operators.htm | CC-MAIN-2014-42 | refinedweb | 945 | 52.9 |
Problem 66 is one of those problems that make Project Euler lots of fun. It doesn't have a brute-force solution, and to solve it one actually has to implement a non-trivial mathematical algorithm and get exposed to several interesting techniques.
I will not post the solution or the full code for the problem here, just a couple of hints.
After a very short bout of Googling, you'll discover that the Diophantine equation:
Is quite famous and is called Pell's equation. From here, further web searches and Wikipedia-reading will bring you to at least two methods for finding the fundamental solution, which is the pair of x and y with minimal x solving it.
One of the methods involves computing the continued-fraction representation of the square root of D. This page is a must read on this topic, and will help you with other Euler problems as well.
I want to post here a code snippet that implements the continued-fraction computation described in that link. Its steps follow the Algebraic algoritm given there:
def CF_of_sqrt(n): """ Compute the continued fraction representation of the square root of N. The first element in the returned array is the whole part of the fraction. The others are the denominators up to (and not including) the point where it starts repeating. Uses the algorithm explained here: In the section named: "Methods of finding continued fractions for square roots" """ if is_square(n): return [int(math.sqrt(n))] ans = [] step1_num = 0 step1_denom = 1 while True: nextn = int((math.floor(math.sqrt(n)) + step1_num) / step1_denom) ans.append(int(nextn)) step2_num = step1_denom step2_denom = step1_num - step1_denom * nextn step3_denom = (n - step2_denom ** 2) / step2_num step3_num = -step2_denom if step3_denom == 1: ans.append(ans[0] * 2) break step1_num, step1_denom = step3_num, step3_denom return ans
As I said, this still isn't enough to solve the problem, but with this code in hand, the solution isn't too far. Read some more about Pell's equation and you'll discover how to use this code to reach a solution.
It took my program ~30 milliseconds to find an answer to the problem, by the way. It took less than a second to solve a 10-times larger problem (for D <= 10000), so I believe it to be a pretty good implementation. | http://eli.thegreenplace.net/2009/06/19/project-euler-problem-66-and-continued-fractions/ | CC-MAIN-2016-36 | refinedweb | 383 | 61.06 |
Understanding Recurrent Neural NetworksAugust 13, 2017
In my last post, we used our micro-framework to learn about and create a Convolutional Neural Network. It was super cool, so check it out if you haven’t already. Now, in my final post for this tutorial series, we’ll be similarly learning about and building Recurrent Neural Networks (RNNs). RNNs are neural networks that are fantastic at time-dependent tasks, especially tasks that have to do with time series as an input. RNNs can serially process each time step of the series in order to build a semantic representation of the whole time series, one step at a time.
In this post, we will understand the math and intuition behind RNNs. We will build out RNN features in our micro-framework, use them to create an RNN, and train the RNN on a sequence classification task.
Recurrent Layers
A recurrent layer is a layer which has a temporal connection to itself. This connect allows the layer to model time-dependent functions. Let’s say our data is a time series of vectors which vary over every time step. The recurrent layer consists of a cell which takes as inputs the current time step and the last cell output. The cell does some computation on the inputs to produce the output.
Note that there is only one RNN cell, but it has been repeated multiple times for sake of simplicity. At each time step, the cell produces an output, the hidden state of the cell. If our input time series has time step vectors with D elements, and we chose our hidden state vector’s size to be H, then a time series of dimensions N x T x D will have an output of N x T x H, where N is the batch size and T is the amount of time steps in each time series. We can use this output tensor to learn insights about the time series.
Vanilla RNN Cells
The simplest way to combine the two inputs (the current time step input as well as the previous cell output) to produce the next output is to multiply each by a weight and add them together–with a non-linearity on the output, of course. Thus, we have our very simple vanilla RNN cell equation:
That’s it! This is pretty easy to implement, so let’s get right into it. First, the initializer:
class VanillaRNN(Layer): def __init__(self, input_dim, hidden_dim): super(VanillaRNN, self).__init__() self.params['Wh'] = np.random.randn(hidden_dim, hidden_dim) self.params['Wx'] = np.random.randn(input_dim, hidden_dim) self.params['b'], self.h0 = np.zeros(hidden_dim), np.zeros((1, hidden_dim)) self.D, self.H = input_dim, hidden_dim
Next, the forward pass. Since the RNN cell is reused for every step in the time series, we create a step function which executes at each time step. The forward function, then, calls the step functions at every time step:
def forward_step(self, input, prev_h): cell = np.matmul(prev_h, self.params['Wh']) + np.matmul(input, self.params['Wx']) + self.params['b'] next_h = np.tanh(cell) self.cache['caches'].append((input, prev_h, next_h)) return next_h def forward(self, input): N, T, _ = input.shape H = self.H self.cache['caches'] = [] h = np.zeros((N, T, H)) h_cur = self.h0 for t in range(T): h_cur= self.forward_step(input[:, t, :], h_cur) h[:, t, :] = h_cur return h
We return a tensor which is a time series of all of the cell outputs at each time step.
Now, the backward pass, which we similarly break down into a step function and a series function. One note on this step: because the RNN cell is reused for every time step, the gradient must be accumulated over all time steps–that is, instead of the gradient being set, we must start with zero and added to at ever time step of the backward pass. Here’s how the backward pass looks:
def backward_step(self, dnext_h, cache): input, prev_h, next_h = cache dcell = (1 - next_h ** 2) * dnext_h dx = np.matmul(dcell, self.params['Wx'].T) dprev_h = np.matmul(dcell, self.params['Wh'].T) dWx = np.matmul(input.T, dcell) dWh = np.matmul(prev_h.T, dcell) db = np.sum(dcell, axis=0) return dx, dprev_h, dWx, dWh, db def backward(self, dout): N, T, _ = dout.shape D, H = self.D, self.H caches = self.cache['caches'] dx, dh0, dWx, dWh, db = np.zeros((N, T, D)), np.zeros((N, H)), np.zeros((D, H)), np.zeros((H, H)), np.zeros(H) for t in range(T - 1, -1, -1): dx[:, t, :], dh0, dWx_cur, dWh_cur, db_cur = self.backward_step(dout[:, t, :] + dh0, caches[t]) dWx += dWx_cur dWh += dWh_cur db += db_cur self.grads['Wx'], self.grads['Wh'], self.grads['b'] = dWx, dWh, db return dx
We loop backwards through the incoming gradient’s time steps, pass them through our backward step function, and accumulate the gradients through the backward unrolling. Note that we add the derivative of the previous hidden state to the gradient on each output hidden state at time step. This is because the previous hidden state is passed in as an input in the forward pass and hence must be accounted for in the backward pass, and it’s most convenient to do this computation in the series function rather than the step function.
That’s it! Now we can use vanilla RNN cells in our code.
Sequence Classification
Let’s put our vanilla RNN cells to use on a sequence classification task! Let’s have a time series of 2 elements, where the label is a 0 if the difference between the two is <= 0, and a 1 otherwise. Let’s create a quick loader class to create such a dataset:
class RecurrentTestLoader(Loader): def __init__(self, batch_size): super(RecurrentTestLoader, self).__init__(batch_size) train, validation, test = self.load_data() self.train_set, self.train_labels = train self.validation_set, self.validation_labels = validation self.test_set, self.test_labels = test def load_data(self, path=None): timeseries = np.random.randint(1, 10, size=(16000, 2, 1)) targets = timeseries[:, 0] - timeseries[:, 1] neg, pos = np.where(targets <= 0), np.where(targets > 0) targets[neg], targets[pos] = 0, 1 timeseries_train = timeseries[:8000, :] timeseries_val = timeseries[8000:12000, :] timeseries_test = timeseries[12000:16000, :] targets_train = targets[:8000] targets_val = targets[8000:12000] targets_test = targets[12000:16000] return (timeseries_train, targets_train.astype(np.int32)), (timeseries_val, targets_val.astype(np.int32)), (timeseries_test, targets_test.astype(np.int32))
Let’s initialize our loader and network:
loader = RecurrentTestLoader(16) layers = [VanillaRNN(1, 4), ReLU(), Flatten(), Linear(8, 2)] loss = SoftmaxCrossEntropy() recurrent_network = Sequential(layers, loss, 1e-3, regularization=L2(0.01))
Notice that we use a Flatten layer to turn the time series of hidden state outputs into a 2-dimensional tensor that can be passed into the final Linear layer. Finally, let’s train our model!
for i in range(10000): batch, labels = loader.get_batch() pred, loss = recurrent_network.train(batch, labels) if (i + 1) % 100 == 0: accuracy = eval_accuracy(pred, labels) print("Training Accuracy: %f" % accuracy) if (i + 1) % 500 == 0: accuracy = eval_accuracy(recurrent_network.eval(loader.validation_set), loader.validation_labels) print("Validation Accuracy: %f \n" % accuracy) accuracy = eval_accuracy(recurrent_network.eval(loader.test_set), loader.test_labels) print("Test Accuracy: %f \n" % accuracy)
That’s it! We have used our Vanilla RNN cell on a fairly simple time-dependent problem that illustrates its purpose and functionality. However, there’s a problem–it doesn’t work! Because we repeatedly multiply the same cell weight with its derivative, the gradient will either explode or vanish. This is (rather fittingly) called the exploding/vanishing gradient problem. To solve this problem, a new type of cell was invented: Long Short-Term Memory (LSTM) cells.
LSTM Cells
An LSTM cell, similar to a vanilla RNN cell, uses both the current input and its previous output to produce its current output. However, what’s interesting about LSTM cells is that it actually produces two outputs, both of which it uses in its next unrolling: the hidden state and the cell state. The cell state is only modified through the use of controlled gates, as defined by these equations:
The input gate (i_t) is a gate which is directly multiplied with the previous cell state, thereby deciding how much new information is added to the cell state. The forget gate (f_t) is is a gate that decides how much of the previous cell state is “forgotten”. The input gate is multiplied by a gate (g_t) which decides what is added to the cell state, and is then added to the cell state with some information forgotten by the forget gate. This sum is the new cell state. Finally, the output gate (o_t) is a gate which is multiplied with the new cell state and decide how much is exposed as the next hidden state. Here’s how all of that looks as a diagram:
The benefit of the LSTM cell is that there is a direct connection between the cells previous state and its current state, meaning that gradients can more easily backpropagate through the cells without vanishing. Because only the forget and input gates directly interact with the cell state, long-term dependencies can more easily persist through many unrollings.
The cell state can be thought of in another way as well. The cell state at time t can be thought of as an encoded representation of the time series by time t. The fact that it takes the previous output as an input means that each time step input is used to continuously update the internal representation of the time series. So, by the end, the output is the encoded representation of the entire time series. For example, if we feed the cell a sentence, one word at a time, the output of the final cell computation is a vector representation of the sentence’s semantic meaning. This final state can be used in addition to the hidden state sequence to learn interesting insights.
Let’s start implementing an LSTM cell! We can make the implementation easier on ourselves if we think of all the inputs getting concatenated and being multiplied by one huge weight; we then split the output of this computation into the individual gates. Here’s the initializer:
class LSTM(Layer): def __init__(self, input_dim, hidden_dim): super(LSTM, self).__init__() self.params['W'], self.params['b'] = np.random.randn(input_dim + hidden_dim, 4 * hidden_dim), np.zeros(4 * hidden_dim) self.h0 = np.zeros((1, hidden_dim)) self.D, self.H = input_dim, hidden_dim
Now forward the forward pass:
def forward_step(self, input, prev_h, prev_c): N, _ = prev_h.shape H = self.H input = np.concatenate((input, prev_h), axis=1) gates = np.matmul(input, self.params['W']) + self.params['b'] i = utils.sigmoid(gates[:, 0:H]) f = utils.sigmoid(gates[:, H:2 * H]) o = utils.sigmoid(gates[:, 2 * H:3 * H]) g = np.tanh(gates[:, 3 * H:4 * H]) next_c = f * prev_c + i * g next_h = o * np.tanh(next_c) self.cache['caches'].append((input, prev_c, i, f, o, g, next_c, next_h)) return next_h, next_c def forward(self, input): N, T, _ = input.shape D, H = self.D, self.H self.cache['caches'] = [] h = np.zeros((N, T, H)) h_prev = self.h0 c = np.zeros((N, H)) for t in range(T): x_curr = input[:, t, :] h_prev, c = self.forward_step(x_curr, h_prev, c) h[:, t, :] = h_prev return h
Here’s the backward pass:
def backward_step(self, dnext_h, dnext_c, cache): input, prev_c, i, f, o, g, next_c, next_h = cache D = self.D dtanh_next_c = dnext_h * o dcell = (1 - next_h ** 2) * dnext_h dnext_c += dtanh_next_c * dcell dc = dnext_c * f di = dnext_c * g df = dnext_c * prev_c do = dnext_h * np.tanh(next_c) dg = dnext_c * i dgates1 = di * i * (1 - i) dgates2 = df * f * (1 - f) dgates3 = do * o * (1 - o) dgates4 = dg * (1 - g ** 2) dgates = np.concatenate((dgates1, dgates2, dgates3, dgates4), axis=1) db = np.sum(dgates, axis=0) dW = np.matmul(input.T, dgates) dinput = np.matmul(dgates, self.params['W'].T) dx = dinput[:, :D] dh = dinput[:, D:] return dx, dh, dc, dW, db def backward(self, dout): N, T, _ = dout.shape D, H = self.D, self.H dx, dh0, dW, db, dc = np.zeros((N, T, D)), np.zeros((N, H)), np.zeros((D + H, 4 * H)), np.zeros(4 * H), np.zeros((N, H)) caches = self.cache['caches'] for t in range(T - 1, -1, -1): dx[:, t, :], dh0, dc, dW_cur, db_cur = self.backward_step(dout[:, t, :] + dh0, dc, caches[t]) dW += dW_cur db += db_cur self.grads['W'], self.grads['b'] = dW, db return dx
We can use the diagram from above as a pseudo-graph to break down the backward pass. First, we notice that the next cell state branches into two outputs: the outputted next cell state and the next hidden state. The next cell state’s gradient is given to us as an incoming gradient; we must also account for the gradient on the next cell state due to the next hidden state. We do this by multiplying the next hidden state by the output gate (which the next cell state is multiplied with in the forward pass), take the derivative of the tanh function with respect to the next hidden state, multiply these values together, and add it to our incoming next cell state gradient. The gradient on the cell state is then the gradient on the next cell state multiplied with the forget gate. This was all done by simply tracing all the lines from the outputs to the cell state input.
The gradients on the gates is fairly straightforward: the gradient on the forget gate is the gradient on next cell state times the previous cell state, the gradient on the input gate is the gradient on the next cell state times g, the gradient on g is the gradient on the next cell state multiplied by the input gate, and the gradient on the output gate is the gradient on the next hidden state times the tanh of the next cell state. (again, this is all easy to see by tracing all the lines from all outputs to these gates). We take the derivative of the tanh or sigmoid functions with respect to each gate (note: these are left out of the diagram graph) and concatenate them together to get the final gradient on all gates. This can easily be used to get the gradients on the weight, bias, and input of the layer, in a manner similar to the backward pass of a linear layer. One last thing: we split the input into the gradients on time step vector and the previous cell state, as those were stacked together in the forward pass.
Now, we can replace the VanillaRNN layer with our LSTM to get much better performance on our sequence classification problem. Sweet!
Next Steps
Recurrent networks are incredibly powerful tools which can be applied to some incredible problems. For example, in language translation, a source sentence is encoded by an RNN into an intermediate representation of the semantic meaning of the sentence, and then that representation is used by another RNN to decode it into a target language. Here is a paper on using RNNs for machine translation, and here is a PyTorch tutorial on the subject.
Another cool application of RNNs: a convolutional network pretrained on image classification is used to turn an image into an intermediate representation, which is then used as the initial hidden state of an LSTM cell to caption the picture. Here is the original paper on topic, and here is a Torch (not PyTorch) implementation of the latest iteration of the paper.
One last plug: one of my close friends made a hilarious RNN project which is trained on Trump’s Tweet corpus to produce “new” Trump tweets. Check out the code here.
I really hope you’ve enjoyed this tutorial series and found it to be educational and useful! If you have any questions, you can reach out to me at @ShubhangDesai on Twitter. There’s a whole world of deep learning, the surface of which we’ve barely scratched–go and explore it! Happy learning! | https://shubhangdesai.github.io/blog/Understanding-RNNs | CC-MAIN-2018-13 | refinedweb | 2,669 | 64.3 |
I am trying to solve a problem from a lab challenge. I'm having trouble on how to output the table and it must be done in top-down design.
My coding:
#include "stdafx.h" #include <iostream> using namespace std; const int PEOPLE = 5, PRODUCTS = 6; void fill ( double sales [][PRODUCTS] ) { int salesperson, product; double value; cout << "Enter the salesperson (1 - 4), product number (1 - 5), and " << "total sales.\nEnter -1 for the salesperson to end input.\n"; cin >> salesperson; // continue receiving input for each salesperson until -1 is entered while ( salesperson != -1 ) { cin >> product >> value; sales[salesperson][product] += value; cin >> salesperson; } }//end fill void rows ( double sales[][PRODUCTS] ) { int k, m; for (k = 0; k < 1; k++ ) { for (m = 0; m < 2; m++ ) sales[k][PRODUCTS] = sales[k][0] + sales[k][m]; }//end for }//end rows void columns ( double sales[][PRODUCTS]) { int k, m; for (k = 0; k < 1; k++ ) { for (m = 0; m < 2; m++ ) sales[PEOPLE][k] = sales[0][k] + sales[m][k]; }//endfor }//end columns void print ( double sales[][PRODUCTS]) { }//end print int main () { double sales[PEOPLE][PRODUCTS] = {0.0}; fill(sales); rows(sales); columns(sales); print(sales); }
I'm not sure what to put in the print module to print out the table. Any help would be appreciated. | https://www.daniweb.com/programming/software-development/threads/120516/c-code-problem | CC-MAIN-2017-04 | refinedweb | 213 | 59.33 |
I mentioned in an earlier post that the two basic range types in Swift,
Range and
ClosedRange, are not convertible to each other. This makes it difficult to write a function that should work with both types.
Yesterday, someone on the swift-users mailing list had the exact problem: suppose you’ve written a function named
random that takes an integer range and returns a random value that lies in the range:
import Darwin // or Glibc on Linux func random(from range: Range<Int>) -> Int { let distance = range.upperBound - range.lowerBound let rnd = arc4random_uniform(UInt32(distance)) return range.lowerBound + Int(rnd) }
You can call this function with a half-open range:
let random1 = random(from: 1..<10)
But you can’t pass a closed range:
let random2 = random(from: 1...9) // error
That sucks. What’s the best way to solve this?
Overloading
One option is to overload the
random function, adding a variant that takes a
ClosedRange. The implementation can simply forward to the existing overload:
func random(from range: ClosedRange<Int>) -> Int { return random(from: range.lowerBound ..< range.upperBound+1) }
This would fail for input ranges whose upper bound is
Int.max because such a range can’t be expressed as a half-open range. However, this is not a relevant problem in our example because the
arc4random_uniform function can only deal with 32-bit integers anyway, so the program would crash on the conversion to
UInt32 long before hitting this new limitation.
Countable ranges are convertible
Since this particular example deals with integer ranges, we have another option. Integer-based ranges are countable, and countable ranges are convertible between their half-open and closed variants,
CountableRange and
CountableClosedRange. So we can switch the parameter type to
CountableRange like this:
func random(from range: CountableRange<Int>) -> Int { // Same implementation ... }
And now we can call the function with a closed range, but only if we explicitly convert it to
CountableRange first, which is not very nice (and unintuitive):
// Works as before let random3 = random(from: 1..<10) // Requires explicit conversion let random4 = random(from: CountableRange(1...9))
Now you might think, no problem, let’s overload the closed-range operator
... to return a half-open range, like this (I copied this declaration from the existing version in the standard library and only changed the return type from
CountableClosedRange to
CountableRange):
func ...<Bound>(minimum: Bound, maximum: Bound) -> CountableRange<Bound> where Bound: _Strideable & Comparable, Bound.Stride: SignedInteger { return CountableRange(uncheckedBounds: (lower: minimum, upper: maximum.advanced(by: 1))) }
This would solve our problem, but unfortunately it creates new ones, because now an expression like
1...9 is ambiguous without explicit type information — the compiler can’t decide between which overload to choose. So this is not a good option.
Writing generic code by identifying the essential interface
The reason I’m writing this post is to point out a very good suggestion Hooman Mehr made on the mailing list: what if ranges are not the best abstraction for this algorithm anyway? What if we consider a higher abstraction level?
Let’s try to identify the essential interface our
random function needs, i.e. the minimal set of features required to implement the functionality:
- It needs an efficient way to compute the distance between the lower and upper bounds of the input sequence.
- It needs an efficient way to retrieve the n-th element of the input sequence in order to return it, where n is the random distance from the lower bound it computed.
Hooman notes that both countable range types share a common protocol conformance in the form of
RandomAccessCollection. And indeed,
RandomAccessCollection provides exactly the essential interface we want: random-access collections guarantee that they can measure distances between indices and access elements at arbitrary indices in constant time.
So let’s turn the
random function into a method on
RandomAccessCollection (the
numericCasts are required because different collections have different
IndexDistance types):
extension RandomAccessCollection { func random() -> Iterator.Element? { guard !isEmpty else { return nil } let offset = arc4random_uniform(numericCast(count)) let i = index(startIndex, offsetBy: numericCast(offset)) return self[i] } }
Now it works with both range types:
(1..<10).random() (1...9).random()
And even better, we can now get a random element from every random-access collection:
let people = ["Susan", "David", "Janet", "Jordan", "Kelly"] let winner = people.random()
Conclusion
We already knew that the distinction between half-open ranges and closed ranges in the type system is unintuitive. If you’re stuck with ranges, the best way to deal with the problem is usually to suck it up and provide two overloads, even if that means you have to repeat some code.
But it can pay off to make your code more generic. Even if you don’t need your algorithm to work with other data types right now, implementing it generically at the correct level of abstraction forces you to think about the essential interface your algorithm requires, which in turn makes it easier for readers of the code to keep the capabilities of the involved types in their head — a
RandomAccessCollection has by definition fewer methods and properties than a concrete type that conforms to the protocol (such as
CountableRange).
Don’t overdo it, though. Generic code with lots of constraints on the generic parameters can also be harder to read, and especially if you write application code that doesn’t provide public APIs to third parties, spending too much time building the perfect abstraction without getting any real work done is an easy trap to fall into. | https://oleb.net/blog/2016/10/generic-range-algorithms/ | CC-MAIN-2017-13 | refinedweb | 919 | 50.46 |
09 June 2010 17:57 [Source: ICIS news]
TORONTO (ICIS news)--The US Environmental Protection Agency (EPA) is moving to end all uses of insecticide endosulfan in the ?xml:namespace>
The agency said it was working with producer Makhteshim Agan of
Endosulfan, an organochlorine insecticide used on vegetables, fruits, and cotton, could pose unacceptable neurological and reproductive risks to farm workers and wildlife, and it could persist in the environment, the agency said.
In fact, new data showed that risks faced by workers were greater than previously known, it added.
Endosulfan was first registered in the 1950s. It also used on ornamental shrubs, trees, and herbaceous | http://www.icis.com/Articles/2010/06/09/9366528/us-epa-moves-to-end-use-of-insecticide-endosulfan.html | CC-MAIN-2014-23 | refinedweb | 106 | 50.57 |
On Sat, 14 Apr 2007, Michael Niedermayer wrote: > > - } > > -#endif > > + for (; s<end; s+=4, d+=4) { > > + int v = *(uint32_t *)s; > > + int r = v & 0xff, g = (v>>8) & 0xff, b = (v>>16) & 0xff; > > + *(uint32_t *)d = b + (g<<8) + (r<<16); > > int v = *(uint32_t *)s; > int g = v&0xFF00; > v &= 0xFF00FF; > *(uint32_t *)d = (v>>16) + (v<<16) + g > > 2 shift less > 1 and less asm("bswapl %0 ; shrl $8, %0" : "+r"(*(uint32_t *)s)); In the linux kernel, you could do this: *(uint32_t *)d = swab32p((uint32_t *)s) >> 8; It would probably expand to exact the same code as the asm statement on x86. Does ffmpeg have a useful byte-swap function? Could also do this, which avoids the shift instrunction, but it probably slower due to the overlapping unaligned accesses. uint32_t scratch; asm("movl %2, %1; bswapl %1; movl %1, %0" : "=g"(*(uint32_t *)(s-1)), "=r"(scratch) : "=g"(*(uint32_t *)d)); | http://ffmpeg.org/pipermail/ffmpeg-devel/2007-April/021871.html | CC-MAIN-2016-50 | refinedweb | 147 | 64.38 |
Created on 2008-08-22.16:52:23 by jy123, last changed 2009-04-05.17:41:16 by thobes.
when I try to install lastest version django on lastest jython version,
get "couldn't make directories". It seems ntpath.isdir failed.
This is a bug/"feature" in the MSVCRT stat function, which Jython uses through jna-
posix.
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
int main(void) {
struct stat foo;
int retval = stat("c:\\Documents and Settings\\", &foo);
fprintf(stderr, "stat returned %d\n", retval);
perror("stat");
}
% ./stat
stat returned -1
stat: No such file or directory
If you remove the trailing \, it works fine.
I think this problem still exists in 2.5alpha3
Here is my workarounds
in lib\os.py:
def makedirs(path, mode='ignored'):
"""makedirs(path [, mode=0777])
Super-mkdir; create a leaf directory and all intermediate ones.
Works like mkdir, except that any intermediate path segment (not
just the rightmost) will be created if it does not exist.
The optional parameter is currently ignored.
"""
sys_path = sys.getPath(path)
if File(sys_path).mkdirs():
return
if(sys_path[-1]=='\\'):
if File(sys_path[:-2]).mkdirs():
return
# if making a /x/y/z/., java.io.File#mkdirs inexplicably fails. So
we need
# to force it
# need to use _path instead of path, because param is hiding
# os.path module in namespace!
head, tail = _path.split(sys_path)
if tail == curdir:
if File(_path.join(head)).mkdirs():
return
if(_path.join(head)[-1]=='\\'):
if File(_path.join(head)[:-2]).mkdirs():
return
raise OSError(0, "couldn't make directories", path)
This issue still exists in 2.5b1; I am experiencing the issue when
trying to install Django on jython.
I am attaching a patch to install_egg_info.py which solves the problem
on my Windows machine.
Fix implemented in revision 6164. | http://bugs.jython.org/issue1110 | CC-MAIN-2015-11 | refinedweb | 301 | 63.56 |
The following tutorial will demonstrate how to read values from a text file (.txt, .csv) to blink 1 of 9 LEDs attached to an Arduino. It uses a combination of an Arduino sketch and a Processing program to process the file. The Processing program will read the text file in real time, only sending new information to the Arduino.
Components Required
- An Arduino board with 9 LEDs, current-limiting resistors, wires, and a USB cable
- A comma-separated text file (*.txt).
Arduino Layout
(Layout diagram: 9 LEDs with resistors, attached to digital pins 2 to 10.)
Text File
- Open Notepad or equivalent text file editor, and paste the following data into it.
1,2,3,4,5,6,7,8,9,8,7,6,5,4,3,2,1
- Save the file on your hard drive. In my case, I have chosen to save the file at this location.
D:/mySensorData.txt
- It should look like the following screenshot (a single line of comma-separated numbers).
Additional notes regarding the Text file:
- Just remember what you call it, and where you saved it, because we will be referring to this file later on in the Processing script.
- Keep all values on the same line.
- Separate each number with a comma.
- The number 1 will blink the first LED which is attached to Pin 2 on the Arduino.
- The number 9 will blink the last LED which is attached to Pin 10 on the Arduino.
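The value-to-pin mapping in these notes can be sketched in plain Java (the language Processing is built on). The helper below is illustrative only, not part of the tutorial's code:

```java
// Parse one comma-separated line and map each value (1-9)
// to the Arduino pin it will blink (pins 2-10).
public class ValueToPin {
    public static int[] pinsFor(String line) {
        String[] items = line.split(",");
        int[] pins = new int[items.length];
        for (int i = 0; i < items.length; i++) {
            // value 1 -> pin 2, value 9 -> pin 10
            pins[i] = Integer.parseInt(items[i].trim()) + 1;
        }
        return pins;
    }

    public static void main(String[] args) {
        int[] pins = pinsFor("1,2,3,4,5");
        System.out.println(java.util.Arrays.toString(pins)); // [2, 3, 4, 5, 6]
    }
}
```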
Processing Code
You can download the Processing IDE from this site.
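The embedded Processing listing did not survive in this copy of the post. Below is a reconstruction assembled from the behaviour described here and in the comments (the FileReader method, a comPort opened at 9600 baud, a counter compared against subtext.length, and a half-second delay between values). Treat it as a sketch of the idea, not the original listing:

```java
import processing.serial.*;
import java.io.BufferedReader;
import java.io.FileReader;

Serial comPort;     // serial link to the Arduino
int counter = 0;    // how many values have been sent so far

void setup() {
  // Open the first available serial port at 9600 baud
  comPort = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  try {
    // Re-read the file every frame; the file does not need to be closed
    // in your editor, and saving it makes new values visible here.
    BufferedReader reader = new BufferedReader(new FileReader("d:/mySensorData.txt"));
    String[] subtext = splitTokens(reader.readLine(), ",");
    reader.close();

    // Only values beyond 'counter' are new - send them one at a time
    while (counter < subtext.length) {
      comPort.write(subtext[counter]);
      counter++;
      delay(500);   // half a second between values
    }
  } catch (Exception e) {
    println("Error reading file: " + e);
  }
}
```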
Once you have copied the text above into the Processing IDE, you can now start working on the Arduino code as seen below.
Arduino Code
You can download the Arduino IDE from this site.
Copy and paste the following code into the Arduino IDE.
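The Arduino listing is likewise missing from this copy of the post. The sketch below is a reconstruction based on the fragments quoted in the comments (the byteRead variable, the ECHO comment, digitalWrite((byteRead+1), HIGH), and a 0 that wipes the LEDs clean); the ASCII-to-number conversion is an assumption needed to make the pin arithmetic work:

```cpp
byte byteRead;

void setup() {
  Serial.begin(9600);
  // LEDs are attached to digital pins 2 to 10
  for (int i = 2; i <= 10; i++) {
    pinMode(i, OUTPUT);
  }
}

void loop() {
  if (Serial.available()) {
    byteRead = Serial.read() - '0';   // assumption: convert ASCII digit to 0-9

    /* ECHO the value that was read, back to the serial port. */
    Serial.println(byteRead);

    if (byteRead == 0) {
      // A zero switches every LED off
      for (int i = 2; i <= 10; i++) digitalWrite(i, LOW);
    } else if (byteRead <= 9) {
      // A value of 1 lights pin 2 ... a value of 9 lights pin 10
      digitalWrite(byteRead + 1, HIGH);
    }
  }
}
```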
Additional Information:
- The Arduino code will still work without the processing program. You can open the serial monitor window to send the commands to the Arduino manually. In fact, if you encounter any problems, I would suggest you do this. It will help to identify the root cause of the problem (ie Processing or Arduino Code, or physical connections).
- If you choose to use the Serial Monitor feature of the Arduino IDE, you cannot use the Processing program at the same time.
Once you have assembled the Arduino with all the wires, LEDs, resistors etc, you should now be ready to put it all together and get this baby cranking!
Connecting it all together
- Connect the USB cable from your computer to the Arduino, and upload the code.
- Keep the USB cable connected between the Arduino and the computer, as this will become the physical connection needed by the Processing program.
- Make sure that you have the text file in the correct location on your hard drive, and that it only contains numbers relevant to the code provided (separated by commas).
- Run the Processing program and watch the LEDs blink in the sequence described by the text file.
- You can add more numbers to the end of the line, however, the processing program will not be aware of them until you save the file. The text file does not have to be closed.
Other programs can be used to create the text file, but you will need the Processing program to read the file and send the values to the Arduino. The Arduino will receive each value and react appropriately.
SIMILAR PROJECT: Use a mouse to control the LEDs on your Arduino - see this post.
An alternative Processing Sketch
This Processing sketch uses the loadStrings() method instead of the FileReader method used in the first sketch. This sketch also provides better control over sending the values to the Arduino. When the sketch first loads, the application window will be red. By clicking your mouse inside the window, the background will turn green and the file will be imported and sent to the Arduino, with every value being sent at half second intervals. If you update the text file and save, only new values will be transmitted, however, if you want the entire file to transmit again, you can press the window once (to reset the counter), and then again to read the file and send the values again from the beginning of the file.
I personally like this updated version better than the first, plus I was inspired to update this blog posting due to the fact that some people were having problems with the FileReader method in the first sketch. But both sketches should work (they worked for me).
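The "only new values are transmitted" behaviour described above comes down to a counter that remembers how many values have already been sent. Here is the idea in plain Java (class and method names are illustrative, not from the original sketch):

```java
import java.util.ArrayList;
import java.util.List;

// Tracks how far into the file we have read, so that on each
// re-read only values past the counter are treated as new.
public class NewValueTracker {
    private int counter = 0;

    // Returns only the values not yet sent; advances the counter.
    public List<String> newValues(String line) {
        String[] items = line.split(",");
        List<String> fresh = new ArrayList<>();
        while (counter < items.length) {
            fresh.add(items[counter]);
            counter++;
        }
        return fresh;
    }

    // Clicking the window again resets the counter, so the whole
    // file is transmitted from the beginning on the next read.
    public void reset() { counter = 0; }
}
```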
I was writing a blog post about reading in text files for Arduino as it doesn't have a file system and I found this post.
It's really handy and combines stuff I had looked at.
I will reference this blog and post when I publish my post.
Well done :)
Jake_F has identified a number of ways to read from a text file: for those interested - have a look at his blog.
Thanks for the post!
Is there a way to upload and store the data from the txt file once into Arduino? Or does the Arduino need to be plugged into the computer the whole time the sketch runs?
In this case - yes the Arduino needs to be plugged into the computer. However, you can get around that by using a shield of some sort. For example, an XBee, WiFi, or Bluetooth shield will allow you to detach the Arduino from your computer, or alternatively perhaps an SD card shield, where you can load the information to the card and then reference that afterwards. It depends on what you are trying to do, and whether you need the Arduino to act independently or not. It also depends on how much information you need to be stored/transmitted.
Thanks for the tutorial!
Can Processing keep updating from the values in the .txt file after pressing the "start" button just once, instead of restarting Processing and running it again?
Hi Anonymous,
It has been a long time since I did this project - but from memory, I am pretty sure if you update the text file and press save... processing will continue from where it left off. I am pretty sure that you don't re-start Processing - otherwise it will read the file from the beginning. Give it a try.
I am having the same problem: "Cannot find class of type named "FileReader"",
and I added "import java.io.*;" at the top...
Not sure why it is not working for you. It works for me when I tested it out. Am using Processing version 2.0b8.
Null pointer exception in - if(counter<subtext.length){ -
Hi Marcelo,
I think I need to update this post. If you are desperate to use this code, then perhaps use an older version of Processing. Otherwise, try
import java.io.FileReader;
Your null pointer exception probably relates to the FileReader issue also.
I'm starting to program Arduino.
I use PIC.. 16F877A and 16F628A...
I need these code because i'm doing an laser engraving...
All the cordinate will be in an text file....
Thanks for everything!
Marcelo,
I have updated the code, try the second alternative Processing sketch and see if this works for you instead.
Thanks!!!
I will change the code to send a character only when the software receives one specific byte over USB. I will use that logic to control the link and send the characters, so that the hardware is in control rather than the software (in this case, a mouse click).
dear sir,
how to read a text file and display it using Arduino?
*display on an LCD4884
Have you seen these pages :
Arduino Forum
Tronixstuff
Once you send values to the arduino (as per my example), you should be able to send them to the shield as per the links provided.
You may also want to look at my Serial communication tutorial.
Great tutorial but I am in need of a slight tweak, I have a text file where all the data is on separate rows like:
1
3
4
5
2
etc
I would like to read the first 150 rows, stop (and I'll program in a stepper), then read the next 150 rows, etc., until the end of the file. The numbers still correspond to an LED being toggled. Is there any way you could help me please??
Hi Jake,
I have limited time because I am studying at the moment. So I cannot write it out for you, but I can give you some pointers.
In the updated processing sketch, there is a line with the variable textFileLines[].
From your example,
textFileLines[0] = 1
textFileLines[1] = 3
textFileLines[2] = 4
textFileLines[3] = 5
textFileLines[4] = 2
etc etc
You don't have any delimiters on each line - so you don't need to use the splitTokens line.
Why not read the entire file? and then do the 150 line/row grouping or handling afterwards using a couple of for-loops.
Or if the file is only being updated progressively, then you could get a variable to hold what line you were up to, so that when you read the entire file again, you can skip the x number of lines.
Scott
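The batching described above (read the whole file once, then handle it in groups of 150 rows) can be sketched in plain Java; the names are illustrative:

```java
import java.util.Arrays;

public class RowBatcher {
    // Returns rows [start, start+batchSize), clipped to the array length.
    public static String[] batch(String[] rows, int start, int batchSize) {
        int end = Math.min(start + batchSize, rows.length);
        if (start >= end) return new String[0];
        return Arrays.copyOfRange(rows, start, end);
    }
}
```

Call batch(lines, 0, 150), drive the stepper, then batch(lines, 150, 150), and so on until an empty batch comes back.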
Cheers Scott, heading in the right direction now
Good luck with your exams!
This is great! Thank you so much for giving this to us all. I'm wondering what your thoughts are on something I'm trying to accomplish that is similar, and I hope I can explain it well enough.
I am playing with two IC chips and I'm trying to get the csv file (or text) to tell Arduino to turn on multiple leds simultaneously (I'd like them to be solid, not blinking). So, there will be two sets or arrays of data being sent, one for each IC.
Array1(which is row1 in CSV) [1,2,3,4,5]
Array2(which is row2 in CSV) [1,2,3,4,5]
When Array1 comes back with a value of 3, then I'd like the third led to light.
When Array2 comes back with a value of 5, then I'd like the fifth led to light.
I am pretty new, and appreciate any help or direction you can provide. This project is going to be installed in an order fulfillment process to help me package and ship products faster.
Thanks,
Scott
Hi Scott,
Sounds like an interesting project.
Will finish studying in about a month - which is when I will have more time to give more useful responses. You query confused me a little.
How are you getting the values to the Arduino?
And how does Array1 come back with a value of 3 ? How is that determined?
Once you have selected a value from the array (I am assuming), you can send this value to the Arduino. In the example above, the LED stays on until you send a "0" to the Arduino, which is when it wipes it clean.
The value is sent from a computer using the processing language.
You can customise either of the programs (Arduino or Processing) to meet your specific requirements.
You may decide that there is a special command to toggle the LED rather than switch them all off. It really depends on what you are trying to achieve and how you plan to achieve it.
Regards
Scott
This comment has been removed by a blog administrator.
I downloaded the 64-bit version of Processing, and when I run the sketch it says "serial does not run in 64-bit mode"...
what should I do?
please help me...>0<
download and use the 32-bit version instead. Don't use the 64 bit version for Serial communication.
Can you tell me the syntax to read an integer value that is stored in a text file and send it to an Arduino Uno serially?
The integer stored in the text file will be periodically updated.
Kindly help...
String myString = "16";
int myInt = int(myString);
or
int myInt = Integer.parseInt(myString);
Very good post,.
but...
I'm using a servo controller with an Arduino, and the protocol is "#0 P200 T100":
#0 > ID
P200 > Position
T100 > delay
and I'm using VB to control the Arduino >> servo controller, and I'm using your code:
byteRead = Serial.read();
/*ECHO the value that was read, back to the serial port. */
Serial.println(byteRead);
In the serial monitor I can make any position work perfectly, but when I use VB to control it (with a TrackBar), the data is very unreliable and it does not work well. Please help me work out the protocol for combining all the characters. Thanks. :)
Have never used VB with Arduino, would not know where to start.
I think this is one for the forums... however, I am happy answering questions about my project.
Hey Scott thanks for this really helpful post.
I was wondering if it is possible to make Processing receive some data from the Arduino using the serial port and then write it to a file which some other C++ code running on the computer can use.
I basically want the Arduino to receive some data and act accordingly, and then, when a button is pressed, send the state of the process back to Processing, which then writes the received part to a file.
Thanks in Advance.
Yes - it is possible.
I have a tutorial that shows how to do this - but unfortunately it is way out of date, and was done using Processing 1. However, I know that you can write to a file using processing 2, and you should be able to find it within their website.
This link may help you get on the right track:
Here is my old tutorial:
You should be able to work it out from these two links.
Hi Scott,
I realize this is an old post and will be surprised if you answer this, but its worth a shot. The code works great except the last little bit:
digitalWrite((byteRead+1), HIGH);
If I change the "byteRead+1" to a specific pin it works fine but with this it just doesn't send the command to the pin!
Many thanks
Add this to the Arduino Sketch just after that digitalWrite line to help identify the problem:
Serial.println(byteRead);
Upload the code to the Arduino, and then open the serial monitor.
If you want the LED attached to pin 2 to light up, then you need to type 1 in the serial monitor, and press enter. The LED attached to pin 2 should turn on, and the byteRead variable (1) should display in the serial Monitor.
My guess is that you are not following my project as described.
Which pin exactly are you having a problem with ?
I just tested the sketch - and it works fine for me
Thankyou for your post.
I'm using the mouse-pressed code to send integers from a text file serially to the Arduino. On the Arduino I have a basic LED sketch where I have used 6 LEDs, marked 1 to 6 respectively. The Processing code reads the integers from the text file and the corresponding LEDs light up. The problem is that after sending integers a total of 65-70 times, the serial sending stops. For example, if I have typed 80 integers in the text file, it won't work after the 65-70 range. I'm using a baud rate of 9600 and the delays have been set to 500 in both the Arduino and the Processing code. I can't seem to figure out the problem; could you help me out with the syntax if that is what needs to be added?
Also, what is the relation between the delay set in the Arduino after each LED-on command (500) and the delay provided in the Processing code? Do they have to be the same, or can they be different? Changing the delays in both also seems to have a little effect on the number of integers that can be read, but I can't go below a delay of 500 for the project I'm working on. Thanks.
P.S. I'm inserting the integers in the text file in the form 1,2,3,4,5,6,1,2,3,4,5,6 and so on (where I'm not counting the commas), and after a total of 65-70 integers the serial sending stops.
sorry just lost in this code
I'm inserting the integers in the text file in the form 123456123456123456 and so on; ignore the above comment.
Hi Anonymous,
Firstly, I am not sure where you are seeing a delay in my Arduino code. It might be in your code, but not in mine... so you might want to think about changing that.
I control the speed of LED illumination from the Processing sketch. I can drop this delay down to 15 without issues, and I am sure you could go below 15 if you wanted to.
I use comma delimiters to help with this process.
I think however, that your main problem exists with the delay in your Arduino code.
How to repeat reading values?? Where to put the loop????
In the alternative processing sketch, you would place the code within the mousePressed function in the draw function instead. You would have to put a delay.
Scott or anyone
I'm trying to create an LED controller such as this, but without the PC running the processing code. Using an Arduino Uno or Mega alone, reading a table in a text file. Have you seen any examples of this?
No, I haven't seen this being done. I think you need some software on the computer to read the text file. It doesn't have to be processing, but processing is one of the easier methods.
Thank you very much for the code! I'm currently using it to send data via XBee. Can you help me with the separator? I need to split the text when a new line of data log is added; splitTokens() needs a String, and the program won't accept splitTokens(text, "\n").
Hi Anonymous:
In the Alternative Processing Sketch above - you will see these lines:
String textFileLines[]=loadStrings("d:/mySensorData.txt");
String lineItems[]=splitTokens(textFileLines[0], ",");
Because this tutorial only made use of one line of data, you will notice that the splitTokens method is working on the first line of the file : textFileLines[0]
To get the next line, it would be textFileLines[1]
You could use a loop to work your way from line to line.
(Newbie alert) Hey, thanks a lot for the code. I have a few questions; I'd be glad if you could clear them up:
(i) The Processing IDE reads the values from the Notepad file and stores them in a String. So when it sends the data via the serial port to the Arduino, does the Arduino consider it a String, a char, or is the int value of the char sent?
(ii) If the Notepad file has a value of, for example, 20, does the subtext array save the value as 2 and 0 separately?
Hi Anonymous,
(i) In the Arduino sketch above - you will notice this line:
byte byteRead; and this line byteRead = Serial.read();
Therefore, the Arduino is expecting a byte.
ii) If the notepad has values of 0,20,10 within the text file then this line:
String lineItems[]=splitTokens(textFileLines[0], ",");
will create an array where lineItems[0] = 0 lineItems[1] = 20 lineItems[2] = 10
So what will happen if you send these numbers to the Arduino?
I think my Serial Communication tutorial will help you understand.
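Note that splitTokens keeps "20" as a single token; it is only when that token is written to the serial port that the Arduino receives '2' and then '0' as separate bytes. A plain-Java illustration (the class name is just for this example):

```java
public class TokenDemo {
    public static void main(String[] args) {
        String[] lineItems = "0,20,10".split(",");   // splitTokens(line, ",") in Processing
        System.out.println(lineItems[1]);             // "20" - one token
        // Writing the token sends its characters one byte at a time:
        for (char c : lineItems[1].toCharArray()) {
            System.out.println((int) c);              // 50, then 48 (ASCII '2', '0')
        }
    }
}
```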
So when subtext[1] = 20 is sent via myPort.write(subtext[1]);, the Arduino first receives 2 and then 0. So I've got to save the int of each value received into two variables, for example a = 2 and b = 0, and then do c = a*10 + b so that I finally get 20 for the execution of my code. (By the way, 2 is sent first and then 0... right?)
Yes - you are right. However, this is an old tutorial, and Arduino have improved their Serial commands. If you want to read a byte, then the tutorial above shows you exactly how to handle that. But if you want to read an integer from the Serial port (on the Arduino), which I gather is what you are trying to do, then you could do this instead:
int newData = Serial.parseInt();
However, you must make certain that the data being sent is actually an integer.
You might want to make use of the Serial.peek() command..
Anyway - it just depends on what you are trying to do, in order to decide what approach to take. However, you may need to introduce some extra safety features to ensure that it works the way you anticipate.
It is worth playing around with the Serial monitor first, and once you are happy with the Arduino functionality - move to the Processing sketch.
Hi Friends!
How do I make the same application, but now using a text file on an SD card with the Arduino?
I don't have any SD card projects - but I will have some soon.
If you would like to send or store text to a USB stick instead, then have a look at this tutorial:
Hi Scott
Your blog experiment on reading a text file using arduino worked fine with the second processing program. But how to make the blinking to be continuous in a single mouse click.
When you say continuous, do you mean like an endless loop - so that the sequence repeats over and over?
If so, you could get processing to re-read the text file when it gets to the end of the file, and just keep repeating over and over.
Hi Scott
You're right. I want it to be continuous. But how do I make that work?
see my forum response
Hi, I can't find in your Processing code to put the COM port for my computer.
is it here?
comPort = new Serial(this, Serial.list()[0], 9600);
If I change the 0 to 3 for my COM3 port, I get this error:
ArrayIndexOutOfBoundsException: 3
Hi SC,
Serial.list()[0] = will select the first available COM port - which may be COM3 in your case.
If there are no available COM ports, you will get an error.
If there is more than one COM port to choose from, then
Serial.list()[0] will select the first one, Serial.list()[1] will select the second one ...etc etc
If you want to select a specific COM port (e.g. COM3),
then you could write this:
comPort = new Serial(this, "COM3", 9600);
If you would like to see what COM ports are available, you could include this:
printArray(Serial.list());
And the reason you got this error "ArrayIndexOutOfBoundsException: 3",
is probably because you wrote this:
comPort = new Serial(this, Serial.list()[3], 9600);
And I am guessing that you did not have 4 available COM ports, and thus went beyond the limits of the Serial.list() array.
I hope that makes sense.
Understood, thank you very much
This comment has been removed by a blog administrator.
Hi There,
I would like to use a .TXT file to change the parameters of two variables in my Arduino code without uploading the code to the Arduino every time. So how do I do this? I looked at your video and I got totally confused about what I need to do. I'm a beginner with Arduino and I would deeply appreciate it if you could help me out with this. Thanks.
Hi Anonymous,
I am not exactly sure what you are trying to do?
Are you trying to replicate my tutorial or are you trying to do something else?
If you would like help with your project, then please ask within the forum:
Hi Scott,
is it possible to program the Arduino board to compare a preset value with each value being read from a text file, then give a particular response when a match is found?
Yes - but I would get the Processing sketch to do the comparison (because it would be quicker)... but you could do it on the Arduino also.
And can I also make the Processing script read values from only one column of the text file and not all the columns? Thanks.
Of course. Just delimit it appropriately, and you can program it to read whatever column you choose.
Hi Scott,
I have attempted to copy your code here and for some reason (although I have not changed anything) it does not work the way it is supposed to. I get some of the LEDs to turn on but not all of them. And I have tested the LEDs on their own so I know that the LEDs do in fact work normally. I often use the serial monitor on the arduino in order to debug using the print statement. However because you set up a new port connection this no longer seems possible. Any suggestions or words of advice?
Hi Peter,
To test the Arduino side of the project, use the Serial monitor (as you said) but don't run the Processing sketch. The processing sketch will occupy the COM port and will prevent you from running the Serial monitor. From the Serial monitor, you can send numbers to the Arduino to test the LEDS.
hi scott,
Is it possible to read 2 text files into the Arduino at the same time? My project has 2 input variables (temperature and humidity), stored in temp.txt and humid.txt; both of them are inputs. Any tips? Thank you.
Hi Ardi,
The answer is yes and no. I will start with "No", the Arduino will not read both values at EXACTLY the same time.... but you can read both of the values really quickly. There are a few ways to do this, and it would be best served in the ArduinoBasics forum. Please create a new topic and explain in greater detail - what exactly you are trying to achieve.
You can find the forum link at the top of this page - look for the tab labelled "FORUM". And select the category to ask questions about your own project..
Regards
Scott
Hi Scott,
Your blog is very useful. For my project I need to transfer data from a PC to the Arduino and use those values for calculation. In your video, you transmitted data and, based on the data received, LEDs were lit up. Is it possible to store those values in an array and perform calculations?
Yes - of course you can.
Scott C,
what if I have the data in rows, like this:
0
0
2
3
15
12
5
2
1
0
that possible to read ? how ?
String textFileLines[]=loadStrings("d:/mySensorData.txt");
This will populate the textFileLines[] array .
So in your case:
textFileLines[0] = 0
textFileLines[1] = 0
textFileLines[2] = 2
textFileLines[3] = 3
textFileLines[4] = 15 ....
You then just need a for-loop to send those values
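With one value per row, loadStrings returns one array element per line, and a for-loop walks them in order. A plain-Java sketch using the rows from the question (parseRows is an illustrative helper, not part of the original code):

```java
public class RowReader {
    // Converts the array returned by loadStrings (one value per row)
    // into integers, ready to send to the Arduino one at a time.
    public static int[] parseRows(String[] textFileLines) {
        int[] values = new int[textFileLines.length];
        for (int i = 0; i < textFileLines.length; i++) {
            values[i] = Integer.parseInt(textFileLines[i].trim());
        }
        return values;
    }

    public static void main(String[] args) {
        String[] rows = {"0", "0", "2", "3", "15", "12", "5", "2", "1", "0"};
        for (int v : parseRows(rows)) {
            System.out.println(v);   // myPort.write(...) would go here in Processing
        }
    }
}
```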
Scott, why can't this code read numbers like 10, 20, 200? It can only read numbers from 1 to 9.
This should explain it:
Also - have a look at my Serial communication tutorial - which will give you a few more examples.
Scott C, how do I read negative values? e.g. -17, -5, -4, etc.
This post will show you how:
Look in the comments section of that post.
Hi.
Can I store an .accdb file on the SD card, or should I convert it to a .csv file?
Hey!
Thanks a lot for this tutorial, it's one of the most informative I've seen! I plan on using transistors as switches rather than LEDs, and would like to read a csv file which tells each switch whether to be on or off. My file will look more like this:
1,1,0,1,0,1,1,1,0
where each position in the file corresponds to a single pin. I've been trying to use lines similar to this:
if (subtext[1].equals("1"))
{myPort.write('1');}
if (subtext[2].equals("1"))
{myPort.write('2');}
...and etc...
For some reason, Only the first LED will turn on. any ideas?
-Chris
Hi Chris,
At the top of this post, you will find some tabs. One of those tabs will have a link to the "Forum". Please create a new thread in this forum and post both your Arduino code and your Processing code. There is not enough information in your comment to say what is going wrong.
Hi, could I send the text file byte by byte to another Arduino?
Yes:
Hi Scott,
I have done everything correctly as shown in the above tutorial, but I can't get a result.
I found that the LED glows when the Arduino code is uploaded and I enter the text file values in the serial monitor window, but after running the Processing code the LED does not glow.
Why does this problem occur?
There are two versions of Processing sketches. Which one did you use ?
Also - make sure that you are using the same COM port in the Processing Sketch as the one indicated in the Arduino IDE.
Also double check the location of text file, and that you have populated the text file with numbers.
Also - make sure that you have closed the Serial monitor before running the Processing sketch.
Hey what do i do if i want to read numbers like 10,11,12 ... i mean numbers with two digits, how do i have to change the code? thanks in advance!
It depends on how high you want to read
Hi.
I tryed this on Arduino Mega and in his code change
digitalWrite((byteRead+29), HIGH). Output pins are fom
30 to 53, number values in notepade are fom 1 to 24.
For example in this code No.24 activate together pin 31
and pin 33 instead pin53. What should I do read
two-digit numbers and work all of 24 outputs? TNX.
Hi Franjo,
Two digit numbers are a bit more complicated.
One way to do it could be a multi-step approach.
Send a start code, the integer in question and an end code.
The arduino will receive each digit of the integer seperately.
Use maths to rebuild it.
For example when receiving the number 16
You will receive 1 and then 6
Grab each number and then use maths to rebuild the integer:
X = (1*10)+6 = 16
hello,
thank your for your post my friend, it helped so much.
however, i need to send my txt file via bluetooth. my bluetooth module is connected and all, i only have a problem in the codes. I would really appreciate some advice
Hi..i am using the same code and just changed the port, the LEDS connected to arduino are not lighting up why ?? the arduion program is working fine but the processing program is not. processing does not even show any error. so if there is no error, then why its not working???
how do you know the Arduino program is working ?
make sure the serial monitor is not being used at the same time as processing.
did you change the port in processing ?
make sure the port being used to upload the code to the arduino is the same port being used to communicate with the Arduino in Processing.
Create a program to light up the LEDs, make sure that the LEDs are connected appropriately.
how to read multiple lines ???
Hello Scott,
How do I allow processing to constantly read the text file, and send commands whenever a new character is appended in the text file? I tried changing "mousePressed()" to "loop()", but it doesn't repeats the process at all...
As described in the tutorial: "You can add more numbers to the end of the line, however, the processing program will not be aware of them until you save the file. The text file does not have to be closed." | http://arduinobasics.blogspot.com/2012/05/reading-from-text-file-and-sending-to.html | CC-MAIN-2018-22 | refinedweb | 5,391 | 72.16 |
Let’s suppose we have given an array of unsorted integers. The task is to find the positive missing number which is not present in the given array in the range [0 to n]. For example,
Input-1 −
N = 9 arr = [0,2,5,9,1,7,4,3,6]
Output −
8
Explanation − In the given unsorted array, ‘8’ is the only positive integer that is missing, thus the output is ‘8’.
Input-2 −
>N= 1 arr= [0]
Output −
1
Explanation − In the given array, ‘1’ is the only positive integer that is missing, thus the output is ‘1’.
There are several approaches to solve this particular problem. However, we can solve this problem in linear time O(n) and constant space O(1).
Since we know that our array is of size n and it contains exactly elements in the range of [0 to n]. So if we do XOR operation of each of the elements and its index with ‘n’, then we can find the resultant number as a unique number that is missing from the array.
Take Input of N size of the array with elements in the range [0 to n].
An integer function findMissingNumber(int arr[], int size) takes an array and its size as input and returns the missing number.
Let’s take n as a missing number to perform XOR operation.
Iterate over all the array elements and perform XOR operation with each of the array elements and its indexes with respect to missing number, i.e., n.
Now return the missing number.
#include<bits/stdc++.h> using namespace std; int findMissingNumber(int *arr, int size){ int missing_no= size; for(int i=0;i<size;i++){ missing_no^= i^arr[i]; } return missing_no; } int main(){ int n= 6; int arr[n]= {0,4,2,1,6,3}; cout<<findMissingNumber(arr,n)<<endl; return 0; }
If we will run the above code then it will print the output as,
5
If we perform the XOR operation with each of the elements of the array and its indexes, it will print ‘5’ which is missing from the array. | https://www.tutorialspoint.com/write-a-program-in-cplusplus-to-find-the-missing-positive-number-in-a-given-array-of-unsorted-integers | CC-MAIN-2021-39 | refinedweb | 352 | 57.1 |
My fascination with Elixir began in the late summer of 2019. I first became aware of this emerging language community a few months before, but could not spare time to devote to learning it until August. The concurrency, clean syntax and scalability of Elixir really interested me. As someone who spends a lot of time in the Ruby community, I could also not ignore the strong communal overlap between the two.
As such, in August I began slowly writing an experimental SDK for my company Nexmo in Elixir. Nexmo provides a suite of APIs for communications, such as SMS, Voice, Video, 2FA and a lot more, and has SDKs in a variety of languages, including Node.js, Ruby, PHP, Python and .NET.. We also offer experimental SDKs for the community, and albeit not officially supported, they are available to utilize as one needs. These include a Golang SDK, and now an Elixir SDK.
The Elixir SDK has been an act of love for me, and an attempt of not being afraid to learn in public. It continues to delight me in figuring out better ways to architect the SDK and better ways to get the job done.
A couple of months ago I moved the documentation officially to HexDocs, which is the Elixir community source of Hex packages docs. In so doing, I studied and learned ways to most effectively store the documentation inside the codebase for HexDocs to parse.
Just this week we have added support for another Nexmo API to the Elixir SDK, Applications API. This API lets you manage your Nexmo Applications through the SDK, and sit alongside support for the Number Insights, Account and SMS APIs.
Each API call is wrapped into a function that abstracts a lot of the work for you as the user of the API. For example, this is the function that wraps the create a Nexmo Application API
POST request to the Applications API:
def create(params) do credentials = "#{Nexmo.Config.api_key}:#{Nexmo.Config.api_secret}" |> Base.encode64() headers = [ {"Content-Type", "Application/json"}, {"Authorization", "Basic #{credentials}"} ] body = Enum.into(params, %{}) Nexmo.Applications.post("#{System.get_env("APPLICATIONS_API_ENDPOINT")}", Poison.encode!(body), headers) end
The function accepts an argument of
params, which it turns into a map using the
Enum.into/2 function. The new parameter map is then encoded into JSON and utilized in the
POST request. The credentials are generated by Base 64 encoding the combination of the user's API key and API secret and passed into the headers in the
Authorization format.
All the functions whether they wrap a
PUT,
DELETE or
GET request are similarly written to be simple and concise. This is something that comes easily in Elixir, and one of the reasons I enjoy the language so much.
We welcome contributions and you can find the code on GitHub.
nexmo-community
/
nexmo-elixir
An experimental Elixir client library for Nexmo
Nexmo Elixir Client Library
This is a work in progress Elixir client library for Nexmo. Functionality will be added for each Nexmo API service. Currently, this library supports the Account, Applications, Number Insight and SMS Nexmo APIs.
Installation
Hex
The Hex package can be installed by adding
nexmo to your list of dependencies in
mix.exs:
def deps do [ {:nexmo, "~> 0.4.0", hex: :nexmo_elixir} ] end
Environment Variables
The client library requires environment variables to be supplied in order to enable its functionality. You can find a sample .env file in the root directory of the project. You need to supply your API credentials and the host names for the API endpoints in the
.env file.
Your Nexmo API credentials:
NEXMO_API_KEY
NEXMO_API_SECRET
API host names:
ACCOUNT_API_ENDPOINT=""
NUMBER_INSIGHT_API_ENDPOINT=""
SECRETS_API_ENDPOINT=""
SMS_API_ENDPOINT=""
Documentation
- Nexmo Elixir documentation:
- Nexmo API…
If you'd like to give the package a spin, you can also find the package on Hex.
Discussion (0) | https://dev.to/bengreenberg/the-joys-of-writing-an-elixir-sdk-4mce | CC-MAIN-2021-43 | refinedweb | 637 | 53.31 |
TextFont¶
from panda3d.core import TextFont
- class
TextFont¶
Bases:
TypedReferenceCount,
Namable
An encapsulation of a font; i.e. a set of glyphs that may be assembled together by a
TextNodeto represent a string of text.
This is just an abstract interface; see
StaticTextFontor
DynamicTextFontfor an actual implementation.
Inheritance diagram
- enum
RenderMode
- static
getClassType() → TypeHandle¶
getGlyph(character: int) → TextGlyph¶
Gets the glyph associated with the given character code, as well as an optional scaling parameter that should be applied to the glyph’s geometry and advance parameters. Returns the glyph on success. On failure, it may still return a printable glyph, or it may return NULL.
getKerning(first: int, second: int) → float¶
Returns the amount by which to offset the second glyph when it directly follows the first glyph. This is an additional offset that is added on top of the advance.
- property
line_height→ float¶
- Getter
Returns the number of units high each line of text is.
- Setter
Changes the number of units high each line of text is.
- property
space_advance→ float¶
- Getter
Returns the number of units wide a space is.
- Setter
Changes the number of units wide a space is. | https://docs.panda3d.org/1.10/python/reference/panda3d.core.TextFont | CC-MAIN-2020-40 | refinedweb | 191 | 56.96 |
Hi, I have a database and i have connected it to a view page in a list, i want to have a where statement that narrows down the events so only some are displayed. would you recommend doing it in the view model or the actual view
Here is the view model
[ImplementPropertyChanged] public class Database_ViewModel { bool _isLabelEmptyVisible { get; set; } TodoItem _selectedItem { get; set; } bool _isTapped { get; set; } int _count { get; set; } int Count { get { return _count; } set { _count = value; IsListViewVisible = (_count != 0); IsLabelEmptyVisible = (_count == 0); } } public ObservableCollection<TodoItem> List { get; set; } = new ObservableCollection<TodoItem>(); public bool IsLabelEmptyVisible { get; set; } public bool IsListViewVisible { get; set; } public Database_ViewModel() { List.Add(new TodoItem { Time = "1:2:3", Event = "509m" }); List.Add(new TodoItem { Time = "1:2:3", Event = "5091m" }); List.Add(new TodoItem { Time = "1:2:3", Event = "5029m" }); List.Add(new TodoItem { Time = "1:2:3", Event = "50m Freestyle" }); Count = List.Count; }
Answers
@TomJefferis
I would make the method on the view take a parameter to be passed to the
whereclause - maybe with a default of the filter you currently have in mind.
Then you can (now or later) put a field on the view to use as the
CommandParameterbinded back to a property for the
whereclause.
This way you have your default filter in place - and could allow the user to do their own custom search.
So have the listview item source in the table, like this? sorry im new to xamarin
` _sqLiteConnection = DependencyService.Get().GetConnection();
`
At the moment my view is
`this.BindingContext = new Database_ViewModel();
ListView lv = new ListView { HasUnevenRows = true };
lv.ItemTemplate = new DataTemplate(() =>
{
@TomJefferis
No. Your ViewModel really shouldn't know how its getting data. Usually you have a data access layer that does this. Your ViewModel asks the Data Access Layer for new records - the DAL gets the records and returns a collection. This way if you change data base engines (as an example) you don't have 1,000 references to change throughout your app: Instead you make one change in your DAL.
You said you are new to Xamarin. Are you new to XAML/C#/MVVM design patterns in general? Have you ever done MVVM apps in the past; such as with WPF? If so this might give you a better 30,000ft view of the design architecture to employ. Its not done: I have more to write... But its more than nothing for now. | https://forums.xamarin.com/discussion/comment/246047 | CC-MAIN-2019-13 | refinedweb | 401 | 64.81 |
I am having a problem reading from a table in my SQL Server without a primary key. I have defined an entity class such as:
public class PerfData { public Int64 ActivityId { get; set; } public Int64 Numbers{ get; set; } }
And the
DbContext class, e.g.
class MyDBContext { public DbSet<PerfData> {get; set;} }
The entity type records the numbers for
ActivityId each person performed, so there is no primary key defined on the SQL Server table. However, when I do retrieve data with the following code, EF Core complains that the entity type
PerfData requires a primary key to be defined:
dBContext.AdPerfData.FromSql(@"select [ActivityId], [Numbers] from PerfTable").AsNoTracking().ToList();
How would one work around this limitation? The table contains data for reading, I don't ever need to do insert, update or delete from my code.
Update(2/11/2018):
I added the
[Key] Annotation on the
ActivityId property, and that made EF Core happy and allow my query to go through. I did not need to add
Primary Key attribute on the table in the Sql Server, which would be wrong in terms of business logic anyway. However, I still think EF Core should support
table without primary keys. It's just such common place. Now all tables need to have primary keys.
Update(2/15/2018): I researched more into the issue. So as @Ivan correctly pointed out, EF Core team is working on QueryType and it's available in EF Core 2.1, and you can get it now from myget feed. However, I am going to opt for Dapper Micro ORM instead of waiting for the release to get some quick action. Based on what I read, Dapper is fast and easier to use.
Based on my research in last few days, EF Core's DBSet only works with table with Identity or Primary key. The whole EF Core is based on change tracking and almost always has CRUD in mind, so a key is mandatory. For ad-hoc or read only scenarios, DBSet is not suitable. EF Core team is working on DBQuery type that is supposed to work with table/views that does not have identity.
For read only scenario, Dapper micro ORM might be better fit, because it's fast and easy to use. | https://entityframeworkcore.com/knowledge-base/48729019/how-to-read-from-a-table-without-a-primary-key-in-entity-framework-core-2-0- | CC-MAIN-2021-17 | refinedweb | 381 | 63.29 |
ISO/IEC JTC1 SC22 WG21 N3280 = 11-0050 - 2011-03-24
Lawrence Crowl, crowl@google.com, Lawrence@Crowl.org
Alberto Ganesh Barbati, ganesh@barbati.net
This paper is a revision of N3256 = 11-0026 - 2011-02-27.
Introduction
Wording Changes
16.8 Predefined macro names [cpp.predefined]
17.6.1.3 Freestanding implementations [compliance]
30.3 Threads [thread.threads]
CD comment DE 18 requested a macro to indicate the presence of threads. The response was a rather convoluted "ineffective <thread> header". This response failed to solve the real problem, and caused FCD comment GB 55. In addition, the change exacerbated incompatibilities between C and C++.
This paper provides the minimal fixes in this area for C++0x. They specifically satisfy
A macro that indicates the presence of threads in the core language is generally more applicable than one that only indicates the availability of library components. This approach is consistent with the wording in 1.10 [intro.multithread].
The <mutex> header is more likely to be implemented in freestanding implementations than is <thread> and yet their freestanding requirements are opposite. Furthermore, there are many single-threaded embedded systems. So, <thread> should not be required of freestanding implementations. Consequently, <ratio> and <chrono> would not need to be freestanding and GB 55 would become moot.
The set of freestanding headers required by C++ should include at least those required by C. Implementors will anyway because few would support only C++, so failing to support those headers simply invites an inconsistency between the standard and practice.
Add the following to the end of paragraph 2.
Edit table 16 as follows.
Edit paragraph 3 as follows.
The supplied version of the header
<cstdlib>shall declare at least the functions
abort,
atexit,
at_quick_exit,
exit, and
quick_exit(18.5).
The supplied version of the headerThe other headers listed in this table shall meet the same requirements as for a hosted implementation.
<thread>shall meet the same requirements as for a hosted implementation or including it shall have no effect.
Edit the synopsis as follows.
namespace std {
#define __STDCPP_THREADS__ __cplusplus
.... | https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3280.htm | CC-MAIN-2022-27 | refinedweb | 343 | 51.55 |
Sometimes, it is useful to be able to run some editor script code in a project as soon as Unity launches without requiring action from the user. You can do this by applying the InitializeOnLoad attribute to a class which has a static constructor. A static constructor is a function with the same name as the class, declared static and without a return type or parameters (see here for more information):-
using UnityEngine; using UnityEditor; [InitializeOnLoad] public class Startup { static Startup() { Debug.Log("Up and running"); } }
A static constructor is always guaranteed to be called before any static function or instance of the class is used, but the InitializeOnLoad attribute ensures that it is called as the editor launches.
An example of how this technique can be used is in setting up a regular callback in the editor (its “frame update”, as it were). The EditorApplication class has a delegate called update which is called many times a second while the editor is running. To have this delegate enabled as the project launches, you could use code like the following:-
using UnityEditor; using UnityEngine; [InitializeOnLoad] class MyClass { static MyClass () { EditorApplication.update += Update; } static void Update () { Debug.Log("Updating"); } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/2017.3/Documentation/Manual/RunningEditorCodeOnLaunch.html | CC-MAIN-2019-39 | refinedweb | 209 | 50.77 |
After the most recent update to TestComplete (Version: 12.50.4142.7 x64), my python tests are no longer able to use the module `fdb`.
This is the exact line of code that breaks:
with fdb.connect(dsn='<redacted>', user='<redacted>', password='<redacted>', charset="none") as con:
This same line of code works when run through the same interpreter (Python 3.6) via pyCharm on the same machine.I've had to do some pretty dumb stuff to even get TC to see the module in the first place, seen here:
from os import sys
import os
sys.path.append('C:\Program Files (x86)\SmartBear\TestComplete 12\Bin\Extensions\Python\Python36\Lib\site-packages')
Without that append to the system path, I get:
And, again, I get none of these problems when run via PyCharm on the same machine. I even have PyCharm pointed to the same exact python executable:
Any help would be appreciated, I'm shut down until this is resolved. Thanks!
Solved!
Go to Solution.
Alright, for any poor souls that are as dense as I....
The problem was my local install of Python 3.6 is 32 bit, where as I've been running 64 bit Test Complete... running the 32 bit version of TC allowed my 3rd party modules to work again.Boy oh boy, what a waste of my day, lol.
View solution in original post
This is something you should probably discuss with Support. You can contact them here:
Compare images using the Region Checkpoint
Converting UTC TimeDate in an Excel file
Compare HTML table with Excel file and correct data in Excel file
How to execute remote test and obtain results via Test Runner REST API | https://community.smartbear.com/t5/TestComplete-Mobile-Application/WinError-193-after-latest-TestComplete-update-python-modules-do/td-p/162841 | CC-MAIN-2020-40 | refinedweb | 284 | 72.26 |
This is a contributed post by Oracle mainline Linux kernel
developer, Liu Bo, who recently presented at the 2015 China Linux
Storage and File System (CLSF) Workshop. CLSF is an invite-only event for Linux kernel developers in China. This event was held on October 15th and 16th in Nanjing.
Haomai Wang from XSKY introduced some basic concepts about Ceph, e.g. MDS, Monitor, and
OSD. Ceph caches all meta-data information (cluster map)
on monitor-nodes so clients could fetch data by just on jump in network
and if we use cephfs, which adds MDS, it still needs oonly one jump
because the MDS doesn't store data or metadata of filesystem but only to
store the context of distributed lock (range lock? not sure about this).
Ceph also supports Samba 2.0/3.0 now.
In Linux, it is recommended to use
iSCSI to access Ceph storage cluster because it will have to update
kernel in clients if we use rbd/libceph kernel modules. Ceph uses
a pipeline model in message processing therefore it is good for Hard Disk
but not SSD. In the future, developers will use async-framework (such as
Seastar) to refactor Ceph. And Robin (from Alibaba) asked that if
the client can write three copies concurrently if making a setup of
three replica in Ceph, Haomai answered it won't, the IO will come to the
primary OSD and then the primary OSD issues two other replicated IOs to
other two OSDs, waiting until the two IOs back before returning "the IO
is success" to client.
The future development plan for Ceph is de-duplication on pool level.
Coly Li (Suse) said that de-duplication is better to be implemented on
business level instead of block level because the duplicated information
has be split in block level.
Bob Liu from Oracle shared the work he'd done on xen block pv driver, the patch is aimed
to improve xen's performance by converting xen block pv driver to use
block-mq API and multi ring buffer.
The patch is located at
Asias He from OSv led this topic. ScyllaDB is a distributed Key/Value
store engine which is written in C++14 code and completely compatible to
Cassandra. It could also run CQL (Cassandra Query Language). The slides
show that, ScyllaDB is 40 times more faster than Cassandra. The
asynchronous developing framework in ScyllaDB is called Seastar. The
magic in ScyllaDB is that it shards requests to every CPU core, and runs
with no locks/no threads. Data is zero-copy and use bi-direction queue
to transfer messages between cores. The test result is base on kernel
TCP/IP network stack but they will use their own network stack in the
future.
Yanhai Zhu (from Alibaba) doubted that the test results is not
fair enough because ScyllaDB is designed to be run in multi-cores but
Cassandra is not, so it'd be more fair to compare ScyllaDB with running
24 Cassandra instances. Asias replied that ScyllaDB uses message queues
to transfer messages between CPU cores, so it avoids atomic-operation
and lock-operation cost. And, Cassandra is written by Java, which means
the performance will be low when the JVM do garbage- collection.
ScyllaDB is written completely by c++ so its performance is much steady.
Both of two projects are led by the KVM creator, Avi Kivity.
Fengguang Wu from Intel told me
that they're using btrfs a bit on their autotest farm but often experience
latency issues and we talked about using btrfs as docker's storage
backend, everything seems perfect except each docker instance is an
individual namespace so that they're not able to share page caching for
the same content. This limits btrfs's use if users need to run a great
amount of instances, memory becomes the biggest issue, besides that
Zheng Liu from Alibaba shared that overlayfs also has latency issues in
real production use, ie. if you just touch a large file, the file will
be COPIED from the lower layer to the upper layer. So we all agreed
that something should happen in this area.
This shows one thing, the problem of traditional VM is that
all kinds of VMs are aimed to simulate a bare metal machine in order to
run a normal OS, but that's not what we want. And one more thing, yper
is a vm which can provide secuity that is wanted by all production
use cases.
Gang He from Suse shared with us how VxFS implements its deduplication. It
provides online dedup and a serials of commands to control dedup
behaviours, e.g. we can schedule a dedup and control dedup task's cpu
usage, memory usage and priority, even it can 'dryrun' to find how many
blocks can be deduped but do not really perform dedup.
Robin Dong from Alibaba shared that how they developed a distributed storage system based on
a small open-source software called “sheepdog“, and modified it heavily
to improve data recovery performance and make sure it could run in
low-end but high-density storage servers. He talked about how they came
up with the idea, the system design and deployment, the difficult part
is not the design process, but to take care of every detail in the
deployment of the cluster, e.g. find a proper place and power management
for the machine. It's a good example to take advantage of opens ource
and make contribute to it.
Chao Yu from Samsung led a topic about F2FS. He listed what happened in the F2FS
community in the last year, and looks like that F2FS tends to be more
generic than just a flash friendly filesystem, which is implied by the
fact that F2FS now supports larger volume and larger sector and has
in-memory extent cache and a global shrinker. Besides that, F2FS also
improves its performance including flush performance, mixed data write
performance and multi-threads performance. Developers also optimized
F2FS a bit for SMR drive by allowing user to choose the over-provision
ratio lower than 1% in mkfs.f2fs and tuning GC. In the future F2FS is
planning to support online defrag, transparent compression and data
deduplication.
Yanhai Zhu from Alibaba led a topic about cache in virtual machines
environment. Alibaba chose Bcache as code base to develop a new cache
software. Yanhai explained why he didn't choose flashcache. flashcache was his first choice but it has some drawbacks which
cannot be worked around, i.e. flashcache uses hash data structure to
distributed IO requests at beginning, which will split the cache data in
multi-tenant environment, and thus flashcache is unfriendly to
sequential-write, so it proves that flashcache doesn't fit Alibaba's
requirements. After then they turned to bcache which uses B-tree
instead of hash-table to store data. For the strategy, they chose
radical writeback strategy in order to make cache squentialize write IOs
and make backend better at absorbing peak use.
Zheng Liu from Alibaba gave an update of ext4 in the last year, the biggest one is
'Remove ext3 filesystem driver'. Others are lazytime support,
filesystem-level encryption, orphan file handling (by Jan Kara) and
project quota. Besides that Seagate developers worked on ext4's SMR
support (), in the future ext4 is
likely to have data block checksumming and btree (by Mingming Cao of Oracle).
Zhongjie Wu is working at Memblaze, a famous startup company in China on
flash storage technology. Zhongjie showed us one of their products on top of
NVDIMM. An NVDIMM is not expensive, it is only a DDR DIMM with a
capacitor. Memblaze has developed a new 1U storage
server with a NVDIMM (as a write cache) and many flash cards (as the
backend storage). It contains their own developed OS and could use
Fabric-Channel/Ethernet to connect to client. The main purpose of NVDIMM
is to reduce latency, and they use write-back strategy. Zhongjie also
mentioned that NVDIMM's write performance is quite better than its read
performance, so they in fact uses shadow memory to increase read
performance..
Bob Liu of Oracle talked about NVDIMM support in linux. There are three options:
-- Liu Bo
Ed: note there is also coverage by another invitee, Robin Dong of Alibaba: day one, day two. | https://blogs.oracle.com/linuxkernel/china-linux-storage-and-file-system-clsf-workshop-2015-report | CC-MAIN-2017-26 | refinedweb | 1,391 | 59.43 |
ok, i’ve got some module and it has some instance variables i want to
be set by the classes extending the module, and aparently i’m not doing
this correctly:
module A
def test
raise ‘ARG’ if !@blah
@blah
end
end
class B
include A
extend A
@blah = ‘YES!’
end
irb(main):025:0> b = B.new
=> #<B:0x136376c>
irb(main):026:0> b.test
RuntimeError: ARG
from (irb):9:in `test’
from (irb):26
can anyone tell me what the syntax should be? how do i set @blah in the
B class so that its available to the test method | https://www.ruby-forum.com/t/simple-ruby-language-question/83976 | CC-MAIN-2018-47 | refinedweb | 102 | 81.93 |
time deposit(redirected from Large-denomination time deposit)
Also found in: Dictionary, Thesaurus, Wikipedia.
Time deposit
Interest-bearing deposit at a savings institution that has a specific maturity. Related: Certificate of deposit.
Term Deposit
A deposit at a bank or other financial institution that has a fixed return (usually via an interest rate) and a set maturity. That is, the depositor does not have access to the funds until maturity; in exchange, he/she is usually entitled to a higher interest rate. One of the most common examples of a term deposit is a certificate of deposit. It is also called a time deposit. See also: Demand deposit.
time deposit
An interest-bearing savings deposit or certificate of deposit at a financial institution. Although the deposits formerly included only deposits with specific maturities (such as certificates of deposit), they now are considered to include virtually all savings-type deposits. Compare demand deposit.
Time deposit.
When you put money into a bank or savings and loan account with a fixed term, such as a certificate of deposit (CD), you are making a time deposit.
Time deposits may pay interest at a higher rate than demand deposit accounts, such as checking or money market accounts, from which you can withdraw at any time.
But if you withdraw from a time deposit account before the term ends, you may have to pay a penalty -- sometimes as much as all the interest that has been credited to your account. Some other time deposits require you to give advance notice if you plan to withdraw money. | http://financial-dictionary.thefreedictionary.com/Large-denomination+time+deposit | CC-MAIN-2017-04 | refinedweb | 261 | 54.22 |
BreizhCTF 2019 - Hallowed be thy name

CTF URL:
Solves: ?? / Points: 300 / Category: crypto
Challenge description
We have the instructions to connect to a server and we can download its Python script.
The server offers 3 actions:
- “Enter plain, we give you the cipher”. It returns the ciphertext of the plaintext.
- “Need a flag ?”. It returns a base64-encoded string, probably encrypted, and different each time it is called, even within the same connection 🤔
- Exit
Here is the server script, by @G4N4P4T1 (thank you for this challenge 👋), and of course the flag is redacted here:
```python
import sys
import random
import base64
import socket
from threading import *

FLAG = "bzhctf{REDACTED}"

serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
init_seed = random.randint(0,65535)

class client(Thread):
    def __init__(self, socket, address):
        Thread.__init__(self)
        self.sock = socket
        self.addr = address
        self.start()

    def get_keystream(self, r, length):
        r2 = random.Random()
        seed = r.randint(0, 65535)
        r2.seed(seed)
        mask = ''
        for i in range(length):
            mask += chr(r2.randint(0, 255))
        return mask

    def xor(self, a, b):
        cipher = ''
        for i in range(len(a)):
            cipher += chr(ord(a[i]) ^ ord(b[i]))
        return base64.b64encode(cipher)

    def run(self):
        r = random.Random()
        r.seed(init_seed)
        self.sock.send(b'Welcome to the Cipherizator !\n1 : Enter plain, we give you the cipher\n2 : Need a flag ?\n3 : Exit')
        while 1:
            self.sock.send(b'\n>>> ')
            response = self.sock.recv(2).decode().strip()
            if response == "1":
                # handler not shown here: it reads the plaintext and replies with
                # self.xor(plain, self.get_keystream(r, len(plain)))
                ...
            elif response == "2":
                # handler not shown here: it replies with
                # self.xor(FLAG, self.get_keystream(r, len(FLAG)))
                ...
            elif response == "3":
                self.sock.close()
                break

if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("usage: %s port" % sys.argv[0])
        sys.exit(1)
    serversocket.bind(('0.0.0.0', int(sys.argv[1])))
    serversocket.listen(5)
    print ('server started and listening')
    while 1:
        clientsocket, address = serversocket.accept()
        print("new client : %s" % clientsocket)
        client(clientsocket, address)
```
Best solution
@Creased_ from the winning team AperiKube shared on Twitter the best solution which did not involve brute-forcing at all!
Since the seed is the same for every client, I opened a first connection to the service, sent nullbytes to get the mask, then I used another connection to get the xored flag. A simple xor(mask, mask) then gives you the flag :)
Here is his very effective script:
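The script itself is not reproduced here, but the idea is easy to sketch in Python (hypothetical toy code, not the original): XORing a string of null bytes with the keystream returns the keystream itself, since m XOR 0 = m, and the shared init_seed means a second connection reuses the same keystream.

```python
import base64

def xor_bytes(a, b):
    # byte-wise XOR of two equal-length byte strings
    return bytes(x ^ y for x, y in zip(a, b))

# the keystream the server would derive (unknown to the attacker)
mask = bytes([0x93, 0x1C, 0x55, 0xE0])

# connection 1: encrypt null bytes -> the "ciphertext" IS the keystream
leaked = xor_bytes(b"\x00" * len(mask), mask)
assert leaked == mask

# connection 2 (same global seed, hence same keystream): ask for the flag
flag = b"bzh!"
cipher = base64.b64encode(xor_bytes(flag, mask))

# xor(mask, mask-encrypted flag) recovers the flag
recovered = xor_bytes(base64.b64decode(cipher), leaked)
print(recovered)  # b'bzh!'
```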
Challenge resolution
Script analysis
The script accepts multiple clients in parallel through threads. When it starts, it generates a first seed with:
init_seed = random.randint(0,65535)
65’536 possible values: that is neither a very random nor a robust seed. This seed is global and used for all clients.
When a client connects, a new thread is started. A first Random object is created using the global seed:
r = random.Random()
r.seed(init_seed)
When the client uses action 1 or 2, the self.get_keystream() function is called to derive the XOR mask.
The get_keystream() function receives the first Random object, seeded with the global seed, and does this:
def get_keystream(self, r, length):
    r2 = random.Random()
    seed = r.randint(0, 65535)
    r2.seed(seed)
    mask = ''
    for i in range(length):
        mask += chr(r2.randint(0, 255))
    return mask
A second seed is created, based on the output of the first Random object, and it is used to seed a second Random object. Like the first one, it has only 65’536 possible values, which is weak. So the first Random object is used to seed the second… From this second object, a mask is generated with the length passed as argument.
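A Python 3 re-implementation of this derivation (toy sketch) makes the determinism concrete: two Random objects seeded identically produce identical keystreams, call after call.

```python
import random

def get_keystream(r, length):
    # same derivation as the server: r picks a 16-bit seed for r2
    r2 = random.Random()
    r2.seed(r.randint(0, 65535))
    return ''.join(chr(r2.randint(0, 255)) for _ in range(length))

# Two generators with the same seed stay in lockstep forever --
# exactly what lets an attacker replay the server's state.
a = random.Random(1337)
b = random.Random(1337)
assert get_keystream(a, 16) == get_keystream(b, 16)  # first call matches
assert get_keystream(a, 16) == get_keystream(b, 16)  # so does the second
print("keystreams match")
```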
This length corresponds to the length of the data to encrypt. Indeed, the mask is combined with the input by the xor() function, so the mask can be considered an encryption key. Here, xor() is a common function that applies the XOR operator to both inputs, character by character, and returns the result base64-encoded:
def xor(self, a, b):
    cipher = ''
    for i in range(len(a)):
        cipher += chr(ord(a[i]) ^ ord(b[i]))
    return base64.b64encode(cipher)
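Since XOR is an involution (x XOR m XOR m = x), this same function both encrypts and decrypts. A Python 3 equivalent, without the base64 step:

```python
def xor(a, b):
    # character-wise XOR of two equal-length strings
    return ''.join(chr(ord(x) ^ ord(y)) for x, y in zip(a, b))

mask = "K3y!"
plain = "test"
cipher = xor(plain, mask)
# applying the same mask twice restores the input
assert xor(cipher, mask) == plain
print("round-trip ok")
```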
Weakness
Random is a PRNG (Pseudorandom number generator) and so it has an interesting weakness: it is actually deterministic! Given a seed, it will always generate the same output sequence 😉
Combined with the fact that we can obtain the ciphertext of a plaintext of our choice, that the seeds are very small, and that the second Random object is not shared with other players (so we will not be disturbed by them), we have a very good chance of brute-forcing the seeds offline, and therefore the encryption key that allows us to decrypt the flag.
Solution
Our solution is to first send a static plaintext string to the server, then ask for the encrypted flag in the same connection (with nothing in between). This way we know that the seed for the first Random object is the same for both requests and that this object is used only twice.
# nc ctf.bzh 11000
Welcome to the Cipherizator !
1 : Enter plain, we give you the cipher
2 : Need a flag ?
3 : Exit
>>> 1
Enter plain : test
Your secret : n3xljA==
>>> 2
Your secret : UlXaKcVLuVuORY3lY3/0myvHh0FjDsoumjjOCempaoVQDRmtHSnJw1WOXb5P9I+I
Our script brute-forces the first seed by trying to encrypt our chosen plaintext and comparing the result with the obtained ciphertext, re-creating the first Random object every time to start fresh. When it matches, the state of the first Random object is the right one, the same as it was on the server, and we can then ask it to generate a second randint() for us. It will be the same as the one generated on the server to encrypt the flag we requested second, so we can generate a mask with it and decrypt the flag ciphertext we got.
There is no need to brute-force the second seed too (as we initially thought), as its value is a direct consequence of the state of the first Random object.
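A toy model of this replay (with a hypothetical 4242 standing in for init_seed): once the first Random object is re-seeded correctly, its second draw necessarily reproduces the server's second keystream.

```python
import random

def keystream(r, length):
    # mirrors the server: r picks the seed of a fresh generator r2
    r2 = random.Random(r.randint(0, 65535))
    return [r2.randint(0, 255) for _ in range(length)]

# Server side: one Random object serves both of our requests, in order.
server = random.Random(4242)
mask_plain = keystream(server, 4)    # used for our chosen plaintext
mask_flag = keystream(server, 10)    # used for the flag

# Attacker side: replaying the first object yields BOTH masks for free.
replay = random.Random(4242)
assert keystream(replay, 4) == mask_plain
assert keystream(replay, 10) == mask_flag   # no second brute-force needed
print("second mask reproduced")
```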
This is the Python script:
import random
import base64
import sys

const = "test"
const_out = "n3xljA=="
# (the flag ciphertext and the xor() helper were truncated in the original
# paste; reconstructed here from the rest of the write-up)
flag = base64.b64decode("UlXaKcVLuVuORY3lY3/0myvHh0FjDsoumjjOCempaoVQDRmtHSnJw1WOXb5P9I+I")

def xor(a, b):
    cipher = ''
    for i in range(len(a)):
        cipher += chr(ord(a[i]) ^ ord(b[i]))
    return base64.b64encode(cipher)

# brute-force seed1
for seed1 in range(0, 65535):
    print "try seed1=%d" % seed1
    rand1 = random.Random()
    rand1.seed(seed1)
    rand2 = random.Random()
    rand2.seed(rand1.randint(0, 65535))  # first call to first Random object
    # generate the mask
    mask = ''
    for i in range(len(const)):
        mask += chr(rand2.randint(0, 255))
    # apply the mask to encrypt
    ret = xor(mask, const)
    if ret == const_out:
        # we found the seed1!
        print "GOT IT"
        print "seed1=%d" % seed1
        rand2 = random.Random()
        seed2 = rand1.randint(0, 65535)  # second call to first Random object
        print "seed2=%d" % seed2
        rand2.seed(seed2)
        mask = ''
        for i in range(len(flag)):
            mask += chr(rand2.randint(0, 255))
        print base64.b64decode(xor(mask, flag))
        sys.exit(0)
And its output:
try seed1=0
GOT IT
seed1=0
seed2=49673
bzhctf{The_sands_of_time_for_me_are_running_low}
As we are lucky (or the challenge creator is nice), the first seed is 0, so the loop succeeds on its very first try and we instantly get the flag 😁
Fun fact: the challenge title “Hallowed be thy name”, is an Iron Maiden song, and the flag is a verse of the lyrics…
“Cheating” solution
The solution above is, we believe, the intended solution. However, while writing this up, we found that we could actually brute-force only the second seed. Yes, it is generated from a first random generator, but since it has only 65’536 possible values, we can brute-force it on its own 😉
Our trick here is also knowing that the flag certainly contains “breizhctf” or “bzhctf”. Without this knowledge we could still search for printable-ASCII-only candidates, but with a truly random (non-ASCII) flag this approach would not work.
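The ASCII-only fallback is easy to sketch as well (toy setup with a hypothetical known seed 777, not the real challenge data): keep only the seeds whose decryption is entirely printable, which typically leaves just a handful of candidates to eyeball.

```python
import random

def xor_bytes(a, b):
    # byte-wise XOR of two equal-length byte strings
    return bytes(x ^ y for x, y in zip(a, b))

# Toy ciphertext: a printable secret encrypted under an unknown 16-bit seed.
rng = random.Random(777)
secret = b"bzhctf{demo}"
cipher = xor_bytes(secret, bytes(rng.randint(0, 255) for _ in range(len(secret))))

# Keep only seeds whose decryption is entirely printable ASCII.
candidates = []
for seed in range(65536):
    r = random.Random(seed)
    m = bytes(r.randint(0, 255) for _ in range(len(cipher)))
    p = xor_bytes(cipher, m)
    if all(32 <= c < 127 for c in p):
        candidates.append((seed, p))

print(candidates)
```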
Python script:
import random
import base64
import sys

# (the flag ciphertext and the xor() helper were truncated in the original
# paste; reconstructed as in the previous script)
flag = base64.b64decode("UlXaKcVLuVuORY3lY3/0myvHh0FjDsoumjjOCempaoVQDRmtHSnJw1WOXb5P9I+I")

def xor(a, b):
    cipher = ''
    for i in range(len(a)):
        cipher += chr(ord(a[i]) ^ ord(b[i]))
    return base64.b64encode(cipher)

for seed2 in range(0, 65535):
    rand2 = random.Random()
    rand2.seed(seed2)
    # generate the mask
    mask = ''
    for i in range(len(flag)):
        mask += chr(rand2.randint(0, 255))
    decode = base64.b64decode(xor(mask, flag))
    # if it looks like a flag, it should be a flag ;)
    if "bzhctf" in decode or "breizhctf" in decode:
        print decode
        sys.exit(0)
It finds the flag in just a few seconds.
Author:
Clément Notin | @cnotin
Post date: 2019-04-14 | https://tipi-hack.github.io/2019/04/14/breizhctf-19-hallowed-be-thy-name.html | CC-MAIN-2019-39 | refinedweb | 1,333 | 64.2 |
ttl-hashtables
Extends hashtables so that entries added can be expired after a TTL
Module documentation for 1.4.1.0
ttl-hashtables
This library extends fast mutable hashtables so that entries added can be expired after a given TTL (time to live). This TTL can be specified as a default property of the table or on a per entry basis.
How to use this module:
Import one of the hash table modules from the hashtables package, e.g. Basic, Cuckoo, etc., and “wrap” it in a TTLHashTable:
import Data.HashTable.ST.Basic as Basic

type HashTable k v = TTLHashTable Basic.HashTable k v

foo :: IO (HashTable Int Int)
foo = do
    -- create a hash table with maximum 2 entries and a default TTL of
    -- 100 mS
    ht <- H.newWithSettings def { H.maxSize = 2, H.defaultTTL = 100 }
    runMaybeT $ do
        H.insert ht 1 1
        H.insert ht 2 2
        H.insert ht 3 3 -- will never get past this point since max size is 2
    return ht

main :: IO ()
main = do
    ht <- foo
    v0 <- H.find ht 1
    threadDelay 200000 -- wait 200mS
    v1 <- H.find ht 2
    v2 <- H.find ht 3
    putStrLn $ "V0=" ++ show v0 -- v0 should be found
    putStrLn $ "V1=" ++ show v1 -- v1 won't be found (expired)
    putStrLn $ "V2=" ++ show v2 -- v2 won't be found (never got inserted)
    return ()
You can then use the functions in this module with this hashtable type. Note that the functions in this module which can fail offer a flexible error-handling strategy by virtue of working in the context of a ‘Failable’ monad. So, for example, if a function is used directly in the IO monad and a failure occurs, it results in an exception being thrown. However, if the context supports the possibility of failure, like a MaybeT or ExceptT transformer, it instead returns something like IO Nothing or Left NotFound respectively (depending on the actual failure, of course).
None of the functions in this module are thread-safe, just as the underlying mutable hash tables in the ST monad are not. If concurrent threads need to operate on the same table, you need to provide external synchronization to guarantee exclusive access to the table.
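For readers more at home in Python, the same lazy-expiry idea can be sketched in a few lines (illustrative only, not the Haskell API): store an expiry timestamp next to each value, reject inserts when the table is full, and expire entries on lookup.

```python
import time

class TTLDict:
    """Minimal dict whose entries expire after a per-entry TTL (seconds)."""

    def __init__(self, default_ttl, max_size=None):
        self.default_ttl = default_ttl
        self.max_size = max_size
        self._data = {}                       # key -> (value, expiry time)

    def insert(self, key, value, ttl=None):
        self._purge()
        full = self.max_size is not None and len(self._data) >= self.max_size
        if full and key not in self._data:
            return False                      # table full, like maxSize above
        life = ttl if ttl is not None else self.default_ttl
        self._data[key] = (value, time.monotonic() + life)
        return True

    def find(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expiry = item
        if time.monotonic() >= expiry:        # lazily expire on lookup
            del self._data[key]
            return None
        return value

    def _purge(self):
        now = time.monotonic()
        for k in [k for k, (_, e) in self._data.items() if now >= e]:
            del self._data[k]

# usage mirroring the Haskell example above
ht = TTLDict(default_ttl=0.1, max_size=2)
ht.insert(1, 1)
ht.insert(2, 2)
ht.insert(3, 3)             # rejected: the table is full
print(ht.find(1))           # 1
time.sleep(0.2)
print(ht.find(2))           # None (expired)
print(ht.find(3))           # None (never inserted)
```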
My code seemed to deadlock when I tried to do this:
object MoreRdd extends Serializable {
def apply(i: Int) = {
val rdd2 = sc.parallelize(0 to 10)
rdd2.map(j => i*10 + j).collect
}
}
val rdd1 = sc.parallelize(0 to 10)
val y = rdd1.map(i => MoreRdd(i)).collect
y.toString()
It never reached the last line. The code seemed to deadlock somewhere, since my CPU load was quite low.
Is there a restriction against creating an RDD while another one is still active? Is it because one worker can only handle one task? How do I work around this?
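(For context: SparkContext is generally not usable inside a transformation running on workers, so creating or collecting RDDs from within another RDD's map is unsupported; the usual fix is to flatten the computation into a single RDD, e.g. via cartesian.) The intended result can be computed without nesting, sketched here with plain Python lists standing in for the RDDs:

```python
# Plain-Python stand-ins for the two RDDs in the question.
rdd1 = list(range(0, 11))
rdd2 = list(range(0, 11))

# Instead of building rdd2 inside a map over rdd1 (not allowed in Spark),
# flatten the two levels into one pass, as rdd1.cartesian(rdd2) would.
pairs = [(i, j) for i in rdd1 for j in rdd2]

# Group the results back per element of rdd1.
y = {}
for i, j in pairs:
    y.setdefault(i, []).append(i * 10 + j)

print(y[0])   # [0, 1, ..., 10]
print(y[10])  # [100, 101, ..., 110]
```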
--
View this message in context:
Sent from the Apache Spark User List mailing list archive at Nabble.com.
---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org | https://mail-archives.us.apache.org/mod_mbox/spark-user/201409.mbox/%3C1409692488132-13302.post@n3.nabble.com%3E | CC-MAIN-2021-43 | refinedweb | 135 | 71.31 |