text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Details
Description.
Activity
- All
- Work Log
- History
- Activity
- Transitions
TreeSet wraps a TreeMap, so you might as well just use a TreeMap whose keys and values are identical, or somesuch.
Now datanode's blocks is defined as TreeMap.
I am attaching the proposed patch for review. I have done brief testing and will test with manually deleting some blocks from the datanode to trigger new code.
block-prefs-2.patch. This removes another copy of block for files that are created in current instance of running Namenode. Also removes iterations when a file is added.
Extra copy still exists for files that are created at the beginning while reading fsimage.
2) Another candidate :
There are two global Maps for blocks
a) blockMap : block --> nodes that have this block.
b) activeBlock : block --> file Inode
We can have only one map :
blockMap : block --> BlockInfo{ Block, nodes, Inode, ... }
Of course this is a bigger code change.
Is TreeSet much bigger in size than a LinkedList? We have one TreeSet of nodes for each block.
I do not see how replacing TreeSet=TreeMap<Block, static final Object> by TreeMap<Block, Block> in DatanodeDescriptor
would reduce memory consumption.
Block getBlock( Block b)
looks very strange. So you need a block in order to get it ..........
> I do not see how replacing TreeSet=TreeMap<Block, static final Object> by TreeMap<Block, Block> in DatanodeDescriptor
> would reduce memory consumption.
This does not reduce memory .This will let us get hold of original object that was inserted in to the map. That fact that all nodes that have a particular block and blockMap reference the same object reduces memory. Now we have 1 Block object instead of 4 (in the case of 3 replicas). This references argument might be flawed since I am kind of new to Java.
> Block getBlock( Block b)
> looks very strange. So you need a block in order to get it ..........
. We could call it getStoredBloc() or getBlock(BlockId). We want a function that gives the Block object what was stored in the map.
block-refs-3.patch:
1) changed to getBlock(Block) to getBlock( long blockid ). Note that this version includes a 'new'.
2) Konstantin found a bug because of change in semantics of removeStoredBlock() and addStoredBlock().
previously they did not modify node's map. restored.
this also slightly modifies Block.compareTo().
Why not have TreeMap<BlockId, Block> instead of TreeMap<Block, Block>?
BlockId is long and it needs to be Long for this generic class. That implies another allocation of Long for each element in the map.
If the new report contains different block length do you update it in the stored block?
This patch does not. I was wondering about this. No where in the code do we check or enforce that lengths reported by datanodes are same. For e.g. when a file is closed all the blocks for the file use the length reported by first data node in that has that block. This patch does not change that behavior. Block length is rarely considered.
Different replicas having different lengths should be detected by check sums.
Your patch should update length imo, since before it was updated with every block report.
If an append occurs the file length should change.
Ok, so the length reported in the latest block report is the correct length. I will attach a new patch with this change.
block-refs-4.patch : adds Konstantin's suggestion above. Diff between 3 and 4 :
- block = containingNodes.first().getBlock(block.getBlockId());
+ Block storedBlock =
+ containingNodes.first().getBlock(block.getBlockId());
+ // update stored block's length.
+ if ( block != storedBlock && block.getNumBytes() > 0 ) { + storedBlock.setNumBytes( block.getNumBytes() ); + }
We now update the block length with the length reported by latest datanode.
As before this does not affect block lengths of blocks that belong to files that created during previous runs of the namenode.
please disregard 4. diff between patch 5 and 3:
- block = containingNodes.first().getBlock(block.getBlockId());
+ Block storedBlock =
+ containingNodes.first().getBlock(block.getBlockId());
+ // update stored block's length.
+ if ( block.getNumBytes() > 0 ) { + storedBlock.setNumBytes( block.getNumBytes() ); + block = storedBlock; + }
Currently blockMap maps a block to a TreeSet of DatanodeDescriptors. I would suggest that we use ArrayList in order to reduce the use of memory. Most of the time the set size is 3 because the default replication factor of a file is 3. So in term of speed, there is no benifit using TreeSet. However in term of memory TreeSet is way more expensive than ArrayList. An entry in TreeSet is at least 6 times as expensive as an entry in ArrayList and approximately we have 3*total#_of_blocks of such entries in FSNamesystem.
+1
I was thinking of the same.
How much do think the memory difference between a LinkedList entry and ArrayList entry?
A LinkedList entry contains a pointer to the previous entry and a pointer to the next entry. So it is more expensive entry-wise. One problem with ArrayList is that it creates an array of size 10 by default. But because most of time the set size is 3, we could set the intial array size to be 3.
The attached patch seems to save in the order of 1MB with around 3000 blocks (9000 total blocks/3). Without the patch Jconsole show around 5.3-5.57 MB of heap in use just after GC. With the patch it is around 4.4-4.8 MB. Calculation would be more accurate on a larger cluster. I will try.
Modified containingNodes to be ArrayList instead of sortedSet. On a lightly loaded 500 node cluster (each node has 500-600 blocks), with the patch memory (in MB) was low to mid 40s after GC and with out the patch memory was in mid 50s. I will submit a new patch.
The latest patch changes Datanode container associated with each block to ArrayList instead of a SortedSet. ArrayList's initial size is set to number of replications for the file.
+1, because applied and successfully tested against trunk revision r495045.
I just committed this. Thanks, Raghu!
This was causing problems and has been reverted in
HADOOP-898.
This is pretty weird. Both NPE and SmallBlock test failures in
HADOOP-898 are caused by the same problem : node.getBlock(blockId) returns null sometimes. But I verified that node.blocks contains this block earlier and right after this failure. Any ideas?
blocks map in DatanodeDescrptor is changed like this :
- private volatile Collection<Block> blocks = new TreeSet<Block>();
+ private volatile SortedMap<Block, Block> blocks = new TreeMap<Block, Block>();
and getBlock(long blockid) is defined as :{ return blocks.get( new Block(blockId, 0) ); }
The bug is in the following patch. Very costly oversight: The does not affect equals() but affects Block.comparedTo().
public int compareTo(Object o) {
- Block b = (Block) o;
- if (getBlockId() < b.getBlockId()) { - return -1; - }
else if (getBlockId() == b.getBlockId()){ - return 0; - }
else{ - return 1; - }
+ long diff = getBlockId() - ((Block)o).getBlockId();
+ return ( diff < 0 ) ? -1 : ( ( diff > 0 ) ? 1 : 0 );
}
e.g: 'diff' wont be < 0 when blocks ids are LONG_MAX and -10.
changing this to following fixes it.
+ Block b = (Block)o;
+ return ( blkid < b.blkid ) ? -1 :
+ ( ( blkid > b.blkid ) ? 1 : 0 );
Now TestSmallBlocks patch does not fail.
> e.g: 'diff' wont be < 0 when blocks ids are LONG_MAX and -10.
I meant 'wont be > 0'.
On a small cluster with 3035 blocks verified blocks references work as expected using netbeans profiler:
Number of Block objects :
before the patch : 5*3035 ( 3 replicas, 1 in blockMap, 1 in Inode/File )
after the patch : 2 * 3035 ( in blockMap and inode/File).
If the blocks were created after Namenode is started, one in inode will share the object with blockMap. When name nodes starts up, it initially creates all the blocks in Indode while reading the image file.. does not seem easy to share that reference.
TreeMap.Entry objects also reduced from 20k to around 11.5 k. Due to changing containingNodes to ArrayList instead of TreeMap.
TreeMap.Entry and Block used to take max memory after byte[] and char[] from the profiler. Now Blocks has gone down in the list.
I will submit the patch today. We could wait till current trunk is more stable to check it in.
Attaching patch for review. changes between 803.patch and 803-2.patch are minor.
Attaching the same patch. did not grant ASF license by mistake last time.
Another relatively simpler change :
each Inode allocates a TreeMap for chidren. Each TreeMap takes around 40 bytes (from profiler, not sure if it includes gc overhead). Since most nodes in FS don't have any children, we can postpone allocating TreeMap until it is needed. — (a)
INode.children does not strictly need to be a TreeMap. Each TreeMap entry seems to be around 30 bytes. I am not planning to include this change in this bug, but that would be another 30 bytes per node.
Few more thoughts: (these are not intended to be included in patch for this issue)
A big per file consumer of memory is INode.name. It stores full path. We can save hundred or more bytes per file if we store only the file name. Full path name can always be constructed from parent. — (b) .
Each directory has 'activeBlocks' which is a HashMap for block to INode. We already have a global blockMap (block to containingNodes). This also implies that every call to getBlock(File) results in recursing from root to the node, each of which involves a TreeMap look up in children map. I think we should have just Map : block to{ INode, self-ref, containingNodes ... }
. This will save HashMap entry (30+ bytes) and block object (20-30 bytes) for each block. It also improves getFile() by many times. This will also let us use ArrayList instead of TreeMap for INode.children (30-40 bytes per file) — (c)
803_3.patch add the following : remove allocation of a TreeMap in INode by default.
> A big per file consumer of memory is INode.name. It stores full path.
> We can save hundred or more bytes per file if we store only the file name.
> Full path name can always be constructed from parent. — (b) .
Konstantin pointed out INode.name is in fact just the file name. Since it is declared as String it still seems to be taking around 128 bytes. I will check if the size comes down if it is declared as char[]. Not sure if it can be declared byte[].
The patch is estimated to give around 25% on a large NameNode on 32bit JVM. )
Patch doesn't apply to latest trunk.
Updated patch.
Patch now applies to current trunk, assuming I resolved the conflict correctly...
Thanks Doug. The new patch looks fine.
For now I am removing the 'patch available' state until this patch gets reviewed.
> If the blocks were created after Namenode is started, one in inode will share the object with blockMap. When name nodes starts up, it initially creates all the blocks in Indode while reading the image file.. does not seem easy to share that reference.
Even after a name node starts up, a block in inode does not share the object with blockMap. So a name node contains two Block instantiations per block.
To reduce the number of block instantiations to 1, I think we can create a TreeSet of blocks which maps a block id to its block object. When a block is intially created, simply add the block to this block set. When later a reference to the block is needed, we can get it from the block set.
We can furthur remove the blockMap and activeBlock list by adding two non-persistent fields to the Block data structures. One is a reference to all its containing data nodes and one to the inode representing the file that the block belongs to.
> Even after a name node starts up, a block in inode does not share the object with blockMap.
> So a name node contains two Block instantiations per block.
In the patch, when a file is closed, we do use the reference in blockMap (because, datanode that has block would have informed namenode already).
Yes, we can get rid of one of blockMap and activeBlocks maps (not both). This also removes extra block object we have for files that exist before namenode restarts. The changes for this are a bit more intrusive, I am wondering if I should do it as part of this patch..
In the patch the type of the field "blocks" in DatanodeDescriptor is changed from TreeSet<Block> to TreeMap<Block, Block>. I think type TreeSet<Block> is the same as TreeMap<Block, Block>, but with a cleaner interface. Is there any reason that we need the change?
TreeMap<Block, Block> allows us to get the object that was inserted into the map. With TreeSet we can not get the object that was inserted. This is the base for removing extra block objects in this patch.
+1 code reviewed. Raghu, you might need to regenerate the patch.
attached 5.patch: Updated patch for current trunk.
Thanks for reviewing Hairong.
+1, because applied and successfully tested against trunk revision r507276.
I just committed this. Thanks, Raghu!
TreeSet() there does not seem to be any method to get reference of an object that exists in the set. | https://issues.apache.org/jira/browse/HADOOP-803?focusedCommentId=12466275&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-32 | refinedweb | 2,219 | 76.62 |
Opened 3 years ago
Closed 3 years ago
#19048 closed Bug (duplicate)
[Management commands] Unknown commands on python packages application
Description
We are using the python namespace package functionality for a django project.
__import__('pkg_resources').declare_namespace(__name__)
But the commands not in the project package (the one with manage.py) are not found by the manage.py commands.
Attachments (1)
Change History (4)
comment:1 Changed 3 years ago by Natim
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 3 years ago by Natim
More information here :
Changed 3 years ago by Natim
Patch to look at all pkg_resources
comment:3 Changed 3 years ago by claudep
- Resolution set to duplicate
- Status changed from new to closed
Note: See TracTickets for help on using tickets.
It seams to come from this django.core.management.find_management_module which call imp.find_module('django_project')
This command, return the first pkg_module path and not all of them. | https://code.djangoproject.com/ticket/19048 | CC-MAIN-2015-27 | refinedweb | 157 | 50.57 |
Hi wonderful Groovy team,
I am really struggling to determine a "straight forward" groovy way to amend a simple linear
script into one using some level of concurrency. I cannot find suitable examples for the
task at hand, namely:
I use derby database to collect data from email files on a server. So
1. I walk the directory tree using a mix of eachDirRecurse and eachFileMatch.
2. High level directories are names of mailboxes - so I add these to the database, first
checking that the mailbox is not already in the db.
int addUser(def Username)
{
def res = sql.firstRow ("select id from user_info where user_name
= ?", [Username])
if (!res) {
def keys = sql.executeInsert("insert into user_info
(user_name) "
+ "VALUES (?)",
[Username])
return keys[0][0] //return the auto-generated
row id number from the db
} else {
return res.id
}
}
I guess I could just insert the data - and if it errors with 'duplicate key' then I know it
already exists, but then I would still need to obtain the row ID to return back to the caller.
1. And when I find a file that is an email type, I read it line-by-line until I obtain
the required header details (date: / subject: / from: / message-id: ) or a blank line ( end
of headers). I add these details to the database (again, checking that the item is not already
there).
So I currently use a single SQL connection and a simple loop over the directory and files
- its simple and works well. But as there are several million files, I really need to use
multiple threads.
I read that a "DataSource" is a way to pool Database connections. I just cant see how this
works - does it just dynamically create connections on demand [def sql = new Sql(mydatasource)],
and when the 'sql' variable is garbage collected, the connection is returned to the 'pool'?
Is each sql instance "thread safe" from each other?
And are prepared statements also in the 'pool' - so the sql statements are not parsed every
time regardless of the connection used?
As for concurrency...
I have previously used, in a basic sense, threads. And then I looked at GPARS, which seems
to be the appropriate way to go. So how might the 'eachDirRecurse' and 'eachFileMatch' be
altered to a GPARs "withPool" collection loop? How should each loop call the sql routines
so they are thread safe - presumably by creating sql connection from datasource (pool) >
do sql > done. The withPool will create upto cpu-count + 1 - but should I use more with
this type of process logic? I assume that I could use the "withPool" within another "withPool",
so that I can process [the pool count] some mailboxes in concurrently and also the files within
each mailbox in parallel.
Is there some metric that determines how effective concurrent disk actions (just reading in
this case) can be - e.g. so I could determine a sensible limit on the number of [email] files
being read at the same time. What monitoring method would help?
I don't think I need to use "actors" here, nor the "dataflow" feature.
Even after reading Groovy in Action (2ed), it is still not really clear how to proceed. I
have googled a lot, but still cannot map my ideas into a GPARS solution. So I thought I should
ask the experts - the groovy community - for some suggestions or appropriate reading material.
The nearest I have found to a useful template on this topic is
But I just cannot see how or why the db connection pool interacts with the GPARs so that the
same connection is not grabbed by each concurrent process.
Yours, hopefully,
Merlin Beedell | http://mail-archives.eu.apache.org/mod_mbox/groovy-users/202001.mbox/%3CLOYP123MB2894F5400A53992CD0543AE8B90E0@LOYP123MB2894.GBRP123.PROD.OUTLOOK.COM%3E | CC-MAIN-2020-34 | refinedweb | 611 | 70.63 |
Subject: Re: [boost] [config/multiprecision/units/general] Do we have a policy for user-defined-literals?
From: Vicente J. Botet Escriba (vicente.botet_at_[hidden])
Date: 2013-04-27 11:00:42
Le 27/04/13 14:29, John Maddock a écrit :
> Folks,
>
> I've been experimenting with the new C++11 feature
> "user-defined-literals", and they're pretty impressive ;-)
>
> So far I've been able to write code such as:
>
> auto i = 0x1234567890abcdef1234567890abcdef_cppi;
>
> and generate value i as a signed 128-bit integer. It's particulary
> remarkable because:
>
> * It actually works ;-)
> * The value is generated as a constexpr - all evaluation and
> initialization of the multiprecision integer is done at compile time.
> * The actual meta-code is remarkably short and concise once you've
> figured out how on earth to get started!
>
> Note however my code is limited to hexadecimal constants, because it
> can't do compile time radix conversion - that would require a huge
> meta-program for each constant :-(
>
> This is obviously useful to the multiprecision library, and I'm sure
> for Units as well, but that leaves a couple of questions:
>
> 1) We have no config macro for this new feature (I've only tested with
> GCC, and suspect Clang is the only other compiler with support at
> present). What should it be called? Would everyone be happy with
> BOOST_NO_CXX11_USER_DEFINED_LITERALS ?
This is fine. clang uses __has_feature(cxx_user_literals)
> 2) How should libraries handle these user defined suffixes? The
> essencial problem is that they have to be in current scope at point of
> use, you can never explicitly qualify them. So I suggest we use:
>
> namespace boost{ namespace mylib{ namespace literals{
>
> mytype operator "" _mysuffix(args...);
>
> }}}
>
> Then users can import the whole namespace easily into current scope
> right at point of use:
>
> int main()
> {
> using namespace boost::mylib::literals;
> boost::mylib::mytype t = 1234_mysuffix;
> }
>.
> 3) How should the suffixes be named? There is an obvious possibility
> for clashes here - for example the units lib would probably want to
> use _s for seconds, but no doubt other users might use it for strings
> and such like. We could insist that all such names added to a boost
> lib are suitably mangled, so "_bu_s" for boost.units.seconds, but I'm
> not convinced by that. Seems to make the feature much less useful?
>
>
I agree with Steven. We should choose the better suffixes for the
specific domain independently of other suffixes on other libraries.
Best,
Vicente
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2013/04/202978.php | CC-MAIN-2020-40 | refinedweb | 425 | 55.95 |
When writing unit tests in Django using django.test.TestCase the database will be flushed after each test. If you're not using an in-memory database this will create a lot of overhead writing to disk.
PostgreSQL is a popular database used in many Django projects but its default behaviour is to write data to disk which makes it much slower then running SQLite in memory. The default SQLite3 driver will run in memory but it won't be able to test PostgreSQL-specific aspects of your models (such as foreign key integrity on by default when loading in fixtures). MySQL supports an in-memory backend but among other things it does not support foriegn keys, blob / text columns or transactions. Using a RAM drive as a storage area for the test database is a good way to test all applicable aspects of PostgreSQL and get the performance of an in-memory database.
Setup a RAM Drive
The following commands were all run on Ubuntu Server 15.10.
To start, I'll create a 512MB RAM drive.
➫ sudo mkdir -p /media/pg_ram_drive ➫ sudo mount -t tmpfs -o size=512M tmpfs /media/pg_ram_drive/
I'll confirm I can see the drive mentioned among the mounting points on my system.
➫ mount | grep pg_ram_drive tmpfs on /media/pg_ram_drive type tmpfs (rw,relatime,size=524288k)
Benchmark comparison between an SSD & the RAM drive
The following benchmarks the RAM drive to see what sort of write performance it offers over the SSD drive on my system.
➫ dd if=/dev/zero \ of=/tmp/benchmark \ conv=fdatasync \ bs=4k \ count=100000 \ && rm -f /tmp/benchmark
100000+0 records in 100000+0 records out 409600000 bytes (410 MB) copied, 0.801915 s, 511 MB/s
➫ dd if=/dev/zero \ of=/media/pg_ram_drive/benchmark \ conv=fdatasync \ bs=4k \ count=100000 \ && rm -f /media/pg_ram_drive/benchmark
100000+0 records in 100000+0 records out 409600000 bytes (410 MB) copied, 0.129084 s, 3.2 GB/s
The SSD drive managed to write at 511 MB/s while the RAM drive was 6.4x faster at 3.2 GB/s.
Install PostgreSQL and setup the RAM Drive Tablespace
The following will install the PostgreSQL 9.4 packages we need:
➫ sudo apt-get update ➫ sudo apt-get install postgresql-server-dev-9.4 \ postgresql-client-9.4 \ postgresql-contrib-9.4 \ libpq-dev
I'll then make sure PostgreSQL's user has ownership over the RAM drive, create the regular django project's database (which will sit on my SSD drive), create the table space for the RAM drive and create a user account for Django to use which will have permissions to create and drop databases.
➫ sudo chown -R postgres /media/pg_ram_drive/ ➫ sudo -u postgres psql
postgres=# CREATE DATABASE django; postgres=# CREATE TABLESPACE ram_disk LOCATION '/media/pg_ram_drive'; postgres=# CREATE USER django WITH SUPERUSER PASSWORD 'django';
There should be a folder on the RAM drive now with a PG_ prefix that looks something like the following:
➫ sudo find /media/pg_ram_drive /media/pg_ram_drive /media/pg_ram_drive/PG_9.4_201409291 # This folder should be empty
Testing a Django project on the RAM drive
I'll install all the packages needed to test an example Django project:
➫ sudo apt-get install python-virtualenv python-pip python-dev git-core
In a previous blog post I created a project where a Django model is tested, I'll run these tests on the RAM drive.
➫ git clone ➫ virtualenv venv ➫ source venv/bin/activate ➫ pip install -r meetup-testing/requirements.txt ➫ pip install psycopg2 ➫ cd meetup-testing
This code base has a convention that settings that need to be overridden are done so in a base/local_settings.py file which is not kept in the git repo. In this file I'll set both the regular and test database settings.
The most important setting is the DEFAULT_TABLESPACE attribute which should be the name of the RAM disk tablespace that was created in PostgreSQL.
➫ vi base/local_settings.py
import sys DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'HOST': 'localhost', 'PORT': 5432, 'NAME': 'django', 'USER': 'django', 'PASSWORD': 'django', 'TEST': { 'NAME': 'django_test', }, }, } if 'test' in sys.argv: DEFAULT_TABLESPACE = 'ram_disk' SECRET_KEY = 'a' * 21
Now when the tests run they're using the RAM drive.
➫ python manage.py test Creating test database for alias 'default'... . ---------------------------------------------------------------------- Ran 1 test in 0.027s OK Destroying test database for alias 'default'...
This code base only has one test but on projects with a lot of tests there should be a significant decrease in the amount of time it takes to run the test suite.
Make the RAM drive permanent
To make sure the RAM drive is available if the system restarts add the following line to your /etc/fstab file:
tmpfs /media/pg_ram_drive tmpfs defaults,noatime,mode=1777 0 0
After a reboot the drive will be mounted and empty. When you run Django's test runner it'll create a test database from scratch on the drive again.
➫ sudo reboot ... ➫ python manage.py test Creating test database for alias 'default'... . ---------------------------------------------------------------------- Ran 1 test in 0.059s OK Destroying test database for alias 'default'... | https://tech.marksblogg.com/test-django-on-ram-drive.html | CC-MAIN-2019-09 | refinedweb | 847 | 60.14 |
by Michael S. Kaplan, published on 2006/10/22 03:02 -04:00, original URI:
Lots of people have pointed out both before and after I did in When a user sets something. please assume they meant it how unfortunate it is that so many different applications and processes turn on ClearType whether the user wants it on or not.
Well, just as I previously talked about how the design of IsNLSDefinedString was actually a a bit of an exercise in social engineering, the unfortunate fact is the documentation about ClearType and its pointers to the fdwQuality parameter in the CreateFont function and the lfQuality member of the LOGFONT structure make it really easy to decide what you want the behavior to be:
Looking at wingdi.h, the values behind these constants are:
#define DEFAULT_QUALITY 0
#define DRAFT_QUALITY 1
#define PROOF_QUALITY 2
#if(WINVER >= 0x0400)
#define NONANTIALIASED_QUALITY 3
#define ANTIALIASED_QUALITY 4
#endif /* WINVER >= 0x0400 */
#if (_WIN32_WINNT >= _WIN32_WINNT_WINXP)
#define CLEARTYPE_QUALITY 5
#define CLEARTYPE_NATURAL_QUALITY 6
#endif
Yes, there is that extra CLEARTYPE_NATURAL_QUALITY constant which is defined in the header file and documented all over creation except on MSDN right now. Good luck trying to determine what it means. :-)
The method that anyone who wants to ignore the user setting will use is to jut pass a different value in the fonts that are created....
And what about managed code?
Well, the truth is that GDI+ does not define any of this stuff in its font creation, as code like this shows:
using System;
using System.Drawing;
using System.Runtime.InteropServices;
namespace FontStuff {
class test {
private unsafe QUALITY lfQuality;
public byte lfPitchAndFamily;
public fixed char lfFaceName[32];
};
private enum QUALITY : byte {
DEFAULT_QUALITY = 0,
DRAFT_QUALITY = 1,
PROOF_QUALITY = 2,
NONANTIALIASED_QUALITY = 3,
ANTIALIASED_QUALITY = 4,
CLEARTYPE_QUALITY = 5,
CLEARTYPE_NATURAL_QUALITY = 6
}
[DllImport("gdi32.dll", CharSet=CharSet.Unicode, ExactSpelling=true, CallingConvention=CallingConvention.StdCall)]
private static extern int GetObjectW(IntPtr hgdiobj, int cbBuffer, out LOGFONT lpvObject);
[DllImport("gdi32.dll", CharSet=CharSet.Unicode, ExactSpelling=true, CallingConvention=CallingConvention.StdCall)]
private static extern IntPtr CreateFontIndirectW(ref LOGFONT lplf);
[DllImport("gdi32.dll", ExactSpelling=true, CallingConvention=CallingConvention.StdCall)]
private static extern bool DeleteObject(IntPtr hObject);
[STAThread]
static void Main(string[] args) {
if(args.Length != 1) {
Console.WriteLine("You have to give a font name.");
} else {
Font fnt = new Font(args[0], 9);
IntPtr hFont = fnt.ToHfont();
LOGFONT lf = new LOGFONT();
if(GetObjectW(hFont, Marshal.SizeOf(lf), out lf) != 0) {
Console.WriteLine("The GDI+ lfQuality of '{0}' is {1}.", args[0], lf.lfQuality);
lf.lfQuality = QUALITY.CLEARTYPE_QUALITY;
IntPtr hFont2 = CreateFontIndirectW(ref lf);
if(hFont2 != IntPtr.Zero) {
Font fnt2 = Font.FromHfont(hFont2);
IntPtr hFont3 = fnt.ToHfont();
LOGFONT lf2 = new LOGFONT();
if(GetObjectW(hFont3, Marshal.SizeOf(lf2), out lf2) != 0) {
Console.WriteLine("The modified GDI+ lfQuality of '{0}' is {1}.", args[0], lf2.lfQuality);
}
}
DeleteObject(hFont2);
}
}
}
}
}
It will pretty much always return DEFAULT_QUALITY for any font created from GDI+ even if it is created with a ClearType-enabled font on the Win32 side. And the same thing will happen with any font that heads through GDI+. This makes it much harder to actually translate ClearType usage across the managed/unmanaged boundary of GDI/GDI+.
To get ClearType support within GDI+, as the MSDN help topic Antialiasing with Text indicates, you have to set the Graphics.TextRenderingHint property to one of the members of the TextRenderingHint enumeration.
The docs are not entirely clear on the point though, because even though they say things like
The quality ranges from text (fastest performance, but lowest quality) to antialiased text (better quality, but slower performance) to ClearType text (best quality on an LCD display).
the actual values behind the enumeration are not given, and the members are listed alphabetically rather than by any kind of range order. The actual ordered list, values 0 to 5, would be:
Now this list does not seem to be in any kind of range order either, for what its worth. :-)
Though by making it a setting to make for each Graphics object rather than each font, it is perhaps a bit less common to actually change the value from what the system wants. It is a subtle difference, but in the end you are hopefully much more likely to see (for example) WinForms applications follow the system settings. Thank goodness for small favors.
Now Avalon (WPF) does this an entirely different way, and I will have to talk about that another day....
This post brought to you by ⎚ (U+239a, a.k.a. CLEAR SCREEN SYMBOL)
# Robert on 23 Oct 2006 2:05 AM:
CLEARTYPE_NATURAL_QUALITY: My guess was that text output functions would use native ClearType metrics rather than adjust glyphs to fit the metrics of ANTIALIASED_QUALITY/NONANTIALIASED_QUALITY. For example, 8pt Tahoma 's' on 96dpi is one pixel wider with CLEARTYPE_NATURAL_QUALITY enabled. But I may be entirely wrong...
# Mike Dimmick on 23 Oct 2006 5:37 AM:
I had an experience with Vista's scaling options on my new laptop over the weekend. The laptop has a 15.4" panel in 16:10 at 1680x1050 pixels, which makes it about 13.06" across and therefore about 128ppi. So I used the DPI control panel to set 120dpi - and received the same nasty font shapes and bad scaling that happened in Windows XP.
I thought Vista was supposed to fix this? I went back into the DPI Settings dialog and tried custom settings, and spotted the 'Use Windows XP Scaling' checkbox, which was checked. I unchecked it.
I can see why it was checked by default. With this checkbox checked, Vista applies ClearType _before_ passing the texture to the video card to be scaled. Result, really, really blurry text. Horrible.
I went back to 96 dpi. It might be that I'm more familiar with those character shapes, but it looks a lot better.
Then I wiped Vista (build 5744) off, because I'm not going to upgrade. I really just wanted to see how well the new computer performed compared to the old one (very well, although I had to run the experience score calculator a second time because I didn't believe the desktop graphics score of 2.2 - this went up to 3.1 the second time round).
With Vista it's very much, "you had me and you lost me." Through the betas I had problems with eVC 3.0 and 4.0 (still essential for my work) but hoped that these would get a compatibility shim..
# Michael S. Kaplan on 23 Oct 2006 10:20 AM:
Hi Mike,
Peter Constable has gotten me interested in the whole "Natural DPI" thing -- where you use a custom setting and get the ruler to make the inch look exactly like an inch. It made everything look pretty good (XP scaling off, of course!)....
It's too bad you didn't leave it installed just a little bit longer, in my opinion. You know, at least long enough to be able to look at some of the cool international features? :-)
But I understand what you are saying, and I hope you end up with an opportunity to reconsider at some point, because there are some really cool things there....
# Mike Dimmick on 23 Oct 2006 6:59 PM:
I'm just trying custom scaling on XP and IE7 is a little broken! You may not be aware that they're turning off ClearType (even if ClearType is ON in the Desktop control panel, grrr) anywhere that one of the DirectX Transforms is enabled, which gives you poor aliased type in the left-hand pane and bottom bar on the microsoft.com homepage, and numerous places on msdn.microsoft.com due to the use of gradient fills. Anyway, on turning on 144dpi (150%), the formerly aliased text seems to be rendered at 96dpi then scaled up using some filtering technique, because it's extremely blurry. It's the same at 120dpi, actually!
The fact that this was not spotted in beta suggests that I'm not the only one who sticks with 96dpi regardless of the monitor's actual ppi.
The icons on the buttons in the Favorites Center are also way too small. I'm not sure if that's better or worse than being hideously scaled like the icons in the shutdown dialog.
I'd post this on Connect, but the IE7 team have just shut down their feedback site for the time being. Presumably this is to review all the bugs and suggestions that weren't implemented in IE7 final.
Another problem I have is that I'd like to be able to use my old monitor - 19" LCD at 1280x1024, approx 86ppi - as a second monitor, but I can only select one global DPI value. Whichever DPI setting I choose will be a compromise.
# Dean Harding on 25 Oct 2006 1:58 AM:
Mike: I'm not sure what you mean by the text scaling issues. If you have that "Use XP-style scaling" UN-checked, then things certainly look pretty horrible most of the time (unless you manually go in to each application an uncheck the "apply scaling" in it's compatibly dialog). But if you check the XP-style scaling, then things look pretty good, in my opinion.
Well, mostly. Bitmaps don't scale very well, unfortunately. But I would definitely agree that most Microsofties probably don't run at anything other than the default 96DPI, because there are some pretty brain dead bugs in a few Microsoft apps (and of course, Win Forms has pretty atrocious support for non-96DPI display built right in, but at least that's fixed for WPF). At list Microsoft apps are better than most other software out there, though!
Non-96DPI support is something that's pretty near-and-dear to my heart, having run my laptop at a higher DPI for a couple of years now. MOST things are getting better, but you do sometimes run into a backwards step (like when upgrading from MSN messenger, which did high DPI nicely, to Windows Live Messenger, which does not).
I also agree that not being able to set DPI separately for each screen is a bit annoying (like not being able to set ClearType separately for each display, but lets not go there :).
# Michael S. Kaplan on 25 Oct 2006 2:14 AM:
On my Latitude D820:
Everything looks about the same size on the screen as it did at 96 DPI but everything looks sharper and crisper....
# Dean Harding on 25 Oct 2006 2:57 AM:
I'm about the same, except I turn XP scaling ON. The problem with having it off is that most apps are scaled by simply scaling their bitmap which makes everything blurry. The other problem I had with it was that some dialogs which tried to pop up in the centre of the screen ended up being moved down and to the right... (plus a couple of other "minor" bugs)
Though, to be honest, I only tried it in Vista beta 2, never in any of the RC versions, so maybe it improved?
# Yuhong Bao on 28 Oct 2006 2:40 PM:
."
Maybe, but if I were forced to choose between supporting VB 6 and VS 2003 when VS 2005 is supported, I would choose VB 6. Because it is MUCH easier to upgrade from VB .NET 2003 to VB 2005 than upgrading from VB 6 to VB 2005.
# shyam on 11 Nov 2007 2:48 AM:
i want to create a keyboard usercontrol in vb 6 what can i do
# Michael S. Kaplan on 11 Nov 2007 7:16 AM:
Maybe ask over in the Suggestion Box, for starters? The question has nothing to do with this post whatsoever....
referenced by
go to newer or older post, or back to index or month or day | http://archives.miloush.net/michkap/archive/2006/10/22/854820.html | CC-MAIN-2017-22 | refinedweb | 1,958 | 62.27 |
Checking bandwidth available for FiPy possible?
Hi.
Would have posted in the "Cellular" topic, but that is unavailable.
I am using a FiPy, which is used for sending sensor data to Azure IoT Hub using LTE NB-IoT. But i am finding that establishing the connection to Azure takes a long time, sometimes up to around 10 seconds.
Due to power-saving reasons, i don't want it to stay connected all the time, but the wait for connecting to Azure is not optimal.
I would like to investigate the bandwidth that we are operating with when using the LTE NB-IoT network. Is there a method that can do this in the network library?
Is the network library available for reading?
Best Regards
RM
So my bandwidth is not really the problem here. The problem is that the umqtt / simple library is slow in use. When my FiPy is already connected to LTE, and it then tries to connect to Azure IoT Hub, establishing the connection takes around 30 seconds, whereas simply publishing is fast and not a issue at all. So i am currently looking into optimizing / speeding up the connect routine if possible in the simple.py library.
Here is the code i use to connect:
def iot_connect(self): if self.IoTConnectedFlag == False: self.password = self.generate_sas_token(self.uri, self.primary_key, self.policy_name) beforeConn = utime.ticks_us() self.username_fmt = "{}/{}/api-version=2018-06-30" self.username = self.username_fmt.format(self.hostname, self.device_id) self.client = MQTTClient(client_id=self.device_id, server=self.hostname, port=8883, user=self.username, password=self.password, keepalive=4000, ssl=True) self.client.connect() print("IoT Connected!") self.IoTConnectedFlag = True afterConn = utime.ticks_us() connDone = afterConn - beforeConn print(connDone)
the utime.ticks_us() are used to observe how long time the method takes to execute.
Whole method = 28.61 secs
Whole method minus SAS Token generation = 27.27 secs
self.client.connect() = 16.94 secs
Any ideas on how to optimize this speed is very welcome! Thanks
/RM
Found the solution! Posting here anyone who would ask the same:
print(lte.send_at_cmd('AT+CSQ')) print(lte.send_at_cmd('AT+CSQ=?'))
This will show you the signal quality, and the number of errors!
/RM | https://forum.pycom.io/topic/5827/checking-bandwidth-available-for-fipy-possible | CC-MAIN-2022-33 | refinedweb | 365 | 52.26 |
New code for OpenShift online has been pushed out to production. It contains lots of bug fixes and more great work being done behind the scenes to fully enable the new cartridge format. rhc client tools: error detection and better inline help We've also had some great updates to the CLI tools. Please be sure to update them. For most people that's a simple “gem update rhc”. These updates include better detection and reporting of errors during app creation--like if your ssh key isn't created or present. The commands also include greatly improved inline help. Social sharing buttons on the Community site And for those of you wanting to tell more people about what you've found at our website, we've brought back social sharing buttons on the content pages. Twitter, Facebook, and Google+ are ready to go, so please use them and try them out. As always, if you have any questions or comments about the release or what we're working on just ask us. We're always interested in what you're doing too, so email us at openshift@redhat.com and let us know. Who knows, we might feature your application, or give you a shoutout. Disallow user namespace changes For the time being we're disallowing user namespace changes. This means if your app is “myapp-funtimes.rhcloud.com” you can't change the namespace “funtimes” to anything else without re-creating your app. What's Next? Sign up for OpenShift Online Interested in a private Platform As a Service (PaaS)? Register for an evaluation of OpenShift Enterprise Need Help? Post your questions in the forums Follow us on Twitter Categories OpenShift Online < Back to the blog | https://www.openshift.com/blog/new-online-features-for-april-2013 | CC-MAIN-2021-21 | refinedweb | 287 | 73.98 |
It's not the same without you
Join the community to find out what other Atlassian users are discussing, debating and creating.
Hi Team,
while creating the issue through REST API, 'Team' field of JIRA Portfolio is not populating.
I am trying to update Team with Custom ID: customfield_10300.
your immediate help is highly appreciated
Thanks and Regards
Ramachandra Reddy
JIRA won't let you create a new field during an issue creation/update. Those fields have to exist in JIRA already before you can update a value for that field on that issue.
There is a way to create new custom fields via REST as per
Alternatively you can also create the custom field with that name in the JIRA GUI as explained in Adding a Custom Field. Please note that when you first create a custom field in JIRA, you will probably have to reindex JIRA in order to then be able to use that new custom field.
Hi @Steven Behnke,
please find my code as below.
RestClient client = new RestClient("");
RestRequest request = new RestRequest("issue/", Method.POST);
client.Authenticator = new HttpBasicAuthenticator(txtJIRAUN.Text, txtJIRAUP.Text);
createIssue(request, client);
private void createIssue(RestRequest request, RestClient client)
{
try
{
var issue = new Issue
{
fields =
new Fields
{
description = JiraDesc,
summary = _tfsTitle,
project = new Project { key = "ProjectName" },
issuetype = new IssueType { name = "IssueType" },
priority = new Priority { name = _priority },
customfield_10400 = _tfsID,
customfield_11201 = _custName,
customfield_11500 = _supportid,
customfield_11100 = new Customfield_11100 { value = _itemType },
customfield_10300 = "TestTeam",
}
};
request.RequestFormat = DataFormat.Json;
request.AddJsonBody(issue);
var res = client.Execute<Issue>(request);
if (res.StatusDescription == "Created")
{
_jiraKey = res.Data.key;
bg.ReportProgress(0, "Issue Created, Id: " + _jiraKey);
}
else
{
bg.ReportProgress(0, "Issue Not Created.");
}
}
catch (Exception ex)
{
throw new Exception(ex.Message);
}
}
public class Issue
{
public string id { get; set; }
public string key { get; set; }
public Fields fields { get; set; }
}
public class Fields
{
public Project project { get; set; }
public IssueType issuetype { get; set; }
public Priority priority { get; set; }
public string summary { get; set; }
public string description { get; set; }
public string customfield_10400 { get; set; }
public string customfield_11500 { get; set; }
public string customfield_11201 { get; set; }
public Customfield_11100 customfield_11100 { get; set; }
public string customfield_10300 { get; set; }
}
public class Project
{
public string id { get; set; }
public string key { get; set; }
}
public class IssueType
{
public string id { get; set; }
public string name { get; set; }
}
public class Priority
{
public string id { get; set; }
public string name { get; set; }
}
public class Customfield_11100
{
public string id { get; set; }
public string value { get; set; }
}
Okay, the error clearly states the problem.
Team with ID 'TestTeam' could not be found
Can you create an issue in JIRA with the Team field set to TestTeam and perform an HTTP GET on /rest/api/2/issue/issueKey so you know what you need to send to JIRA?
TestTeam is available in Team field in JIRA. I can select Team field as TestTeam in JIRA, It's working fine.
I am passing TestTeam as below. (as mention in my code)
customfield_10300 = "TestTeam"
anything wrong here..
Can you perform an HTTP GET on an issue with it so you can see what value is stored? If the API isn't documented you should at least take a look at the GET shape and we can hope it's the same shape to POST. | https://community.atlassian.com/t5/Jira-questions/Create-an-Issue-with-third-party-field-through-rest-api/qaq-p/632110 | CC-MAIN-2018-43 | refinedweb | 538 | 50.97 |
Use the steps below to create a basic mapping app using the Gradle build tool. The steps use the Eclipse IDE, but other IDEs have Gradle support too.
Make sure your development machine meets the System requirements for 100.2.
Create a Gradle project in Eclipse
- In Eclipse, click File > New > Project in the menu bar. In the New Project dialog, select the Gradle folder and click Gradle Project. Click Next.
- In the Project name field, enter app and click Finish.
In the Package Explorer, your new project structure should appear as follows:
- Remove the automatically generated files. Delete the Library.java file in src/main/java and the LibraryTest.java file in src/test/java.
Add SDK dependencies to your buildscript
- In the Package Explorer, double-click the build.gradle file. Replace the contents with the following:
apply plugin: 'eclipse' apply plugin: 'application' //apply the ArcGIS Java SDK Plug-in for Gradle With_3D-Add_Graphics-Surface_Placement apply plugin: 'com.esri.arcgisruntime.java' buildscript { repositories { maven { url '' } } dependencies { classpath 'com.esri.arcgisruntime:gradle-arcgis-java-plugin:1.0.0' } } arcgis.version = '100.2.1' // download javadoc eclipse.classpath.downloadJavadoc = true
- Save the file. In the Package Explorer, right-click the app project and select Gradle > Refresh Gradle Project to apply the changes to the buildscript.
The ArcGIS Java SDK Plug-in for Gradle will download the SDK's native libraries to the user's home directory.
Note:
It may take a minute to download the native libraries depending on your network speed, but this will only happen the first time you use the SDK.
Develop a JavaFX map app
- Under the project's src/main/java source directory, create the package structure com.mycompany.app. In the app package, create a new class called MyMapApp.
- Double-click the class file in the Package Explorer and replace the code with the following JavaFX application:
package com.mycompany.app; import javafx.application.Application; import javafx.scene.Scene; import javafx.scene.layout.StackPane; import javafx.stage.Stage; public class MyMapApp extends Application { @Override public void start(Stage stage) throws Exception { // create stack pane and application scene StackPane stackPane = new StackPane(); Scene scene = new Scene(stackPane); // set title, size, and add scene to stage stage.setTitle("Display Map Sample"); stage.setWidth(800); stage.setHeight(700); stage.setScene(scene); stage.show(); } /** * Opens and runs application. * * @param args arguments passed to this application */ public static void main(String[] args) { Application.launch(args); } }
- Right-click the code in the editor and choose Run As > Java Application. The app will open to show a empty window with titled "Display Map Sample".
- Following imports are for adding a basemap to a map and displaying that map to the map view.
import com.esri.arcgisruntime.mapping.ArcGISMap; import com.esri.arcgisruntime.mapping.Basemap; import com.esri.arcgisruntime.mapping.view.MapView;
- After the class definition, a private variable for map view is added so it can be dispose of when application is finished.
public class MyMapApp extends Application { private MapView mapView;
- In the start function, after the stage.show() line the following tasks:
- Creates an ArcGISMap class, which defines the content of the map.
- Adds a basemap to the ArcGISMap class showing Imagery mapping.
- Creates a MapView JavaFX visual component that's linked to the Map class.
- Adds MapView to JavaFX application using the StackPane layout.
// create a ArcGISMap with the a Basemap instance with an Imagery base layer ArcGISMap map = new ArcGISMap(Basemap.createImagery()); // set the map to be displayed in this view mapView = new MapView(); mapView.setMap(map); // add the map view to stack pane stackPane.getChildren().addAll(mapView);
- Overrides the stop method to dispose of application resources when the app closes.
/** * Stops and releases all resources used in application. */ @Override public void stop() throws Exception { if (mapView != null) { mapView.dispose(); } }
- Example of what the map should look like. You can use the mouse to zoom and pan the map. | https://developers.arcgis.com/java/latest/guide/develop-your-first-map-app-with-gradle.htm | CC-MAIN-2018-17 | refinedweb | 650 | 59.6 |
The following table provides a list of system error codes (errors 9000 to 11999)..
DNS server unable to interpret format.
DNS server failure.
DNS name does not exist.
DNS request not supported by name server.
DNS operation refused.
DNS name that ought not exist, does exist.
DNS RR set that ought not exist, does exist.
DNS RR set that ought to exist, does not exist.
DNS server not authoritative for zone.
DNS name in update or prereq is not in zone.
DNS signature failed to verify.
DNS bad key.
DNS signature validity expired.
No records found for given DNS query.
Bad DNS packet.
No DNS packet.
DNS error, check rcode.
Unsecured DNS packet.
Invalid DNS type.
Invalid IP address.
Invalid property.
Try DNS operation again later.
Record for given name and type is not unique.
DNS name does not comply with RFC specifications.
DNS name is a fully-qualified DNS name.
DNS name is dotted (multi-label).
DNS name is a single-part name.
DNS name contains an invalid character.
DNS name is entirely numeric.
The operation requested is not permitted on a DNS root server.
The record could not be created because this part of the DNS namespace has been delegated to another server.
The DNS server could not find a set of root hints.
The DNS server found root hints but they were not consistent across all adapters.
The specified value is too small for this parameter.
The specified value is too large for this parameter.
This operation is not allowed while the DNS server is loading zones in the background. Please try again later.
The operation requested is not permitted on against a DNS server running on a read-only DC.
No data is allowed to exist underneath a DNAME record.
This operation requires credentials delegation.
DNS zone does not exist.
DNS zone information not available.
Invalid operation for DNS zone.
Invalid DNS zone configuration.
DNS zone has no start of authority (SOA) record.
DNS zone has no Name Server (NS) record.
DNS zone is locked.
DNS zone creation failed.
DNS zone already exists.
DNS automatic zone already exists.
Invalid DNS zone type.
Secondary DNS zone requires master IP address.
DNS zone not secondary.
Need secondary IP address.
WINS initialization failed.
Need WINS servers.
NBTSTAT initialization call failed.
Invalid delete of start of authority (SOA)
A conditional forwarding zone already exists for that name.
This zone must be configured with one or more master DNS server IP addresses.
The operation cannot be performed because this zone is shutdown.
Primary DNS zone requires datafile.
Invalid datafile name for DNS zone.
Failed to open datafile for DNS zone.
Failed to write datafile for DNS zone.
Failure while reading datafile for DNS zone.
DNS record does not exist.
DNS record format error.
Node creation failure in DNS.
Unknown DNS record type.
DNS record timed out.
Name not in DNS zone.
CNAME loop detected.
Node is a CNAME DNS record.
A CNAME record already exists for given name.
Record only at DNS zone root.
DNS record already exists.
Secondary DNS zone data error.
Could not create DNS cache data.
Could not create pointer (PTR) record.
DNS domain was undeleted.
The directory service is unavailable.
DNS zone already exists in the directory service.
DNS server not creating or reading the boot file for the directory service integrated DNS zone.
Node is a DNAME DNS record.
A DNAME record already exists for given name.
An alias loop has been detected with either CNAME or DNAME records.
DNS AXFR (zone transfer) complete.
DNS zone transfer failed.
Added local WINS server.
Secure update call needs to continue update request.
TCP/IP network protocol not installed.
No DNS servers configured for local system.
The specified directory partition does not exist.
The specified directory partition already exists.
This DNS server is not enlisted in the specified directory partition.
This DNS server is already enlisted in the specified directory partition.
The directory partition is not available at this time. Please wait a few minutes and try again.
The application directory partition operation failed. The domain controller holding the domain naming master role is down or unable to service the request or is not running Windows Server 2003.
A blocking operation was interrupted by a call to WSACancelBlockingCall.
The file handle supplied is not valid.
An attempt was made to access a socket in a way forbidden by its access permissions.
The system detected an invalid pointer address in attempting to use a pointer argument in a call.
An invalid argument was supplied.
Too many open sockets.
A non-blocking socket operation could not be completed immediately.
A blocking operation is currently executing.
An operation was attempted on a non-blocking socket that already had an operation in progress.
An operation was attempted on something that is not a socket.
A required address was omitted from an operation on a socket.
A message sent on a datagram socket was larger than the internal message buffer or some other network limit, or the buffer used to receive a datagram into was smaller than the datagram itself.
A protocol was specified in the socket function call that does not support the semantics of the socket type requested.
An unknown, invalid, or unsupported option or level was specified in a getsockopt or setsockopt call.
The requested protocol has not been configured into the system, or no implementation for it exists.
The support for the specified socket type does not exist in this address family.
The attempted operation is not supported for the type of object referenced.
The protocol family has not been configured into the system or no implementation for it exists.
An address incompatible with the requested protocol was used.
Only one usage of each socket address (protocol/network address/port) is normally permitted.
The requested address is not valid in its context.
A socket operation encountered a dead network.
A socket operation was attempted to an unreachable network.
The connection has been broken due to keep-alive activity detecting a failure while the operation was in progress.
An established connection was aborted by the software in your host machine.
An existing connection was forcibly closed by the remote host.
An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.
A connect request was made on an already connected socket.
A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied.
A request to send or receive data was disallowed because the socket had already been shut down in that direction with a previous shutdown call.
Too many references to some kernel object.
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
No connection could be made because the target machine actively refused it.
Cannot translate name.
Name component or name was too long.
A socket operation failed because the destination host was down.
A socket operation was attempted to an unreachable host.
Cannot remove a directory that is not empty.
A Windows Sockets implementation may have a limit on the number of applications that may use it simultaneously.
Ran out of quota.
Ran out of disk quota.
File handle reference is no longer available.
Item is not available locally.
WSAStartup cannot function at this time because the underlying system it uses to provide network services is currently unavailable.
The Windows Sockets version requested is not supported.
Either the application has not called WSAStartup, or WSAStartup failed.
Returned by WSARecv or WSARecvFrom to indicate the remote party has initiated a graceful shutdown sequence.
No more results can be returned by WSALookupServiceNext.
A call to WSALookupServiceEnd was made while this call was still processing. The call has been canceled.
The procedure call table is invalid.
The requested service provider is invalid.
The requested service provider could not be loaded or initialized.
A system call has failed.
No such service is known. The service cannot be found in the specified name space.
The specified class was not found.
A database query failed because it was actively refused.
No such host is known.
This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server.
A non-recoverable error occurred during a database lookup.
The requested name is valid, but no data of the requested type was found.
At least one reserve has arrived.
At least one path has arrived.
There are no senders.
There are no receivers.
Reserve has been confirmed.
Error due to lack of resources.
Rejected for administrative reasons - bad credentials.
Unknown or conflicting style.
Problem with some part of the filterspec or providerspecific buffer in general.
Problem with some part of the flowspec.
General QOS error.
An invalid or unrecognized service type was found in the flowspec.
An invalid or inconsistent flowspec was found in the QOS structure..
No such host is known securely.
Name based IPSEC policy could not be added.
Send comments about this topic to Microsoft
Build date: 7/2/2009 | http://msdn.microsoft.com/en-us/library/ms681391(VS.85).aspx | crawl-002 | refinedweb | 1,536 | 62.75 |
How to Use MongoDB with Node.js
March 25th, 2022
What You Will Learn in This Tutorial
How to connect a Node.js app to an existing MongoDB database using Node.js. mongodb-tutorial
Next, we want to install two dependencies,
mongodb and
express:
Terminal
npm i mongodb express
The first will give us access to the Node.js driver for MongoDB Starting MongoDB
Before we dig into the code, it's important that you have MongoDB installed and accessible on your computer. If you don't already have MongoDB installed, follow the instructions for the "Community Edition" for your operating system here.
Note: for this tutorial, you only need to ensure that MongoDB is installed. You do not need to follow the instructions on starting MongoDB as a background service. If you understand what this means you're more than welcome to, but we'll cover a different way to start the server next.
Starting a MongoDB Server
Before we start the MongoDB server, we need to have a directory accessible where MongoDB can store the data it generates. From the root of the project we just created under "Getting Started" we want to create a directory
data and inside of that, we want to create another directory
db. After you're done, your directory structure should look something like this:
/mongodb-tutorial -- /data ---- /db
Once you have this, in a terminal window,
cd into the root of the project folder (
mongodb-tutorial) and run the following:
Terminal
mongod --dbpath ./data/db
After running this, you should see some logging from MongoDB which will stop after a few seconds, signifying the server is up and running. Note: this will start MongoDB on its default port
27017. Knowing that will come in handy next when we wire up the MongoDB connection in our app.
Wiring Up the MongoDB Adapter in Node.js
In order to integrate MongoDB into our app, the first—and most important thing—we need to do is set up a connection to MongoDB using their official Node.js package (known as a "driver," a term commonly used to refer to the package or library used to connect to a database via code).
/connectToMongoDB.js
import { MongoClient } from "mongodb"; const connectToMongoDB = async (uri = '', options = {}) => { if (!process.mongodb) { const mongodb = await MongoClient.connect(uri, { useNewUrlParser: true, useUnifiedTopology: true, ssl: process.env.NODE_ENV === "production", ...options, }); const db = mongodb.db('example'); process.mongodb = db; return { db, Collection: db.collection.bind(db), connection: mongodb, }; } return null; }; export default await connectToMongoDB('mongodb://localhost:27017', {});
Starting at the top of our file, the first thing we want to do is import the named export
MongoClient from the
mongodb package we installed via NPM earlier. The "named" export part is signified by the curly braces wrapping the variable name where as no curly braces would suggest a "default" export.
Next, we want to define a function that will be responsible for establishing the connection to our database. Here, we've defined an arrow function
connectToMongoDB() which takes two arguments:
uri and
options.
Here,
uri refers to the MongoDB connection string. This is a special URI that MongoDB recognizes and explains where the MongoDB driver can find a running MongoDB database to connect to. For
options, these are any special configuration options we want to pass to the driver (e.g., overrides of defaults or options not set here in the tutorial).
Inside of the function, first, we make sure that we don't have an existing
process.mongodb value. This is a convention we're introducing for this tutorial. As we'll see, our goal will be to make our MongoDB database accessible on the process object so that, if we wish, we can access our MongoDB connection globally in our app. The benefit of this will be that we can "reuse" the same connection throughout our app which reduces the overall strain on the MongoDB server.
If we don't already have a value set to
process.mongodb, next, we want to tell the driver to connect to the passed
uri along with some default options. To do that, we call to
MongoClient.connect() passing the
uri (the same one passed to our
connectToMongoDB() function as the first argument) we want to connect to as the first argument, followed by an object containing the options for that connection as the second argument.
Note: we expect this function to return a JavaScript Promise, so, we've made use of the short-hand
async/awaitpattern in JavaScript to keep our code clean. In order to make this work, we place the
asynckeyword at the start of the function inside of which we'll use the
awaitkeyword and then place the
awaitkeyword in front of the function who's Promise we want to "wait on" before evaluating the reset of our code.
To the options object we're passing as the second argument, we've passed three defaults:
useNewUrlParserwhich tells the driver to respect the newer
mongodb+srv://style of connection URL.
useUnifiedTopologywhich tells the driver to use the new, more efficient "topology" (MongoDB's internal name for the core parts of the database) which combines all of the important parts of the DB together into one piece.
sslwhich tells MongoDB whether or not it should only accept connections over a secure SSL connection. Here, set to
trueonly if the value of
process.env.NODE_ENVis
"production".
Finally, beneath these defaults, we use the JavaScript spread
options object we've passed as the second argument to
connectToMongoDB will be copied onto the options object we're passing to
MongoClient.connect(). Additionally, if you want to configure one of the three default options listed above differently, this pattern will automatically overwrite the defaults if you specify a value (e.g., if you set
useUnifiedTopology: false on your
options object, that would override the default
true version).
...operator to say "take any options passed and spread (or "copy") them onto the object we're passing here." In other words, any properties defined on the
Next, with our connection (presumably) accessible in the
mongodb variable we assigned our
await MongoClient.connect() call to, next, we create another variable
db and assign it to
mongodb.db('example') where
example is an arbitrary database name that we want to connect to on our MongoDB server (this should be replaced with the name of your own database).
We call this here as it gives us short-hand access to the MongoDB database we're connecting to which avoids us having to write out the
.db('<database>') part in every query we want to run. Next, after this, we assign that
db value to
process.mongodb (remember we hinted at this earlier). This now gives us global access to our MongoDB database throughout our entire app.
One more step: from our function, we want to return an object which gives us access to our MongoDB connection in various ways. This gives us flexibility in our code so we're not stuck with limited access to the database.
On that object, we've defined three properties:
dbwhich is the
dbvariable we just created and explained above.
Collectionwhich is a "hack," which allows us to quickly create a handle for a specific collection in our database.
connectionwhich is the raw connection we established with
MongoClient.connect().
Finally, at the bottom of our
connectToMongoDB() function, we return
null if
process.mongodb is already set.
One more thing in this file before we move on. You'll notice that at the very bottom of the file, we're adding a default export of a call to our
connectToMongoDB() function. This is intentional. This allows us to establish a connection to MongoDB automatically wherever this file is imported in our app. If we look, we're hardcoding the URI for our MongoDB database as the first argument passed to the function
mongodb://localhost:27017.
This will be passed to
connectToMongoDB() as the
uri argument and, ultimately, become the database that the driver tries to connect to. Because we used the
async keyword in front of
connectToMongoDB(), when called, it will itself return a JavaScript Promise object, so, in front of our call at the bottom of the file, we use the
await keyword again to say "wait for the connection to establish before exporting the value."
With that, our connection is all set. Next, we're going to look at some examples of putting it to use in our app.
Creating a Collection and Test Data
First, in order to demonstrate our connection, we'll need some test data to work. This is a great opportunity to see how the custom
Collection function we exported from our
/connectToMongoDB.js file works.
/books.js
import MongoDB from './connectToMongoDB.js'; const Books = MongoDB.Collection('books'); if (await Books.countDocuments() < 3) { await Books.bulkWrite([ { insertOne: { document: { title: 'The Culture We Deserve', author: 'Jacques Barzun', year: '1989', }, }, }, { insertOne: { document: { title: 'The Fabric of Reality', author: 'David Deutsch', year: '1998', }, }, }, { insertOne: { document: { title: 'The Bitcoin Standard', author: 'Saifedean Ammous', year: '2018', }, }, } ]) } export default Books;
First, at the top of our file, we've imported the default export from the
/connectToMongoDB.js file we wrote above (the result of calling
await connectToMongoDB()). In the
MongoDB variable here, we expect to have the object that we returned from our
connectToMongoDB() function.
Remember that on that object, we added a special property
Collection which gives us an easy way to connect to a MongoDB collection with less code. Here, in order to create a handle for a new collection
books, we call to
MongoDB.collection('books'). This does two things:
- Creates the
bookscollection in MongoDB if it doesn't already exist.
- Returns the collection handle for use elsewhere in our code.
By "handle" we mean a reference back to the collection. We can see this handle put to use just below this where we attempt to seed the database with some test data. Here, we say "if
Books.countDocuments() returns a number less than three, insert the following documents into that collection."
Without this, we'd have to write something like...
await process.mongodb.collection('books').countDocuments(); or MongoDB.db.collection('books').countDocuments();
Much more concise thanks to our
Collection function.
Though it's not terribly relevant to our work here, inside of the
if statement, assuming that we do not have three existing books, we call to the
.bulkWrite() method MongoDB provides as part of the driver, inserting three books for our test data.
The important part: at the bottom of our file, we take the
Books variable we stored our collection handle in and export it as the default value from our file. This will come in handy next when we read some data back from the database.
Reading Data
To finish up, now, we want to demonstrate reading data from MongoDB using the collection handle we just established with
MongoDB.Collection(). To do it, we're going to wire up a simple Express.js app with a single route
/books where we can retrieve the current list of books in our collection.
/index.js
import express from 'express'; import Books from './books.js'; const app = express(); app.get('/books', async (req, res) => { res.setHeader('Content-Type', 'application/json'); res.status(200); res.send(JSON.stringify({ books: await Books.find().toArray() }, null, 2)); }); app.listen(3000, () => { console.log('App running on localhost:3000'); });
A quick overview of the Express parts: here, we import
express from the
express package we installed earlier and then create a new instance by calling
express() as a function and storing that instance in the variable
app.
Next, at the bottom of our file, we start our Express.js server on port
3000 by calling
app.listen() and providing a callback function where we log out a message to our terminal to let us know the server is running.
The part we care about here: in the middle, we've added a call to
app.get() which defines a route in our application
/books which supports an
HTTP GET request. For that route, we've defined a handler function (pay attention to the usage of
async in front of the function, signifying that we'll use
await somewhere inside of the function) which is designed to respond with a list of our books.
To do it, we make sure to set the
Content-Type header on the
response object to
application/json, then provide an HTTP status code of
200 (meaning
ok or
success) and then finally, call to
res.send(), passing a
JSON.stringify() call, to which we're passing an object with a property
books which is assigned to the result of calling
await Books.find().toArray() which leverages the
Books handler we created in the previous step to perform a query on our books collection.
That's it! If we make sure our MongoDB database is up and running and then start up this server with
node index.js from our terminal (you will need one terminal window/tab for MongoDB and one for this Express server), we should see our books displayed if we visit
Wrapping Up
In this tutorial, we learned how to wire up a connection to a MongoDB database using the official
mongodb package. We learned how to write a wrapper function to help us establish that connection along with some convenience methods to make interacting with MongoDB easier in our code. We also learned how to create a new collection and seed it with some data, as well as how to read data back from a collection via a route in Express.js.
Get the latest free JavaScript and Node.js tutorials, course announcements, and updates from CheatCode in your inbox.
No spam. Just new tutorials, course announcements, and updates from CheatCode. | https://cheatcode.co/tutorials/how-to-use-mongodb-with-node-js | CC-MAIN-2022-21 | refinedweb | 2,299 | 62.38 |
What is JSON?
, JSON parser is used to make JSON work with other languages... Engineer"
}
Read more tutorials about JSON.
Which programming language supports... can check the complete list at
Read more tutorials about...(Thread.java:724)
Caused by: java.lang.NoClassDefFoundError: javax/json/Json
JSON Tutorials
JSON Tutorials
... tutorials:
What is JSON?
- Understand What...
In the previous section of JSON tutorials you
have seen how JSON can be used
Creating Message in JSON with JavaScript
Creating Message in JSON with JavaScript... about the JSON
in JavaScript's some basic concepts of creating a simple object... a message with JSON
in JavaScript.
In this example of creating message in JSON
JSON-RPC
JSON-RPC
JSON-RPC-Java is a dynamic JSON-RPC implementation in
Java. It allows you to transparently call server-side Java code from JavaScript
with an included lightweight JSON-RPC
JEE7 JSON: How to use JEE 7 JSON API?
JEE7 JSON: How to use JEE 7 JSON API?
In this tutorial I will explain you how you can use JEE 7 JSON API in your
program for consuming and generating the JSON data on fly. The JSON stands for
JavaScript Object Notation. The JSON array objects retrieval in javascript
("application/json");
response.getWriter().write(jsonObj.toString());
want to get above...JSON array objects retrieval in javascript I am fetching some data... box is not populating any value, perhaps i am doing something wrong in json
How to Make HTTP Requests Using Curl and Decoding JSON Responses in PHP
How to Make HTTP Requests Using Curl and Decoding JSON Responses in PHP Make HTTP Requests Using Curl and Decoding JSON Responses in PHP
... to make request to other server like payment gateways integration, fetching catalog... more JEE tutorials.
Thanks
HI,
Check more JEE tutorials.
Thanks
Minute to Win It
Minute to Win It Frnds i urgently need the code for an online game similar to cows and bulls played by two opponents. The game can be played... number may not begin with 0. Any number can be guessed in less than one minute
Getting Json data from servlet to javascript variable
Getting Json data from servlet to javascript variable How do i get json data from my servlet on to a variable in javascript n bind the data to display onto sigma grid.Has anyone Idea how to do
MISTAKES CORRECTION - Java Beginners
for your early and helpfull reply. I shall once again to the practical work
Tree structure from json whose parents can be dragged and dropped from child
Tree structure from json whose parents can be dragged and dropped from child I want to create tree structure whose parents will get adjusted... want to make child elements as parent then tree structure should look like
MISTAKES CORRECTION - Java Beginners
,
P.Ravichristy I used ur code which
import package.subpackage.* does not work
.
A.java
B.java
C.java
Below is the code block
A.java:-
package com.test...();
}
}
When i am compiling the above 3 class, i am getting an error message like... or make sure it appears in the correct subdirectory of the classpath
How can I initialize the JSONArray and JSON object with data?
How can I initialize the JSONArray and JSON object with data? How can I initialize the JSONArray and JSONObject with data
JSP Tutorials - Page2
JSP Tutorials page 2
JSP Examples
Hello World JSP Page... through the HTML code in the JSP page. You can simply
use the <form><... of handling the form through the JSP code. This section
provides JSP code which
NoughtsAndCrossesGame play button doesn't work
in the above loop but is shown separately here to make
// it clearer...NoughtsAndCrossesGame play button doesn't work /*
* To change... main(String[] args) {
// TODO code application logic here
JSP Tutorials Resource - Useful Jsp Tutorials Links and Resources
JSP Tutorials
.... By actually typing in the examples
and getting them to work, you will gain...-building tools you normally use. You then
enclose the code for the dynamic
How to Work - JSP-Servlet
to directly insert java code into jsp file, this makes the development process very... "
PHP Make Time
PHP Make Time
In PHP, mktime() funtion is used for doing date arithmetic...
minute
The number of the minute...;
To calculate next day use the following code:
<?php
<?php
Tutorials on Java
anywhere can make use of
them. Tutorials along with Video tutorials, examples...
Tutorials on Java
Tutorials on Java topics help programmers to learn... and work your way up the
ladder by becoming proficient in the language.
Following
Java Swing Tutorials
Java Swing Tutorials
Java Swing tutorials
- Here you will find many Java Swing examples with running source code.
Source code provide here are fully tested
Submit Tutorials - Submitting Tutorials at RoseIndia.net
Submit Tutorials
Submitting Tutorials at RoseIndia.net is very easy. We
welcome all members to submit their tutorials at RoseIndia.net. We are big
tutorial web site
Catching Exceptions in GUI Code - Java Tutorials
the given below code to identify the uncaught exception :
import...);
gui.show();
}
}
First we compile and run this code using javaw.exe... stay
pressed.
When you run the same code using java.exe you will get
Ajax Code Libraries and Tools
, JSON and TEXT ajax transactions. My-BIC has also been tested to work.... These properties make JSON an ideal data-interchange language.
JSON is built...
Ajax Code Libraries and Tools
HTML5 Tutorials
HTML 5 Tutorials
In this section we have listed the tutorials of HTML 5...
of HTML5 is to make browser as a application platform. Now HTML5 allows
you... of HTML5 which helps the developers to make
attractive UI for their web
JSF - Java Server Faces Tutorials
JSF - Java Server Faces Tutorials
Complete Java Server Faces (JSF) Tutorial -
JSF Tutorials. JSF Tutorials at Rose India covers... JSF tutorial comes with free source code and
configuration files
why above code returning the value of squares is 82... even after returning an increnented value of squares... please help me...
why above code returning the value of squares is 82... even after returning an increnented value of squares... please help me...
public class d55 {
int squares = 81;
public static void main(String
Java HashMap - Java Tutorials
; map = new HashMap<String, Object>();
The difference between the above... implementation without any code compatibility
problem. But this is not possible... the hash code for the invoking map.
boolean isEmpty( )
Returns
Java : Servlet Tutorials
() function, which runs the
code after every 1 minute...
Java : Servlet Tutorials
... easier to use Tomcat for learning
Java Servlet. These step-by-step tutorials
Java Video Tutorials for beginners
Java Video Tutorials for beginners are being provided at Roseindia online for
free. These video tutorials are prepared by some of the best minds of Java... tutorials showcase the
practical implementation of programming, how a program
What are Interceptors in Struts 2 and how do they work?
What are Interceptors in Struts 2 and how do they work?
Understanding Struts 2.... Interceptors performs very important work during pre and post
processing of any request... of interceptors is their ability to execute code
before and after an Action is invoked
Servlet Tutorials Links
Servlet Tutorials Links
... tutorial for writing HTTP Servlets with complete source code for the example Servlets....
What
is servlet:
Servlets are modules of Java code that run
jQuery - jQuery Tutorials and examples
jQuery - jQuery Tutorials and examples
... and jQuery Tutorials on the web. Learn and master jQuery from scratch. jQuery is nice piece of code that provides very good support for ajax. jQuery can be used
Welcome to the MySQL Tutorials
MySQL Tutorial - SQL Tutorials
... the password then make new user .This lesson you learn how to
create new... popular database and make quick
easy to store or its access, and update
Ajax Tutorials
these technologies work together
- from an overview to a detailed look -- to make...
Ajax Tutorials
... computer. They might use the Internet to download updates, but the code
Exception in Java - Java Tutorials
you compile the above code, you will get an error something like...;);}
}
Amazingly the above code compile without any hassle. Now, we try to run...Commenting Erroneous Code & Unicode newline Character
In this section, you
how make excel
);
}
}
}
For the above code, you need Apache POI library.
Thanks...how make excel how make excel spreadsheet IN JAVA.
please send this code argently
Hello Friend,
Try the following code:
import
how to make multiple rectangles
.
please help
my code as of now:(but this doesn't work)
import java.awt.Color...how to make multiple rectangles I,m a beginner , m sorry...(JFrame.EXIT_ON_CLOSE);
//Set JFrame size
setSize(400,400);
//Make JFrame
How to make my first JSP page?
based application. In this tutorial you will
learn how to make your web... is simple text file with .jsp extension and it contains
HTML code along with embedded Java Code. JSP file is compiled into Servlet and
then run
C++Tutorials
benefit to download the source code for the example programs, then compile... other tutorials, such as C++: Annotations by Frank Brokken and Karel Kubat...;
The
CPlusPlus Language Tutorial
These tutorials explain the C++ language
Java Field Initialisation - Java Tutorials
;
}
}
In the above code you might me thinking that what is the necessity to
initialize the variable above. But when you compile the above code... the initialization code
is copied into all the constructor. For Example
public
What is the difference between MongoDB and MySql?
than MySql will work perfectly but if a data is complex and you want to store serialized arrays or JSON objects then MongoDB is advised.
MySql has JOIN operation, which allows it to work across multiple tables while MongoDB does
Write Tutorials and Earn Extra Cash
Write Tutorials and Earn Extra Cash
Write tutorials for our site and earn Extra Cash in your... hundred words and with correct example code ready to run. You can zip and send your
AWT Tutorials
;BODY>
<APPLET ALIGN="CENTER" CODE="AppletExample.class" width = "260" height
Java Programming Tutorials for beginners
Java Programming tutorials for beginners are made in such a way...
programming with ease.
The Java tutorials available at Roseindia are prepared b... are seeking a breakthrough in the IT
industry. Java tutorials that come under
JDBC (Java Database Connectivity) -Tutorials
JDBC (Java Database Connectivity) -Tutorials
... for the above
mentioned SQL-compliant databases. JDBC abstracts much...
make these API calls for database access directly. The JDBC API provides
Tutorials - Java Server Pages Technology
. This can make it
difficult to separate and reuse portions of the code when...
Tutorials - Java Server Pages Technology
..., and Java code, which is
secure, fast, and independent of server platforms
Chart & Graphs Tutorials in Java
with source code to make
this tutorial very user-friendly.
Introduction...
Chart & Graphs Tutorials in Java
Best collection of Graphs and Charts Tutorials
How to work with Ajax in spring
How to work with Ajax in spring give some sample code for ajax with spring (example like if i select a state from one drop down in another drop down related districts should come
Struts 1.1 Tutorials
Struts 1.1 Tutorials
This page is giving tutorials on Struts 1.1. Struts 1.1 was the earlier
version of Struts... applications.
In this section we have given the details and example code
JDK 1.4 the NullPointerException - Java Tutorials
];
System.out.println(string[1].charAt(1));
}
}
Code Description
In the above... charAt() function, the above code throw
NullPointerException.
Output... of the above code
You can Correct the above code as follows :
package simpleCoreJava
Commenting out your code - Java Tutorials
you compile the above code, you will get an error something like...;);}
}
Amazingly the above code compile without any hassle. Now, we try to run...Commenting Erroneous Code & Unicode newline Correct
In this section, you
Java Example Codes and Tutorials
Java Tutorials - Java Example Codes and Tutorials
Java is great programming... that cause common programming errors. Java source code files are
compiled...
Java programming tutorials:
Core Java
any one help me in alfresco technology - Development process
and JSON materials.please any body can u responding my questions. Hi friend,
Code to give idea bout JSON :
Array Object is =>...://
Thanks
Java Training and Tutorials, Core Java Training
and deletion of memory automatically, it
helps to make bug-free code in Java...
Java Training and Tutorials, Core Java Training
Introduction to online Java
tutorials for new java programmers.
Java is a powerful object
XML,XML Tutorials,Online XML Tutorial,XML Help Tutorials
that the parser produced by the above code will ignore comments.TransformerFactory factory... the Document BuilderFactory is used to create new DOM
parsers. Methods used in code... the File.
Xml code for the program generated
Photoshop - Photoshop Tutorials
design, I have not hard work here to make so don't worry. It is a easy...Photoshop - Photoshop Tutorials
Photoshop is one of the most...
Effect
Here you will find many Photoshop tutorials
learn jquery
learn jquery is it possible to learn myself jquery,ajax and json
Yes, you can learn these technologies by yourself. Go through the following links:
Ajax Tutorials
JSON Tutorials
JQuery Tutorials | http://www.roseindia.net/tutorialhelp/comment/88555 | CC-MAIN-2015-11 | refinedweb | 2,174 | 58.28 |
So I was sitting on my couch a few days prior to Halloween, and I got bored. So I decided to build some Halloween- themed stuff. Then I thought of an electronic candy dispenser, with an Arduino and an LCD display. So I got building. I must confess, I am not the best programmer in the entire world. I have trouble with languages that are not native (pun intended) to me, and since I don't program sketches frequently, I had some trouble with this simple device. After I finished, however, I realized just how simple it was. So follow along, and it'll be a breeze.
Step 1: Parts
Here are all the components you'll need:
- Arduino Uno
- L3293D Motor Driver Chip
- LCD 1602 Module
- DC Motor
- Green LED Bulb
- 10k Ω Potentiometer
Pushbutton Momentary Switch
12V Battery
Numerous M-M Jumper Wires
- Large Breadboard
- Small Breadboard
- Mini Breadboard
Amazon- This kit is my personal recommendation. It has everything here (barring the battery)
- 3D Printed Parts
- 1' Diameter Candy Bowl
- Large Piece of Cardboard
- Toothpick
- Rubber Band
Tools:
- 3D Printer
- Hot Glue Gun
- Knife
- Soldering Iron (If you want it permanently)
- Wire Strippers
Step 2: The Easy Stuff
Let's start with the lid of the bowl before we get into the electronics. First, cut a circular piece of cardboard (1ft Diameter) with the knife. Cut a hole from that piece. Cut out another piece out another section, in accordance with the picture above. Next print the plastic hardware (Thingiverse, Thingiverse, Thingiverse). Next, stick the toothpick through the wheel, and the toothpick through the hole in the cradle, as pictured above. Hot glue the toothpick in place. Glue the little plastic rings onto the large 3D printed square. Glue all the 3D Printed parts onto the cardboard, as shown above.
Step 3: The Circuit
Oh, what fun electronics are. Wire the circuit according to the pictures above. Also, sub the 9V battery for a 12V. Tinkercad doesn't have a 12V option. The orange LED is optional as well.
*Footnote: This took me a while to concoct, so if you could be so kind to vote for me in the Halloween Contest, or leave me a comment below.
Step 4: Finishing Up
Glue the rubber band in place, as shown above. Make it pretty. Here's how: 1-Cut a hole for the LCD and button. 2-Glue the screen and pushbutton in place. 3-Glue the cardboard lid on. Plug the USB cable into the wall. The final product should look something like the pic above. Just shove all the circuitry in the bowl, there should be enough room for the candy.
Step 5: Code
Here is a GitHub link: (Candy-Dispenser/ u92master)
This is the code in plain text:
#include <LiquidCrystal.h>
lcd(6, 8, 9, 10, 11, 12); int pin = 6; #define ENABLE 5 #define DIRA 3 #define DIRB 4
int i; void setup() { lcd.begin(16, 2); Serial.begin(9600); lcd.setCursor(1,0); lcd.print("Please Take One"); digitalWrite(ENABLE,HIGH); // enable digitalWrite(DIRA,HIGH); //open lid delay(10000); digitalWrite(DIRB, HIGH); //close lid lcd.setCursor(1,0); lcd.print("Visit GriffinC7"); lcd.setCursor(0,1); lcd.print("On Instructables"); }
void loop() { }
Step 6: Help!
Help! I need somebody!
I have no idea if I am just some goofy 13-year-old typing up a dumb 'ible at midnight, or if my projects are even decent, or anything. So please comment below if you want me to stop publishing these stupid (or not- that is what I have to decide), tell me- show no mercy!
*Footnote: Ya, I like the Beatles. Ya, I'm 13. Ya, it's weird. (She loves you "Ya, Ya Ya"- Get It?)
Discussions | https://www.instructables.com/id/Halloween-Candy-Dispenser/ | CC-MAIN-2019-13 | refinedweb | 625 | 75.2 |
RAMcache for COREBlog update
Tweak, tweak, tweak!
More hints for setting up Zope's RAMCacheManager with COREBlog and especially the entry_body dtml-method. So far my wisdom is that for "Names from the DTML namespace to use as cache keys" in the "cache" tab I need to enter "id" and "noextendlink". Without "id" all entries will get rendered as the same story, without "noextendlink" the extended, full view of the story is the same as the "lead" on the weblog page. I don't know yet how caching will behave with comments (because there aren't any so far...), but chances are that trackback and comment counts should be added to that list too.
Posted by betabug at 09:26 | Comments (0) | Trackbacks (0) | http://betabug.ch/blogs/ch-athens/48 | CC-MAIN-2015-06 | refinedweb | 124 | 79.6 |
Java SE Behind the scenes: How do lambda expressions really work in Java? Look into the bytecode to see how Java handles lambdas. by Ben Evans September 25, 2020 Download a PDF of this article What does a lambda expression look like inside Java code and inside the JVM? It is obviously some type of value, and Java permits only two sorts of values: primitive types and object references. Lambdas are obviously not primitive types, so a lambda expression must therefore be some sort of expression that returns an object reference. Let’s look at an example: public class LambdaExample { private static final String HELLO = "Hello World!"; public static void main(String[] args) throws Exception { Runnable r = () -> System.out.println(HELLO); Thread t = new Thread(r); t.start(); t.join(); } } Programmers who are familiar with inner classes might guess that the lambda is really just syntactic sugar for an anonymous implementation of Runnable. However, compiling the above class generates a single file: LambdaExample.class. There is no additional class file for the inner class. This means that lambdas are not inner classes; rather, they must be some other mechanism. In fact, decompiling the bytecode via javap -c -p reveals two things. First is the fact that the lambda body has been compiled into a private static method that appears in the main class: private static void lambda$main$0(); Code: 0: getstatic #7 // Field java/lang/System.out:Ljava/io/PrintStream; 3: ldc #9 // String Hello World! 5: invokevirtual #10 // Method java/io/PrintStream.println:(Ljava/lang/String;)V 8: return You might guess that the signature of the private body method matches that of the lambda, and indeed this is the case. A lambda such as this public class StringFunction { public static final Function<String, Integer> fn = s -> s.length(); } will produce a body method such as this, which takes a string and returns an integer, matching the signature of the interface method private static java.lang.Integer lambda$static$0(java.lang.String); Code: 0: aload_0 1: invokevirtual #2 // Method java/lang/String.length:()I 4: invokestatic #3 // Method java/lang/Integer.valueOf:(I)Ljava/lang/Integer; 7: areturn The second thing to notice about the bytecode is the form of the main method: public static void main(java.lang.String[]) throws java.lang.Exception; Code: 0: invokedynamic #2, 0 // InvokeDynamic #0:run:()Ljava/lang/Runnable; 5: astore_1 6: new #3 // class java/lang/Thread 9: dup 10: aload_1 11: invokespecial #4 // Method java/lang/Thread."<init>":(Ljava/lang/Runnable;)V 14: astore_2 15: aload_2 16: invokevirtual #5 // Method java/lang/Thread.start:()V 19: aload_2 20: invokevirtual #6 // Method java/lang/Thread.join:()V 23: return Notice that the bytecode begins with an invokedynamic call. This opcode was added to Java with version 7 (and it is the only opcode ever added to JVM bytecode). I discussed method invocation in “Real-world bytecode Handling with ASM” and in “Understanding Java method invocation with invokedynamic” which you can read as companions to this article. The most straightforward way to understand the invokedynamic call in this code is to think of it as a call to an unusual form of the factory method. The method call returns an instance of some type that implements Runnable. The exact type is not specified in the bytecode and it fundamentally does not matter. The actual type does not exist at compile time and will be created on demand at runtime. To better explain this, I’ll discuss three mechanisms that work together to produce this capability: call sites, method handles, and bootstrapping. Call sites A location in the bytecode where a method invocation instruction occurs is known as a call site. Java bytecode has traditionally had four opcodes that handle different cases of method invocation: static methods, “normal” invocation (a virtual call that may involve method overriding), interface lookup, and “special” invocation (for cases where override resolution is not required, such as superclass calls and private methods). Dynamic invocation goes much further than that by offering a mechanism through which the decision about which method is actually called is made by the programmer, on a per-call site basis. Here, invokedynamic call sites are represented as CallSite objects in the Java heap. This isn’t strange: Java has been doing similar things with the Reflection API since Java 1.1 with types such as Method and, for that matter, Class. Java has many dynamic behaviors at runtime, so there should be nothing surprising about the idea that Java is now modeling call sites as well as other runtime type information. When the invokedynamic instruction is reached, the JVM locates the corresponding call site object (or it creates one, if this call site has never been reached before). The call site object contains a method handle, which is an object that represents the method that I actually want to invoke. The call site object is a necessary level of indirection, allowing the associated invocation target (that is, the method handle) to change over time. There are three available subclasses of CallSite (which is abstract): ConstantCallSite, MutableCallSite, and VolatileCallSite. The base class has only package-private constructors, while the three subtypes have public constructors. This means that CallSite cannot be directly subclassed by user code, but it is possible to subclass the subtypes. For example, the JRuby language uses invokedynamic as part of its implementation and subclasses MutableCallSite. Note: Some invokedynamic call sites are effectively just lazily computed, and the method they target will never change after they have been executed the first time. This is a very common use case for ConstantCallSite, and this includes lambda expressions. This means that a nonconstant call site can have many different method handles as its target over the lifetime of a program. Method handles Reflection is a powerful technique for doing runtime tricks, but it has a number of design flaws (hindsight is 20/20, of course), and it is definitely showing its age now. One key problem with reflection is performance, especially since reflective calls are difficult for the just-in-time (JIT) compiler to inline. This is bad, because inlining is very important to JIT compilation in several ways, not the least of which is because it’s usually the first optimization applied and it opens the door to other techniques (such as escape analysis and dead code elimination). A second problem is that reflective calls are linked every time the call site of Method.invoke() is encountered. That means, for example, that security access checks are performed. This is very wasteful because the check will typically either succeed or fail on the first call, and if it succeeds, it will continue to do so for the life of the program. Yet, reflection does this linking over and over again. Thus, reflection incurs a lot of unnecessary cost by relinking and wasting CPU time. To solve these problems (and others), Java 7 introduced a new API, java.lang.invoke, which is often casually called method handles due to the name of the main class it introduced. A method handle (MH) is Java’s version of a type-safe function pointer. It’s a way of referring to a method that the code might want to call, similar to a Method object from Java reflection. The MH has an invoke() method that actually executes the underlying method, in just the same way as reflection. At one level, MHs are really just a more efficient reflection mechanism that’s closer to the metal; anything represented by an object from the Reflection API can be converted to an equivalent MH. For example, a reflective Method object can be converted to an MH using Lookup.unreflect(). The MHs that are created are usually a more efficient way to access the underlying methods. MHs can be adapted, via helper methods in the MethodHandles class, in a number of ways such as by composition and the partial binding of method arguments (currying). Normally, method linkage requires exact matching of type descriptors. However, the invoke() method on an MH has a special polymorphic signature that allows linkage to proceed regardless of the signature of the method being called. At runtime, the signature at the invoke() call site should look like you are calling the referenced method directly, which avoids type conversions and autoboxing costs that are typical with reflected calls. Because Java is a statically typed language, the question arises as to how much type-safety can be preserved when such a fundamentally dynamic mechanism is used. The MH API addresses this by use of a type called MethodType, which is an immutable representation of the arguments that a method takes: the signature of the method. The internal implementation of MHs was changed during the lifetime of Java 8. The new implementation is called lambda forms, and it provided a dramatic performance improvement with MHs now being better than reflection for many use cases. Bootstrapping The first time each specific invokedynamic call site is encountered in the bytecode instruction stream, the JVM doesn’t know which method it targets. In fact, there is no call site object associated with the instruction. The call site needs to be bootstrapped, and the JVM achieves this by running a bootstrap method (BSM) to generate and return a call site object. Each invokedynamic call site has a BSM associated with it, which is stored in a separate area of the class file. These methods allow user code to programmatically determine linkage at runtime. Decompiling an invokedynamic call, such as that from my original example of a Runnable, shows that it has this form: 0: invokedynamic #2, 0 And in the class file’s constant pool, notice that entry #2 is a constant of type CONSTANT_InvokeDynamic. The relevant parts of the constant pool are #2 = InvokeDynamic #0:#31 ... #31 = NameAndType #46:#47 // run:()Ljava/lang/Runnable; #46 = Utf8 run #47 = Utf8 ()Ljava/lang/Runnable; The presence of 0 in the constant is a clue. Constant pool entries are numbered from 1, so the 0 reminds you that the actual BSM is located in another part of the class file. For lambdas, the NameAndType entry takes on a special form. The name is arbitrary, but the type signature contains some useful information. The return type corresponds to the return type of the invokedynamic factory; it is the target type of the lambda expression. Also, the argument list consists of the types of elements that are being captured by the lambda. In the case of a stateless lambda, the return type will always be empty. Only a Java closure will have arguments present. A BSM takes at least three arguments and returns a CallSite. The standard arguments are of these types: MethodHandles.Lookup: A lookup object on the class in which the call site occurs String: The name mentioned in the NameAndType MethodType: The resolved type descriptor of the NameAndType Following these arguments are any additional arguments that are needed by the BSM. These are referred to as additional static arguments in the documentation. The general case of BSMs allows an extremely flexible mechanism, and non-Java language implementers use this. However, the Java language does not provide a language-level construct for producing arbitrary invokedynamic call sites. For lambda expressions, the BSM takes a special form and to fully understand how the mechanism works, I will examine it more closely. Decoding the lambda’s bootstrap method Use the -v argument to javap to see the bootstrap methods. This is necessary because the bootstrap methods live in a special part of the class file and make references back into the main constant pool. For this simple Runnable example, there is a single bootstrap method in that section: BootstrapMethods: 0: : #29 ()V #30 REF_invokeStatic LambdaExample.lambda$main$0:()V #29 ()V That is a bit hard to read, so let’s decode it. The bootstrap method for this call site is entry #28 in the constant pool. This is an entry of type MethodHandle (a constant pool type that was added to the standard in Java 7). Now let’s compare it to the case of the string function example: 0: : #28 (Ljava/lang/Object;)Ljava/lang/Object; #29 REF_invokeStatic StringFunction.lambda$static$0:(Ljava/lang/String;)Ljava/lang/Integer; #30 (Ljava/lang/String;)Ljava/lang/Integer; The method handle that will be used as the BSM is the same static method LambdaMetafactory.metafactory( ... ). The part that has changed is the method arguments. These are the additional static arguments for lambda expressions, and there are three of them. They represent the lambda’s signature and the method handle for the actual final invocation target of the lambda: the lambda body. The third static argument is the erased form of the signature. Let’s follow the code into java.lang.invoke and see how the platform uses metafactories to dynamically spin the classes that actually implement the target types for the lambda expressions. The lambda metafactories The BSM makes a call to this static method, which ultimately returns a call site object. When the invokedynamic instruction is executed, the method handle contained in the call site will return an instance of a class that implements the lambda’s target type. The source code for the metafactory method is relatively simple: public static CallSite metafactory(MethodHandles.Lookup caller, String invokedName, MethodType invokedType, MethodType samMethodType, MethodHandle implMethod, MethodType instantiatedMethodType) throws LambdaConversionException { AbstractValidatingLambdaMetafactory mf; mf = new InnerClassLambdaMetafactory(caller, invokedType, invokedName, samMethodType, implMethod, instantiatedMethodType, false, EMPTY_CLASS_ARRAY, EMPTY_MT_ARRAY); mf.validateMetafactoryArgs(); return mf.buildCallSite(); } The lookup object corresponds to the context where the invokedynamic instruction lives. In this case, that is the same class where the lambda was defined, so the lookup context will have the correct permissions to access the private method that the lambda body was compiled into. The invoked name and type are provided by the VM and are implementation details. The final three parameters are the additional static arguments from the BSM. In the current implementation, the metafactory delegates to code that uses an internal, shaded copy of the ASM bytecode libraries to spin up an inner class that implements the target type. If the lambda does not capture any parameters from its enclosing scope, the resulting object is stateless, so the implementation optimizes by precomputing a single instance—effectively making the lambda’s implementation class a singleton: jshell> Function<String, Integer> makeFn() { ...> return s -> s.length(); ...> } | created method makeFn() jshell> var f1 = makeFn(); f1 ==> $Lambda$27/0x0000000800b8f440@533ddba jshell> var f2 = makeFn(); f2 ==> $Lambda$27/0x0000000800b8f440@533ddba jshell> var f3 = makeFn(); f3 ==> $Lambda$27/0x0000000800b8f440@533ddba This is one reason why the documentation strongly discourages Java programmers from relying upon any form of identity semantics for lambdas. Conclusion This article explored the fine-grained details of exactly how the JVM implements support for lambda expressions. This is one of the more complex platform features you’ll encounter, because it is deep into language implementer territory. Along the way, I’ve discussed invokedynamic and the method handles API. These are two key techniques that are major parts of the modern JVM platform. Both of these mechanisms are seeing increased use across the ecosystem; for example, invokedynamic has been used to implement a new form of string concatenation in Java 9 and above. Understanding these features gives you key insight into the innermost workings of the platform and the modern frameworks upon which Java applications rely. Dig deeper Java 8: Lambdas, Part 1 Java 8: Lambdas, Part 2 Real-world bytecode handling with ASM The ASM bytecode framework Loop unrolling The evolving nature of Java interfaces OpenJDK Project Lambda Chapter 6. The Java Virtual Machine instruction set | https://blogs.oracle.com/javamagazine/behind-the-scenes-how-do-lambda-expressions-really-work-in-java | CC-MAIN-2021-04 | refinedweb | 2,614 | 52.7 |
We need to defacto ways to build projects by convention and configuration, as an alternative to the current knowledge builder api.
We'll borrow the OSGI idea of a "bundle" for now to indicate a zipped project.
Layout
- 0..1 bundle
- 1..n resource paths (like eclipse classpath entries)
- 1..n packages
- 1..n ProcessModule
- 1..n bpmn2
- 1..n RuleModules (collection of rules, like agenda groups, see RuleModule )
- 1..n Rules
Layout Example
- bundle (eclipse project)
- main/java
- org/domain/xxx
- file1.java
- file2.java
- file3.drl // drl files in the java space provide type declarations and functions
- file4.drl // drl files in the java space provide type declarations and functions
- main/kbase1 (kbase1 is the id name for the kbase)
- org/domain/xxx
- ProcessModule pm1
- bpmfile1.bpmn2
- bpmfile2.bpmn2
- RuleModule rm1
- rulefile1.drl
- rulefile2.drl
- main/kbase2 (kbase2 is the id name for the kbase)
- org/domain/yyy
- ProcessModule pm1
- bpmfile1.bpmn2
- bpmfile2.bpmn2
- RuleModule rm1
- rulefile1.drl
- rulefile2.drl
- main/kbase3 extenda kbase2 // we allow for "parent" kbases
- org/domain/yyy
- ProcessModule pm1
- ....
- RuleModule rm1
- ....
Conventions and Behaviours
- Each KBase will map to resource path (classpath entry) folder.
- There can only be one KBase per resource path. All sub folders for that path make up that KBase.
- There is a meta file with an entry per resource path that tracks all folders and files in that path, similar to the spring xml we have now already. This is maintained by the tooling and ensures when the paths are merged into a single bundle zip we can track which rules go in which kbase. It also provides a name for the kbase, which will allow for annotation injection.
- Meta file that defines named knowledge bases, also does the same for sessions
- A Bundle provides export information, that works with annotation driven wiring
- list of named kbases
- list of named ksessions, if the bundle defines any
- list of named channels
- File entries must follow java like conventions such that the package maps to a folder structure and rule modules and rules for that package are in that folder.
- Single classloader at the bundle level, all KBases for that bundle use the same classloader. Thus a type declaration's definition is visible to all java classes and kbase's in that bundle.
- We might possible add "scoped" type declarations, via a keyword. Which would only be visible to that KBase path, however this makes tooling a lot more complicated, so needs to be though through carefully.
- For instance if they really need isolation, why not just do it in another project bundle?
- Type Declarations are also global for the bundle and must be declared in their correct package namespace
- I'm tempted to say that type declarations should be a different resource path, same for functions. As it doesn't make sense to declare them inside of a kbase path, yet have them global to all kbases.
- A Kbase can declare that it extends another.
- This effectively merges the parent into the child for a single child kbase.
- Maven pom should be able to build a knowledge bundle
- essential for the project to fit in with existing architecture approaches
- Any file, or the entire bundle, may be synced with an external source, like guvnor, that information should be recorded and tracked as part of the bundle information. It should record the version and the url fo the external resource.
Annotations
The Bundle itself will build kbses and ksessions and allow those to be injected where needed:
@KnowledgeBase
- Will inject the kbase instance
@StatefulKnowledgeSession
- Create and return new StatefulKnowledgeSession from a known kbase
- Return a registered existnig StatefulKnowledgeSession
@StatelessKnowlegeSession
- Always creates and returns a new StatelessKnowlegeSession
@Query
- Injects a query from a registered ksession
We can also use annotations that will wire themselves into known ksessions, such as listeners or query handles:
@ReactiveQuery
- Must specify a registered StatefulKnowledgeSession and query within that session
- Will also automatically wire the listener to that ksession to receive live updates
@WorkingMemoryEventListner,AgendaEventListner,ProcessEventListener
- Must specify a registered StatefulKnowledgeSession and query within that session
- Will also automatically wire the listener to that ksession to receive live updates
Version Management and Incremental Updates
Each bundle specifies it's current version but aso provides provide change information. This information lists all added, removed and changed files from the previous version. However the bundle contains all changes sets for each version diff. This information makes it possible to statefully upgrade any existing bundle to the new one, by merging the change sets from the current deployed version to the target version.
Any updates will need to be validated first, to ensure they will succeed and retain integrity.
- Cannot remove Classes or fields used by existing rules or processes.
- Cannot change Class if instances exist for that class
- At some point we might allow for mini auto-migration scripts which can handle conversion.
JIT and CodeGeneration
It should noe be possible and optional to pre-generate all bytecode ahead of time and store it in the zip, so that it's accessible when the system needs it. This needs to be optional as those both building and running on the same machine at the same time may wish to delay JIT to spread out the cpu hit.
Version Compatability
For performance reasons we need to store the serialised package's in the Bundle, similar to how we pass around packages now. However we still do not have binary compatability across rule versions. The compiled with version should be stored, if the client does not match that, the sysem can fall back to the drl (also stored in the bundle) and compile on demand.
Over time we hope to develop an intermediary langauge that is a compromise between binary compatability across versions and performance.
Project Scoped WorkingMemory
Typically each KnowledgeBase has it's own working memory that objects must be inserted into. We can probalby have an additional project scoped working memory, any instances inserted/retracted/modified in here are available to all created kbase's ksessions. i.e. you isnert into this project working memory and that fact is inserted into all the ksession's for all the kbases, and vice versa if you do a remove. So it's more like a shared object store.
RuleModule and ProcessModule
RuleModule and ProcessModule are java classes that expose an auto wired listener model and provide some pluggable services.
In the layout the drl and bpmn2 files are shown as nested folders ot the module. It is open an open question as to whether they are physically nested in a folder or instead in the same folder with header information that specifies their parent module.
RuleModule is still being specced and very early stages, but in essence it would look something like below: please take as illustrative only, as we will determine this more thoroughly later in the RuleModuleand ProcessModule spec documents.
We'll initially provide pure pojo ones and allow for compact drl definition ones too at a later date.
It is likely we'll have definitions by both interfaces and annotations. Annotations can be ideal as they do not require dummy methods, where as interfaces need all methods ot be declared, even if not used.
@RuleModule(name == "my rule module 1") // default is the class name public class MyRuleModule1 { public void onBeforeEnter(); public void onEnter(); public void onBeforeExit(); public void onExit(); public void onMatch(Match match) {} public void onUnMatch(Match match) {} public void onCancelFire(Match match) {} public void onBeforeFired(Match match) {} public void onAfterFired(Match match) {} }
The RuleModule is assocated with the kbase and all rule matching and execution would trigger the relevant callback. Note there is no wiring for the end user, simply delcare it it works - you do not manually add this to the kbase, as you do with listeners currently. As shown in RuleModule spec, these implementations can provide extensible behavioural change to rules. Will do a similar thing for ProcessModule. | https://community.jboss.org/wiki/DroolsProject | CC-MAIN-2014-15 | refinedweb | 1,324 | 50.36 |
iPortalCallback Struct ReferenceWhen a sector is missing this callback will be called.
More...
[Crystal Space 3D Engine]
#include <iengine/portal.h>
Inheritance diagram for iPortalCallback:
Detailed DescriptionWhen a sector is missing this callback will be called.
If this callback returns false then this portal will not be traversed. Otherwise this callback has to set up the destination sector and return true. The given context will be either an instance of iRenderView, iFrustumView, or else 0.
This callback is used by:
Definition at line 129 of file portal.h.
Member Function Documentation
Traverse to the portal.
It is safe to delete this callback in this function.
The documentation for this struct was generated from the following file:
Generated for Crystal Space 1.0.2 by doxygen 1.4.7 | http://www.crystalspace3d.org/docs/online/api-1.0/structiPortalCallback.html | CC-MAIN-2016-36 | refinedweb | 128 | 60.31 |
GCC, the GNU Compiler Collection, supports a number of languages. This
port installs the C and C++ front ends as gcc42 and g++42, respectively.
WWW:
Gerald Pfeifer <gerald@FreeBSD.org>
NOTE: FreshPorts displays only required dependencies information. Optional dependencies are not covered.
No installation instructions: this port has been deleted.
The package name of this deleted port was: gcc42
gcc42
No options to configure
Number of commits found: 146 (showing only 46 on this page)
« 1 | 2
Update to the 20060826 snapshot of GCC 4.2.0. Among others, this fixes
two cases where the common (file) namespace was polluted by Java-specific
files.
Disable building libgomp on FreeBSD 4.x and early versions of FreeBSD 5.0
due to pthread-related build issues there.[1]
Reported by: kris (pointyhat) [1]
Update to the 20060819 snapshot of GCC 4.2.0.
Update to the 20060812 snapshot of GCC 4.2.0.
Setting java.home, changing default awt peer to gtk,
and using cairo backend for WITH_JAVA_AWT
Approved by: gerald
Update to the 20060805 snapshot of GCC 4.2.0.
Update to the 20060729 snapshot of GCC 4.2.0.
Update to the 20060722 snapshot of GCC 4.2.0.
No longer create ${PREFIX}/libdata/ldconfig, the issue has been addressed
in Mk/bsd.port.mk now.
Be more friendly for additional patches.
Submitted by: maho (implicitly)
Update to the 20060715 snapshot of GCC 4.2.0.
Update to the 20060708 snapshot of GCC 4.2.0.
Simplify the subdirectory we use for GCC-specific libraries and include
files from gcc/${CONFIGURE_TARGET}/${PORTVERSION} to gcc-${PORTVERSION}.
Remove the hack to set RANLIB=: now that this has been addressed upstream.
Update to the 20060701 snapshot of GCC 4.2.0.
bootstrap-lean is back, which means quite a bit less disk space used when
building this port. Also, Java comes with new applications gappletviewer42,
gjarsigner42, and gkeytool42 and a new libgcj-tools-4.2.0.jar.
Update to the 20060624 snapshot of GCC 4.2.0.
mf-runtime.h no longer pollutes public filename space, so we can
remove our workaround. Refresh files/java-patch-hier.
Update to the 20060617 snapshot of GCC 4.2.0.
Employ the new USE_LDCONFIG feature, which allows us to get rid of the
various, much more manual and error-prone hacks we needed so far.
Reviewed by: flz (for lang/gcc40)
Update to the 20060610 snapshot of GCC 4.2.0.
Add zip as a build dependency of Java (libgcj). [1]
Reported by: kris (pointyhat) [1]
Update to the 20060603 snapshot of GCC 4.2.0.
Java support is back (on i386), and all those additional libtool
files we are currently installing as part of libgcj will be gone
with next week's snapshot.
Update to the 20060527 snapshot of GCC 4.2.0.
Fix dependency of libart
Submitted by: kris
Approved by: gerald (implicitly)
Update to the 20060520 snapshot of GCC 4.2.0.
Update to the 20060513 snapshot of GCC 4.2.0.
Update to the 20060506 snapshot of GCC 4.2.0.
ia64 and sparc64 should build again now; 25865"> has
been addressed.
Avoid hard-coding the GCC release series in the cklatest target.
Chase gmp library and bump PORTREVISION.
IGNORE on ia64 and sparc64, because we know things currently cannot work
on these two.
Make sure all lang/gcc* ports I maintain can be properly used as master
ports by allowing MAINTAINER and COMMENT to be overridden.
Update to the 20060422 snapshot of GCC 4.2.0.
Update to the 20060415 snapshot of GCC 4.2.0. Adjust the explanation
on why Java still is disabled.
Update to the 20060408 snapshot of GCC 4.2.0.
Update to the 20060401 snapshot of GCC 4.2.0.
Fix handling of shared libraries via rc.d for non-default prefixes.
Update to the 20060325 snapshot of GCC 4.2.0. Two minor cleanups on the
Java side, without real functional changes.
- add RC_D_SH to keep shared libs working after reboot
Update to the 20060318 snapshot of GCC 4.2.0.
Update to the 20060311 snapshot of GCC 4.2.0.
The spamming of $PREFIX/include/ssp is now finally gone after my reports
upstream, which allows us to restrict the conflict with gcc-4.1.* to the
case where we build Java.
Convert the build-time dependency on math/mpfr to a full one, since the
Fortran frontend also needs this at run time.
Always build both shared and static libraries instead of having these as
two exclusive options defaulting to the former.
Remove bogus USE_X11 (which was not used by default nor any other port).
No longer hardcode the version number in LATEST_LINK.
Update to the 20060218 snapshot of GCC 4.2.0.
Remove USE_REINPLACE= as advised by new portlint. Also note that at
least some of the installation hierarchy problems with libgomp have
been fixed now due to my report upstream.
Update to the 20060211 snapshot of GCC 4.2.0.
Update to the 20060204 snapshot of GCC 4.2.0.
Update to the 20060128 snapshot of GCC 4.2.0.
Update to the 20060121 snapshot of GCC 4.2.0, which now includes libgomp.
Update to the 20060114 snapshot of GCC 4.2.0.
Update to the 20060107 snapshot of GCC 4.2.0.
GCC no longer installs an empty ${PREFIX}/share/classpath/, so we can
avoid my hack to remove it.
Update to the 20051231 snapshot of GCC 4.2.0.
Update to the 20051224 snapshot of GCC 4.2.0.
Improve packaging by using @dirrm include/ssp instead of speculative
removal. Remove broken removal of the info/gcc42 directory; this has
to be handled by Mk/bsd.ports.mk.
Update to the 20051215 snapshot of GCC 4.0.3.
Install the .info files of the lang/gcc40 port in a port-specific
subdirectory, and move include/mf-runtime.h into a version specific
directory. This allows us to remove the conflicts with lang/gcc33,
lang/gcc41 and lang/gcc42.
Also, convert pkg-plist to use a new substitution (%%SUFFIX%%) instead
of hardcoding the version number 40.
Install the .info files of the lang/gcc42 port in a port-specific
subdirectory, which allows us to remove the conflicts with lang/gcc33
and lang/gcc34.
Dedicated to: obrien
Update to the 20051217 snapshot of GCC 4.2.0. Due to changes in the build
systems, this will consume more diskspace to build (some 900MB on i386).
Update program names to account for GCC 4.2.
Complete the repocopy of lang/gcc41 to lang/gcc42 and update to the 20051210
snapshot of GCC 4.2.0.
Change pkg-plist to avoid hardcoding any version number.
PR: 90253
Servers and bandwidth provided byNew York Internet, SuperNews, and RootBSD
7 vulnerabilities affecting 9 ports have been reported in the past 14 days
* - modified, not new
All vulnerabilities | http://www.freshports.org/lang/gcc42/?page=2 | CC-MAIN-2013-48 | refinedweb | 1,144 | 68.06 |
What is strcmp supposed to return if one or both arguments passed to it are NULL ?
Discussion in 'C Programming' started by spib
Why use "return (null);" instead of "return null;" ?Carl, Aug 21, 2006, in forum: Java
- Replies:
- 21
- Views:
- 1,025
- Patricia Shanahan
- Aug 24, 2006
NULL argument to strcmpFred L. Kleinschmidt, Dec 15, 2004, in forum: C Programming
- Replies:
- 15
- Views:
- 1,370
- Keith Thompson
- Dec 17, 2004
strcmp() question, 4 words, two strings, equal return value.Steven, Dec 29, 2005, in forum: C Programming
- Replies:
- 9
- Views:
- 417
- Keith Thompson
- Dec 29, 2005
Null pointer (NULL array pointer is passed)aneuryzma, Jun 15, 2008, in forum: C++
- Replies:
- 3
- Views:
- 780
- Jim Langston
- Jun 16, 2008
Is Scanner's nextLine() Supposed to Return True with Unread Empty Lines?KevinSimonson, Mar 13, 2011, in forum: Java
- Replies:
- 1
- Views:
- 846
- Daniele Futtorovic
- Mar 13, 2011 | http://www.thecodingforums.com/threads/what-is-strcmp-supposed-to-return-if-one-or-both-arguments-passed-to-it-are-null.443500/ | CC-MAIN-2014-52 | refinedweb | 148 | 64.34 |
We did meet today. We determined we will continue to meet at this time until we have another meeting time. fedora-meeting: EPEL meeting - 2009-06-12 Meeting started by nirik at 21:00:02 UTC. Action Items stahnma will send to list about bug day stahnma will try to move many of the wiki meeting logs into the correct namespace dgilmore and SmootherFrOgZ will work on bodhi/koji. stahnma will try to finish preparing the EL5 push... nirik will post the meeting logs to the mailing list and ask about meeting times again. LinuxCode changed wiki to reflect new meeting time People Present (lines said): nirik (88) stahnma (48) Jeff_S (25) LinuxCode (22) dgilmore (13) rayvd (8) abadger1999 (7) mmcgrath (5) SmootherFrOgZ (5) zodbot (1) schlobinux_ (1) Minutes: Log: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: not available URL: <> | https://listman.redhat.com/archives/epel-devel-list/2009-June/003660.html | CC-MAIN-2022-33 | refinedweb | 152 | 61.87 |
class Solution { public: int countRangeSum(vector<int>& nums, int lower, int upper) { long long offset=0,subsum=0; multiset<long long> ms; for(int i=0;i<nums.size();i++){ offset-=nums[i]; ms.insert(nums[i]+offset); auto itlow=ms.lower_bound(lower+offset),itup=ms.upper_bound(upper+offset); subsum+=distance(itlow,itup); } return (int)(subsum); } };
I tried this solution too.
distance(itlow, itup) uses a for loop to calculate the distance, so the algorithm becomes O(n^2logn) in the worst case.
Thank your code, it makes me learned the function distance. At first I wanted to use operator - to calculate the distance between two iterators, however the feedback was compile error. So I decided to see the discuss, maybe I can find some similar solution which also uses multiset. Really I found it. By the way in multiset it calculate distance using operator ++, the time complexity is O(n), be careful. Good luck to you .
The correct analysis of its complexity is actually O(nlogn) + K, which K is the output of the algorithm. So this algorithm's complexity depends on its output. There are many other algorithm that falls into this category too.
Just explain the solution. The element of the set multiset is the sum of the first n elements in nums . Suppose that x is the element of the set and the next added element of the set is y. And a represents lower and b represents upper. The number of x that satisfies a<=y-x<=b is the value of function distance(). Before we do some thing, change the form a<=y-x<=b and get y-b<=x<=y-a. The soltution is that
class Solution { public: int countRangeSum(vector<int>& nums, int lower, int upper) { long long offset=0,sum=0; multiset<long long> set; for(int i=0;i<nums.size();i++){ offset+=nums[i]; set.insert(offset-nums[i]); auto up=set.upper_bound(offset-lower),low=set.lower_bound(offset-upper); sum+=distance(low,up); } return (int)sum; } };
Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect. | https://discuss.leetcode.com/topic/35351/8-line-c-with-multiset-runs-104ms-beats-30 | CC-MAIN-2017-39 | refinedweb | 354 | 51.34 |
User talk:Cubbi
examples
Hi! Welcome to the wiki. Your contributions look awesome:) Just one note: could we have only one example per page? It can quickly become a mess, especially if these examples demonstrate more or less the same feature used in the similar way.P12 14:29, 17 August 2011 (PDT)
- No problem, I felt that the return value of std::for_each deserved a special mention since so few people even know it exists, but it's true that for the most part that example was a duplicate. I don't mind it gone or anything -- I just put it together on the spot. Also, nice to know I can use range for loops here! --Cubbi 14:41, 17 August 2011 (PDT)
- Well, any feature in C++11 can be used, provided that the examples become simpler. After all, it's not the for loop that is being demonstrated, but the particular function, so there should be as little auxiliary code as possible.P12 14:53, 17 August 2011 (PDT)
- Incidentally, why do so many examples here use std::endl where '\n' is implied? It makes it look unprofessional, as if the next line is going to be system("pause") --Cubbi 19:43, 17 August 2011 (PDT)
- Did you mean that there's no reason to use std::endl as '\n' is a shorter alternative, or that there's no reason to output newline at the end of the program? The former choice actually has no strong rationale behind, just I felt that std::cout is better for the examples. As for the second, most examples would be run only in interactive mode and most probably not edited in any way, so the newline helps produce clear output. Compare:
p12@p12-laptop:~$ ./a.out abcdp12@p12-laptop:~$
And
p12@p12-laptop:~$ ./a.out abcd p12@p12-laptop:~$
P12 01:25, 18 August 2011 (PDT)
- I mean that << std::endl is premature pessimization: it is, by definition, equivalent to << '\n' << std::flush and flush is only needed before system("pause") and in some other multiprocessing/multithreading situations or when debugging a program that segfaults (in which case always-flushed cerr works better anyway). Stroustrup's "hello world" uses \n in "The C++ Programming Language". Also, recent discussion on SO and a not so recent one --Cubbi 06:53, 18 August 2011 (PDT)
Lambda / Example
Hi! Good point on the std::function usage being already promised. //
Nitpick: isn't it better to say "stored" instead of "captured" in "captured in std::function"?
Clarification: shouldn't we at least say that there's a difference between these two cases? Note that std::function in this case can introduce severe inefficiencies (regardless on whether it is common or not, this makes it a bad practice, IMHO).
Compare the results of the following benchmark:
Further references:
// Edit: if we do decide that keeping it is a good idea (I'm not convinced), I think <functional> header needs to go back in, too.
// Request for clarification: just to make sure I'm not breaking any rules, is it better to discuss issues like this here or at the page-specific discussion section? // which would be
Note I'm not trying to be pedantic, my thinking is simply that we should strive to provide "the best practices" code in the examples whenever possible (unlike that certain other C++ reference...) and it seems to be possible in this case.
Md 13:14, 1 May 2013 (PDT)
- You're right about the header of course (I tested with gcc and just pasted it as-is), and yes, discussion of language/lambda is better on the language/lambda's talk page -- that's where visitors would look for comments and opinions such as these. std::function of course has the potential for overhead, but I think the best place to expand on that is std::function's main page, in a ===Notes=== section, where implementation details and caveats would be appropriate (e.g. compare to std::shared_ptr's Notes). Wrapping a lambda in a function is not justified in the example, but it is justified in enough real-life use cases (to pass to a non-template function, to apply std::not1, etc) --Cubbi 14:13, 1 May 2013 (PDT)
"must return"?
Regarding the comment on this and other diffs. int main() does not require a return statement, Not in C (since 1999), not in C++ (ever). Stroustrup's "hello world", again, has no return statement. But if its a good idea to have all code here include that statement, Help:Manual of style needs to be updated. --Cubbi 14:20, 23 August 2011 (PDT)
- Oh great. I must have wasted a lot of return statements. If they are not mandatory, we can omit them. BTW, could you start discussions like this in my talk, so that I get a notification?P12 02:09, 24 August 2011 (PDT)
using std::less in std::lower_bound
Is this edit correct? What if the argument types to std::less are not convertible to each other, yet have a comparison operator defined? Is there's a solution to that problem?P12 14:43, 24 August 2011 (PDT)
- You're right, that less is missing the second type. In LLVM's libc++ library, the first form of std::lower_bound is defined exactly this way:
template <class _ForwardIterator, class _Tp>
inline _LIBCPP_INLINE_VISIBILITY
_ForwardIterator
lower_bound(_ForwardIterator __first, _ForwardIterator __last, const _Tp& __value)
{
return _VSTD::lower_bound(__first, __last, __value,
__less<typename iterator_traits<_ForwardIterator>::value_type, _Tp>());
} I'll edit accordingly. --Cubbi 15:15, 24 August 2011 (PDT)
- So let's keep the duplicate then. Unrelatedly, I think there's no need to have exact permutation, sort and other long algorithms outlined in the equivalent function section. They're too complex for anyone to grasp quickly, so it might be a better idea just to leave a 'black box' instead.P12 15:38, 24 August 2011 (PDT)
- Even std::random_shuffle can be hard to grasp if you're unfamiliar with Fisher-Yates-Knuth algorithm. I think implementations of the permutations are rather simple for what they do, and to me it was educational to learn how the three permutation functions are implemented (identically in both libraries I looked at, too). I agree about sorts, stable_partition, and anything that does something different "if additional memory is available" - those are complex and they are implemented differently in different libraries. If I were to cut one, I'd cut std::minmax_element, it's too complex for what it does. But there's so much work to be done on this wiki still, it feels like a minor thing anyway. --Cubbi 18:57, 24 August 2011 (PDT)
- Ok, after coming across the page of std::is_permutation I reversed my opinion. The algorithm is quite clear if there are some comments about how the code works. Maybe all we need is a very short explanation above the code for the most complex functions? Anyway, equivalent functions, like examples, reside at the very end of the page, so they aren't really important. Since probably the only people, who will come across those sections, are the ones who're looking for additional reference, lots carefully selected information will be certainly beneficial.P12 13:03, 25 August 2011 (PDT)
wide character strings
Wide character strings are not the same as multibyte character strings. There are actually three character string types:
- character / byte / narrow-character null-terminater string, e.g. ASCII-8
- multibyte null-terminated string, e.g. UTF-8
- wide / wide-character null-terminated string, e.g. UCS, UTF-16, etc.
Multibyte character string is like an 'extension' to regular byte character strings, since a multibyte character string is always a valid byte character string, so it is can be processed by all functions at cpp/string/byte, but not the functions at cpp/string/wide.
Anyways, this is a quite complex subject, it needs to be explained better in the article lead sections.P12 15:49, 2 September 2011 (PDT)
- You're right, the definition of NTMBS is not suitable for string/wide at all. An NTMBS is more like what a std::wstring_convert().to_bytes() would produce from a wchar_t[]/wstring. But yes, it would be nice to detail the differences between C++'s four character types and the data that can be stored in them. --Cubbi 16:44, 3 September 2011 (PDT)
Template:ddcl list namespace
Hi. My edits adding {{ddcl list namespace}} were a mistake, since the namespace can easily be deduced from the page title as is done e.g. std::this_thread::sleep_for. I'll revert them. Sorry for inconveniences.P12 13:34, 4 October 2011 (PDT)
{{param none}}, {{return none}}, {{throw none}}
Hi. I decided that {{param none}}, {{return none}}, {{throw none}} serve no purpose as the current style is adequate and probably won't change in the future. I'll replace these templates with the text they resulted in - (none). I hope this will make editing easier. P12 13:03, 11 October 2011 (PDT)
Re. mem=
Hi. I've fixed the inconvenience caused by the necessity to use
mem parameter in the
dcl list mem * templates. Now if a
dcl list mem * template is placed in a page which is not a sibling or parent of the target page the template links to, member of std:: is added automatically. E.g.
So the
mem= parameter is not needed anymore. P12 13:23, 19 January 2012 (PST)
- Cool, thanks. I wish i could understand that code you just changed! --Cubbi 13:27, 19 January 2012 (PST)
- The idea is not very complicated, the syntax is. The
dcl list *templates always get a path to function/class page they represent. When used, the template now checks whether the target page and the page they are used in have common ancestor. If yes, everything is as in the past. Otherwise, the template acquires and displays the name of the parent class of the target page. Here it knows that if we have
a/b/parent/member_function, then there's always
Template:a/b/parent/titleand if the title template gets
#MAGICTITLESTRING#, then it just yields in the part of the title which would be displayed in the smaller font. So for e.g.
{{cpp/container/vector/title|#MAGICTITLESTRING#}}we get std::vector which is what we want. P12 16:50, 19 January 2012 (PST)
sorry for being such a pain
I'll try to avoid making radical changes without searching first. I need to actually look more into the standard before modifying things. >.< --- Undeterminant 12:29, 11 February 2012 (PST)
- adding a function or two is nothing radical. It's a wiki. If someone thinks it can be improved, they will. --Cubbi 12:32, 11 February 2012 (PST)
operator++ with or without (int) overload in links
Hey, thanks for all the work on the iterators. I only wondered why you put the (int) overloads into the links for the ++ operators, for example
Usually the overloads are not specified in the links (e.g. constructors, operator[]).
Would you be ok, if I started changing this to the shorter version below?
Or are there some caveats that I missed?
Tobi 03:23, 3 April 2012 (PDT)
- They are not overloads, they are post-increments. Post-increment and pre-increment are fairly different (but not different enough to have separate pages).. On second thought, I suppose it does clutter the link unnecessarily. Ultimately, style decisions are up to P12, I'm just an editor. --Cubbi 05:43, 3 April 2012 (PDT)
Russian translation
Hi. Could you take a look at the Russian wiki and translate the most frequently used templates if you have time? Most of the templates are only few words or sentences, so the translation would need at most several tens of minutes. I then would import the rest of the English content and add some advertising at the Dokuwiki version too gather attention of other editors. Thanks! -- P12 09:37, 25 April 2012 (PDT)
- I started, but can't promise to be fast. --Cubbi 10:40, 26 April 2012 (PDT)
Svg conversion of the File:Streambuf.png
Hi. I've created an explanatory image for std::basic_streambuf based on your initial draft. Your image was really helpful, thanks! P12 18:02, 16 May 2013 (PDT) | http://en.cppreference.com/mwiki/index.php?title=User_talk:Cubbi&oldid=47843 | CC-MAIN-2014-42 | refinedweb | 2,053 | 62.68 |
> There might be a flaw there if we're supposed to support two distinct
> cookies that have the same name, but different paths. But
> under the current
> scheme you should be able to have cookies 'X' and 'Y'.
Only X or Y will be saved if appendRawResponse is called. I didn't
realize that was only called for forwarded pages until I read your
response but it must happen a lot. index.py forwards to the real
home page and my "Home" link goes to ""; which calls
index.py. This might explain why I had such a hard time finding
the cookie problem. Sometimes I thought it was working, then it
stopped working. I was alternating between index.py and the page
I was testing cookies on. I thought Max-Age was working when I
set Version=1 but when I tried to reproduce it, it didn't work
again.
> > I tried to save some state using cookies, but no cookie file
> > was created.
> What cookie file?
The files that IE creates to save cookies for a given domain.
> What ver of Python do you use? Also, what ver of Webware?
Python 2.0, WebWare 0.5.1 rc2
> Under Python 2.0, WebKit.Cookie should use Python's Cookie
> module instead
> of zCookieEngine (which is the predecessor of the Python version).
zCookieEngine probably isn't being called on my system, I went
there to see how it handled Expired and noticed the typo.
> >I checked into why I can't set two cookies.
> HTTPResponse._headers is a
> >dictionary and filters out duplicate headers. So, even though
> >HTTPResponse.rawResponse() creates a *list* of headers containing a
> >"Set-Cookie" header for each cookie, they get filtered out by
> >HTTPResponse.appendRawResponse() when it saves the headers to a
> >*dictionary*. It seems that dictionaries are not
> appropriate for storing
> >HTTP headers.
>
> The way I see it, appendRawResponse() seems to have the flaw
> you refer to.
> However, that method should only be used for
> forwardRequest(). Without a
> forward request, there shouldn't be a problem. Right?
Probably not but since the home page forwards requests, it happens
a lot. Is there a way to correct it?
BTW, Cold Fusion can't handle setting ANY cookies when
using the <cflocation url="..."> tag because it only sets the
redirect header and no others. So you're one step ahead of them :)
How do you feel about adding more parameters to setCookie()? I could
not use it since it doesn't allow the expiration date to be set and
I would think that most cookies will need to do that.
Regards,
Jeff
Python 2.0 Cookies.py has the same typo regarding "expires"/"Expires". I
guess my browser isn't case sensitive.
Once you send a blank line, the client should interpret everything
elseas content. If the headers included a "Location" directive, you
just told the client to stop listening to anything you say and talk to
the target of the location header.
Try the following script as a cgi:
#!/usr/bin/env python
print 'Set-Cookie: bob=slob; path=/; expires=Wednesday, 29-Dec-02 23:12:40 GMT'
print 'Location:';
print 'Set-Cookie: rob=prob; path=/; expires=Wednesday, 29-Dec-02 23:12:40 GMT'
print
#end script
Two cookies will be set, and then you will go to Yahoo. If an app
server framework can't do this, it's a limit of the framework, not of
HTTP, or of Apache (the web server I use).
On Wed, 04 Apr 2001, you wrote:
>.
One last tip, I ran into this.
If you do set a cookie, make sure you don't forward the user
to another page. The cookie settings only get set if you
actually respond or redirect the user. When you forward,
a new response object is created and your cookie change
is lost.
> Thanks Luke & Geoff for the cookies tip.
>
> So what's the best way of getting to know when a client enters any
> page for the first time "per session" within the context. I would
> prefer to initiate the long-life cookie on a place outside a
> SitePage.py alike servlet.
>
>
>.
>How do you feel about adding more parameters to setCookie()? I could
>not use it since it doesn't allow the expiration date to be set and
>I would think that most cookies will need to do that.
HTTPResponse has two methods:
def setCookie(self, name, value):
def addCookie(self, cookie):
The first is a convenience. The second lets you do anything you want, since
you can create a WebKit.Cookie and add it. I suppose we could add keyword
args to setCookie() to make it even more convenient:
def setCookie(self, name, value, **extraArgs):
I'll add that to my TO DO list. I'd like to go with "max-age" being the
"standard" name and have the underlying implementation switch to "expires"
(or send both) if necessary.
-Chuck
-Chuck | http://sourceforge.net/p/webware/mailman/webware-discuss/thread/Pine.LNX.4.40.0206141411540.11393-100000@pacman.redwoodsoft.com/ | CC-MAIN-2015-35 | refinedweb | 822 | 75 |
18.3.4 Display Serial Interface Timing Characteristics (4-line SPI system)TFT Instruction write cycles are 100ns. Read cycles 150ns.
18.3.1 Display Parallel 18/16/9/8-bit Interface Timing Characteristics (8080-Ⅰ system)Write cycle = 66ns. Read cycle is 450ns.
The library needs a license file.That's why I like the concept of "Help Yourself Software", that I try to introduce..but given the similarity of the code and and a few of the comments in the .cpp and .h files
This library has been derived from the Adafruit_GFXand
library and the associated driver library. See text
at the end of this file
// Fill a triangle - original Adafruit function works well and code footprint is smallit appears that it is based on (as in uses code from Adafruit_GFX library) but then does not depend on that library.
Ah! Missed the BSD reference, it seems that Adafruit have different license and license versions sprinkled through their various libraries.They have been and continue be sloppy with their licenses. And in some past cases they have violated licenses and copyrights by claiming a different license for their derivative work that is simply not allowed.
The NodeMCU has an ESP8266 at heart which makes a networked sketch for the library a must, so here is a starter sketch that will (after some more tweaks) find it's way into the examplesHave you seen the WifiManager? It is really cool.
@bperrybapWiFi manager is just to set up the ssid and password so you can do it with a browser vs having to re-compile the code.
Had a look at WifiManager, briefly hoped it would allow me to upload sketches remotely (OTA) and securely, but that is not the purpose...
CPU 160MHz, SPI 80MHz
Benchmark Time (microseconds)
Screen fill 81880
Text 17484
Lines 110823
Horiz/Vert Lines 8784
Rectangles (outline) 6683
Rectangles (filled) 168311
Circles (filled) 79768
Circles (outline) 65434
Triangles (outline) 25870
Triangles (filled) 100298
Rounded rects (outline) 30082
Rounded rects (filled) 212347
Done! Total = 0.907055 s
CPU 80MHz, SPI 80MHz
Benchmark Time (microseconds)
Screen fill 82850
Text 26730
Lines 162672
Horiz/Vert Lines 10251
Rectangles (outline) 8617
Rectangles (filled) 170695
Circles (filled) 121724
Circles (outline) 97799
Triangles (outline) 37727
Triangles (filled) 137253
Rounded rects (outline) 43787
Rounded rects (filled) 236197
Done! Total = 1.135728 s
CPU 160MHz, SPI 40MHz
Benchmark Time (microseconds)
Screen fill 157817
Text 20258
Lines 130615
Horiz/Vert Lines 15264
Rectangles (outline) 10667
Rectangles (filled) 323955
Circles (filled) 100700
Circles (outline) 74626
Triangles (outline) 30953
Triangles (filled) 151077
Rounded rects (outline) 36637
Rounded rects (filled) 384025
Done! Total = 1.435837 s
80MHz, 40MHz SPI
Benchmark Time (microseconds)
Screen fill 161944
Text 29219
Lines 179507
Horiz/Vert Lines 16988
Rectangles (outline) 12709
Rectangles (filled) 332815
Circles (filled) 142200
Circles (outline) 106428
Triangles (outline) 42014
Triangles (filled) 189318
Rounded rects (outline) 50262
Rounded rects (filled) 414494
Done! Total = 1.677213 s
160MHz, 20MHz SPI
Benchmark Time (microseconds)
Screen fill 312993
Text 26787
Lines 173766
Horiz/Vert Lines 28570
Rectangles (outline) 18852
Rectangles (filled) 642021
Circles (filled) 145685
Circles (outline) 98588
Triangles (outline) 41397
Triangles (filled) 255954
Rounded rects (outline) 52092
Rounded rects (filled) 735513
Done! Total = 2.531470 s
80MHz, 20MHz SPI
Benchmark Time (microseconds)
Screen fill 315007
Text 35582
Lines 221512
Horiz/Vert Lines 30092
Rectangles (outline) 20778
Rectangles (filled) 646544
Circles (filled) 186471
Circles (outline) 129292
Triangles (outline) 52337
Triangles (filled) 292740
Rounded rects (outline) 65204
Rounded rects (filled) 761167
Done! Total = 2.756191 s
SPIClass *_SPI;
//_SPI = SPIdev;
_SPI = &SPI;
#include <Fonts/GFXFF/Yellowtail_32.h>
#include <Fonts/GFXFF/gfxfont.h>
// Read the colour of a pixel at x,y and return value in 565 format
uint16_t readPixel(int32_t x0, int32_t y0);
// The next functions can be used as a pair to copy screen blocks (or horizontal/vertical lines) to another location
// Read a block of pixels to a data buffer, buffer is 16 bit and the array size must be at least w * h
void readRect(uint32_t x0, uint32_t y0, uint32_t w, uint32_t h, uint16_t *data);
// Write a block of pixels to the screen
void pushRect(uint32_t x0, uint32_t y0, uint32_t w, uint32_t h, uint16_t *data);
// This next function has been used successfully to dump the TFT screen to a PC for documentation purposes
// It reads a screen area and returns the RGB 8 bit colour values of each pixel
// Set w and h to 1 to read 1 pixel's colour. The data buffer must be at least w * h * 3 bytes
void readRectRGB(int32_t x0, int32_t y0, int32_t w, int32_t h, uint8_t *data);
I downloaded the new library and get the following error...OK, that is an SD library generated error.
I finally got it to load...Great!
I don't see where to enter my network id & pass word and I don't think
the settings I entered for weather underground are correct.
const String WUNDERGRROUND_API_KEY = "<WUNDERGROUND KEY HERE>";
//const String WUNDERGRROUND_API_KEY = "1c265fajf48s0a82"; // Example only of what the above line should look like
const String WUNDERGRROUND_LANGUAGE = "EN"; // Language EN = English
const String WUNDERGROUND_COUNTRY = "Peru"; // UK etc
const String WUNDERGROUND_CITY = "Lima"; // City, London etc
#include "Arduino.h"
class GxIO
{
public:
GxIO(){};
virtual void reset();
virtual void init();
virtual void writeCommandTransaction(uint8_t c);
virtual void writeDataTransaction(uint8_t d);
virtual void writeData16Transaction(uint16_t d);
virtual void writeData16Transaction(uint16_t d, uint32_t num);
virtual void writeCommand(uint8_t c);
virtual void writeData(uint8_t d);
virtual void writeData16(uint16_t d);
virtual void writeData16(uint16_t d, uint32_t num);
virtual void writeData2x8(uint16_t d);
virtual void startTransaction();
virtual void endTransaction();
virtual void setBackLight(bool lit);
protected:
int8_t _cs, _rs, _rst, _wr, _rd, _bl; // Control lines
};
#if defined(ARDUINO_ARCH_SAM)
class GxIO_TikyOnDue : public GxIO
{
public:
GxIO_TikyOnD);
};
class GxIO_HVGAOnDue : public GxIO
{
public:
GxIO_HVGAOnD);
};
#endif
Thanks, you have given me "food for thought" and that is helpful, I will think on this further...Thank you for looking at my design idea.
const String WUNDERGROUND_COUNTRY = "FL US";
const String WUNDERGROUND_CITY = "Boca Raton";
const String WUNDERGRROUND_LANGUAGE = "EN";
const String WUNDERGROUND_COUNTRY = "US";
const String WUNDERGROUND_CITY = "FL/Boca_Raton";
Hi Bill,Great. One area that would be nice to get updated in the ReadMe is the Software Requirements/Libraries.
Issue reporting has been switched on, the project is still a WIP so some bugs are expected and the ReadMe is not very complete.
it seems that the Windows based Arduino IDE that I use (and the one used by original author - Daniel Eichhorn) is tolerant of file name letter case errors.It isn't the IDE, it is Windows. While NT is fully capable of handling the filename characters properly, Microsoft chose to keep the case insensitivity as a default in filenames that they inherited all the way back from MS DOS which came from CPM.
One area that would be nice to get updated in the ReadMe is the Software Requirements/Libraries.Yes. I'm busy at the moment but will update the ReadMe soon and probably add links to all the libraries required in the main sketch header.
There are a few more than what is listed.
Hi Stan,Hi,
You will have to wait and see if the DST time is automatically applied on the correct day for your time zone :-)
It would be nice if it could run on a larger display like a 3.6 or 3.9 display.Hi Stan,
#ifndef _GxIO_H_
#define _GxIO_H_
#include <Arduino.h>
#include <SPI.h>
class GxIO
{
public:
GxIO() {};
const char* name = "GxIO";
virtual void reset();
virtual void init();
virtual uint8_t transferTransaction(uint8_t d);
virtual uint16_t transfer16Transaction(uint16_t d);
virtual uint8_t readDataTransaction()
{
return 0;
};
virtual uint16_t readData16Transaction()
{
return 0;
};
virtual void writeCommandTransaction(uint8_t c);
virtual void writeDataTransaction(uint8_t d);
virtual void writeData16Transaction(uint16_t d, uint32_t num = 1);
virtual void writeCommand(uint8_t c);
virtual void writeData(uint8_t d);
virtual void writeData(uint8_t* d, uint32_t num);
virtual void writeData16(uint16_t d, uint32_t num = 1);
virtual void writeAddrMSBfirst(uint16_t d);
virtual void startTransaction();
virtual void endTransaction();
virtual void setBackLight(bool lit);
};
#if defined(__AVR) || defined(ESP8266)
class GxIO_SPI : public GxIO
{
public:
GxIO_SPI(SPIClass& spi, int8_t cs, int8_t dc, int8_t rst = -1, int8_t bl = -1);
const char* name = "GxIO_SP;
int8_t _cs, _dc, _rst, _bl; // Control lines
};
class GxIO_SPI3W : public GxIO
{
public:
GxIO_SPI3W(SPIClass& spi, int8_t cs, int8_t dc, int8_t rst = -1, int8_t bl = -1,
// defaults are for RA8875
uint8_t cmd_read = 0xC0, uint8_t data_read = 0x40, uint8_t cmd_write = 0x80, uint8_t data_write = 0x00);
const char* name = "GxIO_SPI3;
uint8_t _cmd_read, _data_read, _cmd_write, _data_write;
int8_t _cs, _dc, _rst, _bl; // Control lines
};
// GxCTRL.h
#ifndef _GxCTRL_H_
#define _GxCTRL_H_
#include "GxIO.h"
class GxCTRL
{
public:
GxCTRL(GxIO& io) : IO(io) {};
const char* name = "GxCTRL";
virtual void init();
virtual void setWindow(uint16_t x0, uint16_t y0, uint16_t x1, uint16_t y1);
virtual void setRotation(uint8_t r);
virtual void invertDisplay(boolean i) {IO.writeCommandTransaction(i ? 0x21 : 0x20);};
protected:
GxIO& IO;
};
TFT_eSPI library test!
Benchmark Time (microseconds)
Screen fill 625771
Text 37269
Lines 502658
Horiz/Vert Lines 58450
Rectangles (outline) 32978
Rectangles (filled) 1511987
Circles (filled) 359877
Circles (outline) 290419
Triangles (outline) 104101
Triangles (filled) 599591
Rounded rects (outline) 132635
Rounded rects (filled) 1726473
Done!
TFT_eSPI library test!
Benchmark Time (microseconds)
Screen fill 312986
Text 24360
Lines 150932
Horiz/Vert Lines 28403
Rectangles (outline) 18591
Rectangles (filled) 641954
Circles (filled) 137072
Circles (outline) 88468
Triangles (outline) 36436
Triangles (filled) 251335
Rounded rects (outline) 47381
Rounded rects (filled) 734838
Done!
Benchmark Time (microseconds)
Screen fill 115291
Text 28714
Lines 393270
Horiz/Vert Lines 11581
Rectangles (outline) 8164
Rectangles (filled) 279258
Circles (filled) 186516
Circles (outline) 240076
Triangles (outline) 78149
Triangles (filled) 167982
Rounded rects (outline) 94215
Rounded rects (filled) 348198
Done!
Total = 1.9496
#define,Try the attached version. It is not perfect (some text position issues) but it should work OK. It is for a 480x320 screen.?It should run just as fast as only one screen needs to be handled at a time..The image attached appears to be corrupted so is not much use.
Try the attached version. It is not perfect (some text position issues) but it should work OK. It is for a 480x320 screen.Hi Bodmer,
I think it's related to the touch screen since testing with only touch examples fails.I had not noticed Daniel's update to use the touch screen, so that is interesting.
(XPT2046 touch screen according to the PCB)
#include <XPT2046_Touchscreen.h>
#include <SPI.h>
// These are the pins for all ESP8266 boards
#define PIN_D0 16
#define PIN_D1 5
#define PIN_D2 4
#define PIN_D3 0
#define PIN_D4 2
#define PIN_D5 14 // SCLK
#define PIN_D6 12 // MISO
#define PIN_D7 13 // MOSI
#define PIN_D8 15
#define PIN_D9 3
#define PIN_D10 1
#define CS_PIN PIN_D1 // XPT2046 chip select
#define TFT_CS PIN_D8 // TFT chip select
XPT2046_Touchscreen ts(CS_PIN);
void setup() {
Serial.begin(38400);
digitalWrite(TFT_CS, 1); // Disable TFT for test only
ts.begin();
while (!Serial && (millis() <= 1000));
}
void loopB() {
TS_Point p = ts.getPoint();
Serial.print("Pressure = ");
Serial.print(p.z);
if (ts.touched()) {
Serial.print(", x = ");
Serial.print(p.x);
Serial.print(", y = ");
Serial.print(p.y);
}
Serial.println();
// delay(100);
delay(30);
}
void loop() {
if (ts.touched()) {
TS_Point p = ts.getPoint();
Serial.print("Pressure = ");
Serial.print(p.z);
Serial.print(", x = ");
Serial.print(p.x);
Serial.print(", y = ");
Serial.print(p.y);
delay(30);
Serial.println();
}
}
Yes! Touch is working!OK, good to hear it is working. I had a look a Daniel's code and it is a "work in progress" as there the touch response code is incomplete and commented out.
Now, with your Planespotter when pressed, I see a menu and it's no longer crashing.
And now it also know more about coding Arduino sketches so it wasn't a waste of time
It seems no commands are defined so I going to dive into the world of buttons and touch and commands.
Let's see if I can merge PS and WS together.
if ((pt.x>=420) && (pt.x<=480) && (pt.y>=0) && (pt.y<=40)) {
if ((pt.x>=420) && (pt.x<=480) && (pt.y>=0) && (pt.y<=40)) {
if (currentPage == '0'){
// Page 0 PanAndZoom, page 1 MainMenu, Page 2 manual update, based on presstime
tft.fillScreen(TFT_BLACK);
Serial.println("Zoom IN pressed") ;
currentZoomlevel = geoMap.getCurrentZoomlevel(); //added in geomap.h
for (int i = currentZoomlevel; i++;){
delay(500);
geoMap.downloadMap(mapCenter, i, _downloadCallback);
geoMap.convertToCoordinates({0,0});
geoMap.convertToCoordinates({MAP_WIDTH, MAP_HEIGHT});
tft.fillRect(0, geoMap.getMapHeight(), tft.width(), tft.height() - geoMap.getMapHeight(), TFT_BLACK);
Does that mean that we cannot use something like this or wouldn't that only be inaccurate?If you run the code in post #122 you will see in the Serial Monitor window that the coordinates are in raw ADC values. These could be used but are likely to be in the range 0 - 4095. Also the coordinate origin of the touch screen may not be in the same place as the origin of the screen.
sorrry I am new to this....I am using a st7735 display and a nodemcu esp32You need to use numeric numbers for the I/O pins with an ESP32, for example use:
thanks so much it finally worked !!!It looks like red and blue are swapped so try a different "TAB" option in the setup file if this is the case.
Is there an easy way to download (6Kb) bmp (from a/my internal server) and throw them out of memory when no longer needed?The simplest method if the server handles http GET commands is to use a method like this example (). However as the image is small, and I assume frequently fetched and erased, then the image can be stored in a RAM array otherwise you could wear out the FLASH memory.
What I assume (Please verify) is that it is not possible to write to SPIFFS during runtime, only during setup phase.SPIFFS is a FLASH based filing system so files can be created and erased during runtime. The main problem is that the FLASH will eventually wear our after maybe less than 1 million write cycles.
Since i can't store the 1500+ images directly to SPIFFS since it's total size is more than 5MB, i was thinking that I simply download the images I need while they're being seen (silhouettes of the airplane) and store them on SPIFFS. I don't think I will ever see all the 1500+ Airplanes on the device so that's not a problem.
Everyime I download the images during runtime it resets and throws a stack trace on the serial interface.
The images are ok since storing (a few ofcourse) of them to SPIFFS directly and show them on the display works.
I am ok with storing them into ram but well, i can't find any examples for it.
C:\Program Files (x86)\Arduino\libraries\TFT_ILI9341_ESP-master/Fonts/glcdfont.c:6:22: fatal error: pgmspace.h: No such file or directory
spr.setColorDepth(8);
uint16_t flake[NUMFLAKES][3];
Hi Stan,Hi Bodmer! I have installed all libraries but it is showing the following:
The Weather Station code has been updated on Github () and is ready for tests.
Some 2.8" ILI9341 displays turned up today and are working fine, It is nice to have a bigger display so it can be seen at a distance.
@DancopyCoincidentally, I filed a bug report on this earlier today:
The library was developed using version 2.3.0 of the ESP8266 Board Support Package, this was more tolerant of the types assigned to program pgm_read_xxxx(*) memory pointers. The TFT_ILI9341_ESP library will generate the errors you describe under later board support packages and I see you are using the latest 2.4.1.
/*
Sketch to show creation of a sprite with a transparent
background, then plot it on the TFT.
Example for library:
A Sprite is notionally an invisible graphics screen that is
kept in the processors RAM. Graphics can be drawn into the
Sprite just as it can be drawn directly to the screen. Once
the Sprite is completed it can be plotted onto the screen in
any position. If there is sufficient RAM then the Sprite can
be the same size as the screen and used as a frame buffer.
A 1 bit Sprite occupies (width * height)/8 bytes in RAM. So,
for example, a 320 x 240 pixel Sprite occupies 9600 bytes.
*/
// A new setBitmapColor(fg_color, bg_color) allows any 2 colours
// to be used for the 1 bit sprite. One colour can also be
// defined as transparent when rendering to the screen.
#include <TFT_eSPI.h> // Include the graphics library (this includes the sprite functions)
TFT_eSPI tft = TFT_eSPI(); // Create object "tft"
TFT_eSprite img = TFT_eSprite(&tft); // Create Sprite object "img" with pointer to "tft" object
// the pointer is used by pushSprite() to push it onto the TFT
void setup(void) {
Serial.begin(250000);
tft.init();
tft.setRotation(0);
}
void loop() {
tft.fillScreen(TFT_NAVY);
// Draw 10 sprites containing a "transparent" colour
for (int i = 0; i < 10; i++)
{
int x = random(240-70);
int y = random(320-80);
int c = random(0x10000); // Random colour
drawStar(x, y, c);
}
delay(2000);
uint32_t dt = millis();
// Now go bananas and draw 500 nore
for (int i = 0; i < 500; i++)
{
int x = random(240-70);
int y = random(320-80);
int c = random(0x10000); // Random colour
drawStar(x, y, c);
yield(); // Stop watchdog reset
}
// Show time in milliseconds to draw and then push 1 sprite to TFT screen
numberBox( 10, 10, (millis()-dt)/500.0 );
delay(2000);
}
// #########################################################################
// Create sprite, plot graphics in it, plot to screen, then delete sprite
// #########################################################################
void drawStar(int x, int y, int star_color)
{
// Create an 1 bit (2 colour) sprite 70x80 pixels (uses 70*80/8 = 700 bytes of RAM)
// Colour depths of 8 bits per pixel and 16 bits are also supported.
img.setColorDepth(1); // Set colour depth before creating the Sprite
img.createSprite(70, 80); // Create the sprite
img.setBitmapColor(star_color, TFT_BLACK); // Set the 2 pixel colours
// Fill Sprite with the colour that will be defined later as "transparent"
// We could also fill with any colour as transparent, and later specify that
// same colour when we push the Sprite onto the display screen.
img.fillSprite(TFT_BLACK);
// Draw 2 triangles to create a filled in star
img.fillTriangle(35, 0, 0,59, 69,59, star_color);
img.fillTriangle(35,79, 0,20, 69,20, star_color);
// Punch a star shaped hole in the middle with a smaller "transparent" star
img.fillTriangle(35, 7, 6,56, 63,56, TFT_BLACK);
img.fillTriangle(35,73, 6,24, 63,24, TFT_BLACK);
// Push sprite to TFT screen CGRAM at coordinate x,y (top left corner)
// Specify what colour is to be treated as transparent (black in this example).
img.pushSprite(x, y, TFT_BLACK);
// Delete Sprite to free memory, creating and deleting takes very little time.
img.deleteSprite();
}
// #########################################################################
// Draw a number in a rounded rectangle with some transparent pixels
// #########################################################################
void numberBox(int x, int y, float num )
{
// Size of sprite
#define IWIDTH 80
#define IHEIGHT 35
// Create a 8 bit sprite 80 pixels wide, 35 high (2800 bytes of RAM needed)
// this gives 256 colours per pixel, this example uses 3 colours,
// "transparent", red and white
img.setColorDepth(8);
img.createSprite(IWIDTH, IHEIGHT);
// Fill it with black (this will be the transparent colour)
img.fillSprite(TFT_BLACK);
// Draw a background for the numbers
img.fillRoundRect( 0, 0, 80, 35, 15, TFT_RED);
img.drawRoundRect( 0, 0, 80, 35, 15, TFT_WHITE);
// Set the font parameters
img.setTextSize(1); // Font size scaling is x1
img.setTextColor(TFT_WHITE); // White text, no background colour
// Set text coordinate datum to middle right
img.setTextDatum(MR_DATUM);
// Draw the number to 3 decimal places at 70,20 in sprite using font 4
img.drawFloat(num, 3, 70, 20, 4);
// Push sprite to TFT screen at coordinate x,y (top left corner)
// All black pixels will not be drawn hence will show as "transparent"
img.pushSprite(x, y, TFT_BLACK);
// Delete sprite to free up the RAM
img.deleteSprite();
}
And for a "beginner" like me (why not say a layman yet!), What should be done? Or, do you intend to adapt the library? Thank youThe problem is in the esp8266 core code supplied by the esp8266 guys.
@bperrybapSure. Also, there are a few other things like the weather underground API key that I want to move away from the code and into a WEB page setting. That way nothing that is unique/specific to the users environment is in the actual code.
Those improvements sound good, can you create a pull request of Github at some point?
The latest code is working ok for me. I'm using a Wemos D1 mini.But, you can see that according to the attached image, the screen is opening normally but the icons are strange and are missing weather information.
--- bill
#define ILI9486_DRIVER
#define ILI9488_DRIVER
tft.writecommand(0x3A); // Pixel Interface Format
tft.writedata(0x66); // 18 bit colour for SPI
tft.pushImage(x, y--, w, 1, (uint16_t*)lineBuffer);
With:
tft.pushImage(x, y--, w, 1, (uint16_t*)lineBuffer, COLOUR);
// Demo using arcFill to draw ellipses and a segmented elipse
#include <TFT_eSPI.h> // Hardware-specific library
//#include <Adafruit_GFX.h>
#include <SPI.h>
#include <ESP8266WiFi.h>
#include "Alert.h"
#include "sole2.h"
TFT_eSPI tft = TFT_eSPI(); // Invoke custom library
//
#define BLACK 0x0000
#define BLUE 0x001F
#define RED 0xF800
#define GREEN 0x07E0
#define CYAN 0x07FF
#define MAGENTA 0xF81F
#define YELLOW 0xFFE0
#define WHITE 0xFFFF
#define DEG2RAD 0.0174532925
#define LOOP_DELAY 10 // Loop delay to slow things down
boolean range_error = 0; // eliminare se non usato
byte inc = 0;
unsigned int col = 0;
byte red = 31; // Red is the top 5 bits of a 16 bit colour value
byte green = 0;// Green is the middle 6 bits
byte blue = 0; // Blue is the bottom 5 bits
byte state = 0;
void setup(void)
{
tft.begin();
tft.setRotation(0); // 0 / 90 / 180 o 270
tft.fillScreen(TFT_BLACK);
Serial.begin(115200);
// -------------------------------------------------------************----------------------------------------
drawIcon(alert, 40, 80, alertWidth, alertHeight);
//drawIcon(info, 210, 80, infoWidth, infoHeight);
drawIcon(sole2, 210, 80, sole2Width, sole2Height);
tft.drawLine(10, 330, 300, 330, red);
tft.drawLine(10, 335, 300, 335, BLUE);
delay(200);
}
void loop()
{
}// fine main loop
// To speed up rendering we use a 64 pixel buffer
#define BUFF_SIZE 64
// Draw array "icon" of defined width and height at coordinate x,y
// Maximum icon size is 255x255 pixels to avoid integer overflow
void drawIcon(const unsigned short* icon, int16_t x, int16_t y, int8_t width, int8_t height)
{
uint16_t pix_buffer[BUFF_SIZE]; // Pixel buffer (16 bits per pixel)
// Set up a window the right size to stream pixels into
tft.setWindow(x, y, x + width - 1, y + height - 1);
// Work out the number whole buffers to send
uint16_t nb = ((uint16_t)height * width) / BUFF_SIZE;
// Fill and send "nb" buffers to TFT
for (int i = 0; i < nb; i++)
{
for (int j = 0; j < BUFF_SIZE; j++)
{
pix_buffer[j] = pgm_read_word(&icon[i * BUFF_SIZE + j]);
}
tft.pushColors(pix_buffer, BUFF_SIZE);
}
// Work out number of pixels not yet sent
uint16_t np = ((uint16_t)height * width) % BUFF_SIZE;
// Send any partial buffer left over
if (np)
{
for (int i = 0; i < np; i++) pix_buffer[i] = pgm_read_word(&icon[nb * BUFF_SIZE + i]);
tft.pushColors(pix_buffer, np);
}
}
// ------------------------------------------------------------------------------
// We need this header file to use FLASH as storage with PROGMEM directive:
#include <pgmspace.h>
// Icon width and height
const uint8_t sole2Width = 64;
const uint8_t sole2Height = 64;
const unsigned short sole2[1024]01, 0x00,
0x00, 0xxF0, 0xE0, 0xC0, 0x80, 0x81, 0x83, 0x87, 0xFF,
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x7F, 0x7F, 0x7F, 0x7E, 0x7E,
0x7E, 0x7E, 0x7F, 0x7F, 0x7F, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
0xFF, 0x87, 0x83, 0x81, 0x80, 0xC0, 0xE0, 0xF1F, 0x0F, 0x07, 0x03, 0x81, 0xC1, 0xE0, 0xE0, 0xF0, 0xF0, 0xF8, 0xF8, 0xF8, 0xF8,
0xF8, 0xF8, 0xF8, 0xF8, 0xF0, 0xF0, 0xE0, 0xE0, 0xC1, 0x81, 0x03, 0x07, 0x0F, 0x1F, 0x3F, 0xFF,
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
0x7F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x07,
0x00, 0x00, 0x00, 0xF0, 0xFC, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFC, 0xF0, 0x00, 0x00, 0x00,
0x07, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x3F, 0x7F,
0xFE, 0xFC, 0xFC, 0xFC, 0xFC, 0xFC, 0xFC, 0xFC, 0xFC, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xE0,
0x00, 0x00, 0x00, 0x0F, 0x3F,0F, 0x00, 0x00, 0x00,
0xE0, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFC, 0xFC, 0xFC, 0xFC, 0xFC, 0xFC, 0xFC, 0xFC, 0xFE,
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
0xFF, 0xFC, 0xF8, 0xF0, 0xE0, 0xC0, 0x81, 0x83, 0x07, 0x07, 0x0F, 0x0F, 0x1F, 0x1F, 0x1F, 0x1F,
0x1F, 0x1F, 0x1F, 0x1F, 0x0F, 0x0F, 0x07, 0x07, 0x83, 0x81, 0xC0, 0xE0, 0xF0, 0xF8, 0xFC,0F, 0x07, 0x03, 0x01, 0x81, 0xC1, 0xE1, 0xFF,
0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFE, 0xFE, 0xFE, 0x7E, 0x7E,
0x7E, 0x7E, 0xFE, 0xFE, 0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
0xFF, 0xE1, 0xC1, 0x81, 0x01, 0x03, 0x07, 0x080, 0x00,
0x00, 0x80,
};
//const unsigned short sole2[1024] PROGMEM = {
const uint8_t sole2[512] PROGMEM = {
void show_ks0108(int x, int y, const uint8_t *bmp, int w, int h, uint16_t color, uint16_t bg, uint8_t pad = 7)
{
for (int page = 0; page < h / 8; page++) {
uint8_t mask = 0, c;
for (int col = 0; col < w; col++) {
for (int row = 0; row < 8; row++) {
if (mask == 0) {
c = pgm_read_byte(bmp + page * w + col);
mask = 0x01;
}
tft.drawPixel(x + col, y + row + page * 8, (c & mask) ? color : bg);
mask <<= 1;
}
}
}
}
show_ks0108(x, y, sole2, 64, 64, BLUE, BLACK);
Your "bitmap" was written for a KS0108 style controller. i.e. each byte draws 8 pixels down the screen.Many thanks David, now I go to try.
....
David.
One option is to not defined TFT_CS at all in the setup file and to control each display chip select in the sketch, depending on which display you want to update.thanks for prompt reply,
@krutoy1961
MCUFRIEND_kbv library should detect the controller and perform everything correctly in software (until the hardware overheats and fails).David,
David.'s the screen we are talking about. Post #251.
Touch your wrist to the bottom of the screen.esp32 the simplest for $4 with aliexpress,
Warm is ok. Hot is BAD.
David.
MCUFRIEND_kbv library should detect the controller and perform everything correctly in software (until the hardware overheats and fails).David,
David.
#elif defined(ESP32) //regular UNO shield on TTGO D1 R32 (ESP32)
#define LCD_RD 2 //LED
#define LCD_WR 4
#define LCD_RS 15 //hard-wired to A2 (GPIO35)
#define LCD_CS 33 //hard-wired to A3 (GPIO34)
#define LCD_RST 32 //hard-wired to A4 (GPIO36)
#define LCD_D0 12
#define LCD_D1 13
#define LCD_D2 26
#define LCD_D3 25
#define LCD_D4 17
#define LCD_D5 16
#define LCD_D6 27
#define LCD_D7 14
I stronglyDavid,
Which controller do you have?
reg(0x00BF) FF FF 68 14 00 FF ILI9481, HX8357-B
Delete your current library installation directory and install the Beta from GitHub ZIP.David, perhaps I do something not so,
I came across this site ()recently and have been impressed at what a ESP8266 based NodeMCU can be persuaded to do.Hi Bodmer!
The sketch set uses a WiFi Locator to work out your location and download a jpeg map off the internet (e.g. Google maps). It then checks what planes are in the area and plots their location on the map, with info displayed on the nearest plane. How cool is that!
I have forked the project here () and adapted it to be compatible with the latest TFT_ILI9341_ESP library () and the latest JPEGDecoder library ().
So, if you have the TFT_ILI9341_ESP library running with a display it should be straightforward to download the entire project sketch folder and get it up and running.
Try the attached version. It is not perfect (some text position issues) but it should work OK. It is for a 480x320 screen.Bodmer
It is always wise to start with the recommended wiring and library examples.David,
David.
BodmerI have the same display, it normally works,
It can be with this screen and esp8266 (Nodemcu):
@DancopyHi Bodmer!
The link you provide is for an 8 bit parallel display, the TFT_eSPI library only supports displays of this type when using an ESP32.
The eBay advert indicates that one of many controllers could be fitted and thus you may end up getting a screen that is incompatible with the current version of the library.
I do not have any plans at the moment to migrate the Plane Spotter or weather sketches to a 320x480 display. In principle you could do this yourself. The plane and weather sketches that exist will run with a 320x480 display, they just won't fill it and the icons etc may end up looking smaller due to a smaller pixel size.
Just looked at your video.David,
It is a very bad idea (tm) to guess the controller.David I have 2 displays 3.5 320_480 for UNO
David.
The Red board has a Raydium RM68140. It is not supported by Bodmer.David,
The Blue board has an Ilitek ILI9481. It should work 100% with Bodmer's TFT_eSPI library.
There are considerable differences between the two controllers.
RM68140 is nearer to ILI9486 but the MADCTL register behaves differently. (rotations will not behave)
David.
fontconvert C:\Windows\Fonts\arialbd.ttf 24 32 255 > arial_black_32.h
tft.begin();
tft.invertDisplay(1);
tft.begin();
tft.invertDisplay(0);
17:21:38.578 -> TFT_eSPI ver = 1.4.5
17:21:38.578 -> Processor = ESP8266
17:21:38.578 -> Frequency = 80 MHz
17:21:38.578 -> Voltage = 3.15 V
17:21:38.578 -> Transactions = No
17:21:38.578 -> Interface = SPI
17:21:38.578 -> SPI overlap = No
17:21:38.578 ->
17:21:38.578 -> Display driver = 9486
17:21:38.578 -> Display width = 320
17:21:38.610 -> Display height = 480
17:21:38.610 ->
17:21:38.610 -> TFT_CS = D8 (GPIO 15)
17:21:38.610 -> TFT_DC = D3 (GPIO 0)
17:21:38.610 -> TFT_RST = D4 (GPIO 2)
17:21:38.610 ->
17:21:38.610 -> Font GLCD loaded
17:21:38.610 -> Font 2 loaded
17:21:38.610 -> Font 4 loaded
17:21:38.610 -> Font 6 loaded
17:21:38.610 -> Font 7 loaded
17:21:38.610 -> Font 8 loaded
17:21:38.610 -> Smooth font enabled
17:21:38.610 ->
17:21:38.610 -> Display SPI frequency = 15.0 MHz
17:21:38.610 ->
#define TFT_INVERSION_ON
tft.setSwapBytes(true);
//define touchArea for button 1
u_int 16 OnScreenBtnP.X = 100
u_int 16 OnScreenBtnP.X1 = 160
u_int16 OnScreenBtnP.Y = 80
u_int16 OnScreenBtnP.Y1 = 140
/*----------------------------------------------------------------------*
* read() returns the state of the on-screen touch button, 1==pressed, 0==released, missed etc. *
*----------------------------------------------------------------------*/
uint8_t Button::read(void)
{
if (touchController.isTouched(500) && millis() - lastTouchMillis > 1000) {
TS_Point p = touchController.getPoint();
lastTouchPoint = p;
lastTouchMillis = millis();
if (p.y = > OnScreenBtnP.Y and < OnScreenBtnP.Y1) {
if (p.x = > OnScreenBtnP.X and < OnScreenBtnP.Y1) {
_state = 1 // Consider virtual button button1 pressed
}
return _state;
}
else {
_state = 0; //missed, released, not pressed or wrong area, consider state 0
}
return _state;
}
}
}
//#define TFT_MISO 19
//#define TFT_MOSI 23
//#define TFT_SCLK 18
//#define TFT_CS 15 // Chip select control pin
//#define TFT_DC 2 // Data Command control pin
//#define TFT_RST 4 // Reset pin (could connect to RST pin)
//#define TFT_RST -1 // Set TFT_RST to -1 if display RESET is connected to ESP32 board RST
ESP32 RPi ST7796 80MHZ SPI
Benchmark, Time (microseconds)
Screen fill, 180556 (clear screen = 36 ms = ~28 fps)
Text, 13699
Lines, 166066
Horiz/Vert Lines, 16290
Rectangles (outline), 9376
Rectangles (filled), 440312
Circles (filled), 102275
Circles (outline), 101998
Triangles (outline), 34580
Triangles (filled), 164103
Rounded rects (outline), 43216
Rounded rects (filled), 490934
Total = 1.7634s
ESP8266 RPi ST7796 40MHZ SPI
Benchmark, Time (microseconds)
Screen fill, 317434 (clear screen = 63 ms = ~18 fps)
Text, 20888
Lines, 270627
Horiz/Vert Lines, 27145
Rectangles (outline), 16004
Rectangles (filled), 768065
Circles (filled), 173822
Circles (outline), 163897
Triangles (outline), 55797
Triangles (filled), 293139
Rounded rects (outline), 70044
Rounded rects (filled), 863555
Total = 3.0404s
bool initDMA(void); // Initialise the DMA engine and attach to SPI bus - typically used in setup()
void deInitDMA(void); // De-initialise the DMA engine and detach from SPI bus - typically not used
// Push an image to the TFT using DMA, buffer is optional and grabs (double buffers) a copy of the image
// Use the buffer if the image data will get over-written or destroyed while DMA is in progress
// If swapping colour bytes is defined, and the double buffer option is NOT used then the bytes
// in the original data image will be swapped by the function before DMA is initiated.
// The function will wait for the last DMA to complete if it is called while a previous DMA is still
// in progress, this simplifies the sketch and helps avoid "gotchas".
void pushImageDMA(int32_t x, int32_t y, int32_t w, int32_t h, uint16_t* data, uint16_t* buffer = nullptr);
// Push a block of pixels into a window set up using setAddrWindow()
void pushPixelsDMA(uint16_t* image, uint32_t len);
// Check if the DMA is complete - use while(tft.dmaBusy); for a blocking wait
bool dmaBusy(void); // returns true if DMA is still in progress
void dmaWait(void); // wait until DMA is complete
C:\Users\StarX\Documents\Arduino\libraries\TFT_eSPI\Processors/TFT_eSPI_ESP32.c: In member function 'bool TFT_eSPI::initDMA()':
C:\Users\StarX\Documents\Arduino\libraries\TFT_eSPI\Processors/TFT_eSPI_ESP32.c:674:3: error: 'spi_bus_config_t' has no non-static data member named 'flags'
};
C:\Users\StarX\Documents\Arduino\libraries\TFT_eSPI\Processors/TFT_eSPI_ESP32.c:690:3: error: 'spi_device_interface_config_t' has no non-static data member named 'input_delay_ns'
};
Error compilando para la tarjeta DOIT ESP32 DEVKIT V1
bool TFT_eSPI::initDMA(void)
{
if (DMA_Enabled) return false;
esp_err_t ret;
spi_bus_config_t buscfg = {
.mosi_io_num = TFT_MOSI,
.miso_io_num = TFT_MISO,
.sclk_io_num = TFT_SCLK,
.quadwp_io_num = -1,
.quadhd_io_num = -1,
.max_transfer_sz = TFT_WIDTH * TFT_HEIGHT * 2 + 8 // TFT screen size
//.flags = 0,
//.intr_flags = 0
};
spi_device_interface_config_t devcfg = {
.command_bits = 0,
.address_bits = 0,
.dummy_bits = 0,
.mode = TFT_SPI_MODE,
.duty_cycle_pos = 0,
.cs_ena_pretrans = 0,
.cs_ena_posttrans = 0,
.clock_speed_hz = SPI_FREQUENCY,
//.input_delay_ns = 0,
.spics_io_num = TFT_CS,
.flags = 0,
.queue_size = 7,
.pre_cb = dc_callback, //Callback to handle D/C line
.post_cb = 0
};
Width = 240, height = 320
86 ms | https://forum.arduino.cc/index.php?action=printpage;topic=443787.0 | CC-MAIN-2020-29 | refinedweb | 5,662 | 60.55 |
internal grids rowmap/colmap/xmap/ymapfraxinus Dec 3, 2010 3:24 AM
Hi,
What is the latest on internal $$RowMap etc grids in ArcGIS 10? One has to go the numpy route?
What is the latest on internal $$RowMap etc grids in ArcGIS 10? One has to go the numpy route?
This content has been marked as final. Show 5 replies
Re: internal grids rowmap/colmap/xmap/ymapmanuelgimond Dec 12, 2010 11:55 AM (in response to fraxinus)
Re: internal grids rowmap/colmap/xmap/ymapfraxinus Dec 13, 2010 2:50 AM (in response to fraxinus)Thanks for the link - I have voted it up!
Re: internal grids rowmap/colmap/xmap/ymapDan_Patterson Dec 13, 2010 11:47 AM (in response to fraxinus)From this thread, in version 10
try the example
Re: internal grids rowmap/colmap/xmap/ymaplpinner May 31, 2011 8:34 PM (in response to fraxinus)I found I can use the built in GRID variables or scalars in ArcGIS 10 with a bit of a hack. It appears that the python arcgisscripting module is still included in Desktop ArcGIS 10, perhaps for backwards compatibility (or perhaps I didn't uninstall 9.3 properly...), so I wrote a little script that uses the SingleOutputMapAlgebra tool, created a script tool and added that to a custom toolbox and then just use that as required.
import arcgisscripting gp = arcgisscripting.create(9.3) #This works in ArcGIS 10!!! expr=gp.getparameterastext(0) output=gp.getparameterastext(1) result=gp.SingleOutputMapAlgebra(expr,output)
I've attached a screenshot.
internal grids rowmap/colmap/xmap/ymapcurtvprice May 12, 2013 5:00 PM (in response to fraxinus):
[post=288698]$$NCOLS + $$ROWMAP?[/post] | https://geonet.esri.com/thread/17736 | CC-MAIN-2015-14 | refinedweb | 275 | 53.61 |
Redux or MobX: An attempt to dissolve the Confusion
I used Redux excessively the last year, but spent the recent time with MobX as state management alternative. It seems that Redux alternatives evolve naturally into confusion in the community. People are uncertain which solution to pick. The issue isn’t necessarily Redux vs MobX. Whenever there exists an alternative, people are curious what’s the best way to solve their problem. I am writing these lines to dissolve the confusion around both state management solutions Redux and MobX.
Often the article references React for the usage of state management libraries like MobX and Redux. Yet you can substitute React often with other solutions like Angular, Vue etc.
In the beginning of 2016 I wrote a fairly big application in React + Redux. After I discovered MobX as alternative, I took the time to refactor the application from Redux to MobX. Now I am pretty comfortable in using both and in explaining both approaches.
What is this article going to be about? If you are === TLDR: you can have a look at the Table of Contents. But to give you more detail: First, I want to revisit shortly the problem a state management library is solving for us. After all you would be doing fine by using
setState() in React or a variation of it in another SPA framework. Second, I will continue to give you an overview of both solutions by showing the consistencies and differences. Third, I want to give newcomers to the React ecosystem a roadmap to learn state management in React. Spoiler alert: Begin with
setState() before you dive into MobX or Redux. Last but not least, if you already have an application running with MobX or Redux, I want to give you more insights in refactoring from one to another state management library.
Table of Contents
- What problem do we solve?
- What’s the difference between Redux and MobX?
- The Learning Curve in React State Management
- Another state management solution?
- Final Thoughts
- Fact Sheet
- Key Takeaways
- More Resources
What problem do we solve?
Everyone wants to have state management in an application. But what problem does it solve for us? Most people start with a small application and already introduce a state management library. Everyone is speaking about it, aren’t they? Redux! MobX! But most applications don’t need ambitious state management from the beginning. It is even more dangerous, because most people are never going to experience which problems libraries like Redux or MobX solve.
Nowadays, the status quo is to build a frontend application with components. Components have internal state. For instance, in React such a local state is handled with
this.state and
setState(). In a growing application the state management can get chaotic quickly with local state, because:
- a component needs to share state with another component
- a component needs to mutate the state of another component
At some point, it gets more difficult to reason about the application state. It becomes a messy web of state objects and state mutations across your component hierarchy. Most of the time, the state objects and state mutations are not necessarily bound to one component. They reach through your component tree and you have to lift state up and down.
The solution therefore is to introduce a state management library like MobX or Redux. It gives you tools to save your state somewhere, to change your state and to receive state updates. You have one place to find your state, one place to change it and one place to get updates from. It follows the principal of a single source of truth. It makes it easier to reason about your state and state changes, because they get decoupled from your components.
State management libraries like Redux and MobX often have utility add-ons, like for React they have react-redux and mobx-react, to give your components access to the state. Often these components are called container components or, to be more specific, connected components. From anywhere in your component hierarchy you can access and alter the state by upgrading your component to a connected component.
What's the difference between Redux and MobX?
Before we dive into the differences, I want to give you the consitencies between MobX and Redux.
Both libraries are used to manage state in JavaScript applications. They are not necessarily coupled to a library like React. They are used in other libraries like AngularJs and VueJs too. But they integrate well with the philosophy of React.
If you choose one of the state management solutions, you will not experience a vendor lock-in. You can change to another state management solution any time. You can go from MobX to Redux or from Redux to MobX. I will demonstrate you later on how this works.
Redux by Dan Abramov is a derivation of the flux architecture. In contrast to flux, it uses a single store over multiple stores to save state. In addition, instead of a dispatcher it uses pure functions to alter the state. If you are not familiar with flux and you are new to state management, don’t bother with the last paragraph.
Redux is influenced by functional programming (FP) principles. FP can be done in JavaScript, but a lot of people come from an object-oriented background, like Java, and have difficulties to adopt functional programming principles in the first place. That adds up later on why MobX might be easier to learn as a beginner.
Since Redux embraces functional programming, it uses pure functions. A function gets an input, returns an output and does not have other dependencies but pure functions. A pure function produces always the same output with the same input and doesn’t have any side-effects.
(state, action) => newState
Your Redux state is immutable. Instead of mutating your state, you always return a new state. You don’t perform state mutations or depend on object references.
// don't do this in Redux, because it mutates the array function addAuthor(state, action) { return state.authors.push(action.author); } // stay immutable and always return a new object function addAuthor(state, action) { return [ ...state.authors, action.author ]; }
Last but not least, in idiomatic Redux, your state is normalized like in a database. The entities only reference each other by id. That’s a best practice. Even though not everyone is doing it like that, you can use a library like normalizr to achieve such a normalized state. Normalized state enables you to keep a flat state and to keep entities as single source of truth.
{ post: { id: 'a', authorId: 'b', ... }, author: { id: 'b', postIds: ['a', ...], ... } }
In comparison, MobX by Michel Weststrate is influenced by object-oriented programming, but also by reactive programming. It wraps your state into observables. Thus you have all the capabilities of Observable in your state. The data can have plain setters and getters, but the observable makes it possible to retrieve updates once the data changed.
In Mobx your state is mutable. Thus you mutate the state directly:
function addAuthor(author) { this.authors.push(author); }
Additionally the entities stay in a (deeply) nested data structure in relation to each other. You don’t normalize your state. The state stays denormalized and nested.
{ post: { id: 'a', ... author: { id: 'b', ... } } }
One Store vs Multiple Stores
In Redux you keep all your state in one global store or one global state. The one state object is your single source of truth. Multiple reducers, on the other hand, allow it to alter the immutable state.
In contrast, MobX uses multiple stores. Similar to Redux reducers, you can apply a divide and conquer by technical layers, domain etc. You might want to store your domain entities in separate stores yet also have control over the view state in on of your stores. After all, you collocate state that makes the most sense in your application.
Technically you can have multiple stores in Redux too. Nobody forces you to use only one store. But that’s not the advertised use case of Redux. It would go against best practices to use multiple stores. In Redux you want to have one store that reacts via its reducers to global events.
What does the implementation look like?
In Redux it would need the following lines of code to add a new user to the global state. You can see how we make use of the object spread operator to return a new state object. You could also use
Object.assign() to have immutable objects in JavaScript ES5.
const initialState = { users: [ { name: 'Dan' }, { name: 'Michel' } ] }; // reducer function users(state = initialState, action) { switch (action.type) { case 'USER_ADD': return { ...state, users: [ ...state.users, action.user ] }; default: return state; } } // action { type: 'USER_ADD', user: user };
You would have to
dispatch({ type: 'USER_ADD', user: user }); to add a new user to the global state.
In MobX a store would only manage a substate (like a reducer in Redux manages a substate) but you are able to mutate the state directly. The
@observable annotation makes it possible to observe state changes.
class UserStore { @observable users = [ { name: 'Dan' }, { name: 'Michel' } ]; }
Now it is possible to call
userStore.users.push(user); on a store instance. It is a best practice though to keep your state mutations more explicit with actions.
class UserStore { @observable users = [ { name: 'Dan' }, { name: 'Michel' } ]; @action addUser = (user) => { this.users.push(user); } }
You can strictly enforce it by using
useStrict() in MobX. Now you can mutate your state by calling
userStore.addUser(user); on a store instance.
You have seen how to update the state in both Redux and MobX. It is different. In Redux your state is read-only. You can alter the state only by using explicit actions. In contrast, in MobX the state enables read and write. You can mutate the state directly without using actions yet you can opt-in explicit actions by using
useStrict().
The Learning Curve in React State Management
Both Redux and MobX are mostly used in React applications. But they are standalone libraries for state management, which could be used everywhere without React. Their interoperability libraries make it easy to combine them with React components. It is react-redux for Redux + React and mobx-react for MobX + React. Later I will explain how to use both in a React component tree.
In recent discussions, it happened that people argued about the learning curve in Redux. It was often in the context of React: people began to learn React and already wanted to leverage state management with Redux. Most people would argue that React and Redux themselves have a good learning curve, but both together can be overwhelming. An alternative therefore would be MobX, because it is more suitable for beginners.
However, I would suggest a different approach for React newcomers to learn state management in the React ecosystem. Start to learn React with its own local state management functionality in components. In a React application you will first learn the React lifecycle methods and you will deal with local state management by using
setState() and
this.state. I highly recommend that learning path. Otherwise you will get overwhlemed quickly by the React ecosystem. Eventually, on this path, you will realize that the (internal) state management of components is getting difficult. After all, that’s how the book The Road to learn React approaches to teach state management in React.
Now we are at the point: What problem does MobX or Redux solve for us. Both libraries give a way of managing application state externally to the components. The state gets decoupled from the components. Components can access the state, manipulate it (explicit, implicit) and get updated with the new state. The state is a single source of truth.
Now you have to make the decision to choose a state management library. You know why you need to solve the problem in the first place. Moreover after having already a larger application in place, you should feel comfortable with React by now.
Redux or MobX for Newcomers?
Once you are familiar with React components and the internal state management, you can choose a state management library to solve your problem. After I used both libraries, I would say MobX can be very suitable for beginners. We could already see that MobX needs less code, even though it uses some magic annotations we may not need to know about yet.
In MobX you don’t need to be familiar with functional programming. Terms like immutability might be still foreign. Functional programming is a rising paradigm, but novel for most people in JavaScript. There is a clear trend towards it, but since not everyone has a functional programming background, it might be easier for people with an object-oriented background to adopt the principles of MobX.
On a side note: MobX is suitable for internal component state in exchange for React setState as well. I would recommend to keep
setState()over MobX for internal component state management. But it clearly shows how easy you could weave MobX into React to accomplish internal component state management.
A Growing Application
In MobX you are mutating annotated objects and your components will render an update. MobX comes with more internal implementation magic than Redux, which makes it easier to use in the beginning with less code. Coming from an Angular background it felt very much like using two-way data binding. You hold some state somewhere, watch the state by using annotations and let the component update do the rest once the state was mutated.
MobX allows it to mutate the state directly from the component tree.
// component <button onClick={() => store.users.push(user)} />
A better way of doing it would be to have a MobX
@action in the store.
// component <button onClick={() => store.addUser(user)} /> // store @action addUser = (user) => { this.users.push(user); }
It would make the state mutating more explicit with actions. Moreover there exists a little functionality to enforce state mutations via explicit actions like you have seen above.
// root file import { useStrict } from 'mobx'; useStrict(true);
Mutating the state directly in the store like we did in the first example wouldn’t work anymore. Coming from the first to the latter example shows how to embrace best practices in MobX. Moreover once you are doing explicit actions only, you are already using Redux constraints.
I would recommend to use MobX to kickstart projects. Once the application grows in size and contributors, it makes sense to apply best practices like using explicit actions. They are embracing the Redux constraints, which say you can never change the state directly and only by using actions.
Transition to Redux
Once your application gets bigger and has multiple developers working on it, you should consider to use Redux. It enforces by nature to use explicit actions to change the state. The action has a type and payload, which a reducer can use to change the state. In a team of developers it is very easy to reason about state changes that way.
// reducer (state, action) => newState
Redux gives you a whole architecture for state management with clear constraints. That is the success story behind Redux.
Another advantage of Redux is using it on the server side. Since we are dealing with plain JavaScript, you could send the state across the network. Serializing and deserializing a state object works out of the box. Yet it is possible in MobX too.
MobX is less opinionated, but by using
useStrict() you can enforce clearer constraints like in Redux. That’s why I wouldn’t say you cannot use MobX in scaling applications, but there is a clear way of doing things in Redux. The documentation in MobX even says: “[MobX] does not tell you how to structure your code, where to store state or how to process events.” The development team would have to establish a state management architecture in the first place.
After all the state management learning curve isn’t that steep. When we recap the recommendations, a newcomer in React would first learn to use setState() and this.state properly. After a while you would realize the problems of using only setState() to maintain state in a React application. When looking for a solution, you stumble upon state management libraries like MobX or Redux. But which one to choose? Since MobX is less opinionated, has less boilerplate and can be used similar to
setState() I would recommend in smaller projects to give MobX a shot. Once the application grows in size and contributors, you should consider to enforce more restrictions in MobX or give Redux a shot. I enjoyed using both libraries. Even if you don’t use one of them after all, it makes sense to have seen an alternative way of doing state management.
Another state management solution?
You may already started to use one state management solution, but want to consider another one? You could compare both real world MobX and Redux applications. I made one big Pull Request to show all changes at one place. In the case of the PR, it is a refactoring from Redux to MobX. But you could apply it vice versa. I don’t think it is necessary coupled to Redux nor MobX, because most of the changes are very much decoupled from everything else.
Basically you have to exchange Redux Actions, Action Creator, Action Types, Reducer, Global Store with MobX Stores. Additionally the interface to connect React components changes from react-redux to mobx-react. The presenter + container pattern can still be applied. You would have to refactor only the container components. In MobX one could use
inject to get a store dependency. After that the store can pass a substate and actions to a component. MobX
observer makes sure that the component updates (render) after an
observable property in the store has changed.
import { observer, inject } from 'mobx-react'; ... const UserProfileContainer = inject( 'userStore' )(observer(({ id, userStore, }) => { return ( <UserProfile user={userStore.getUser(id)} onUpdateUser={userStore.updateUser} /> ); }));
In Redux you would use
mapStateToProps and
mapDispatchToProps to pass a substate and actions to a component.
import { connect } from 'react-redux'; import { bindActionCreators } from 'redux'; ... function mapStateToProps(state, props) { const { id } = props; const user = state.users[id]; return { user, }; } function mapDispatchToProps(dispatch) { return { onUpdateUser: bindActionCreators(actions.updateUser, dispatch), }; } const UserProfileContainer = connect(mapStateToProps, mapDispatchToProps)(UserProfile);
There exists a tutorial on how to refactor from Redux to MobX. But as I said, one could also apply the refactoring vice versa. Once you have chosen a state management library, you can see that there is no vendor lock-in. They are pretty much decoupled from your application and therefore exchangeable.
Final Thoughts
Whenever I read the comments in a Redux vs MobX discussion, there is always this one comment: “Redux has too much boilerplate, you should use MobX instead. I was able to remove XXX lines of code.” The comment might be true, but no one considers the trade off. Redux comes with more boilerplate as MobX, because it was added for specific design constraints. It allows you to reason about your application state even though it is on a larger scale. All the ceremony around state handling is there for a reason.
Redux library is pretty small. Most of the time you are dealing only with plain JavaScript objects and arrays. It is closer to vanilla JavaScript than MobX. In MobX one wraps the objects and arrays into observable objects which hide most of the boilerplate. It builds up on hidden abstractions. There the magic happens, but it is harder to understand the underlying mechanisms. In Redux it is easier to reason about it with plain JavaScript. It makes it easier for testing and easier for debugging your application.
Additionally one has again to consider where we came from in single page applications. A bunch of single page application frameworks and libraries had the same problems of state management, which eventually got solved by the overarching flux pattern. Redux is the successor of the pattern.
In MobX it goes the opposite direction again. Again we start to mutate state directly without embracing the advantages of functional programming. For some people it feels again closer to two-way data binding. After a while people might run again into the same problems before a state management library like Redux was introduced. The state management gets scattered across components and ends up in a mess.
While in Redux you have an established ceremony how to set up things, MobX is less opinionated. But it would be wise to embrace best practices in MobX. People need to know how to organize state management to improve the reasoning about it. Otherwise people tend to mutate state directly in components.
Both libraries are great. While Redux is already well established, MobX becomes an valid alternative for state management.
Read more: Learn plain React with setState and this.state
Read more: Implement your own SoundCloud Client with React + Redux
Read more: Refactor an application from Redux to MobX
Fact Sheet
Redux
- single store
- functional programming paradigm
- immutable
- pure
- explicit update logic
- plain JavaScript
- more boilerplate
- normalized state
- flat state
MobX
- multiple stores
- object-oriented programming and reactive programming paradigms
- mutable
- impure
- implicit update logic
- “magic” JavaScript
- less boilerplate
- denormalized state
- nested state
Key Takeaways
- learn React with setState and this.state to manage local state
- get comfortable with it
- experience the issues you run into without a state managament library like Redux or MobX
- learning recommendations
- setState -> MobX -> MobX more restricted (e.g. useStrict) -> Redux
- or stick to one solution after setState:
- use MobX over Redux:
- short learning curve
- simple to use (magic)
- quick start
- less opinionated
- minimal boilerplate
- used in lightweight applications
- mutable data
- object-oriented programming
- in a smaller size & few developers project
- but can be used in bigger size projects too, when used with explicit constraints
- use Redux over MobX:
- clear constraints
- testable lightweight parts
- opinionated state management architecture
- mature best practices
- used in complex applications
- immutable data
- functional programming
- in a bigger size & several developers / teams project
- testability, scaleability, maintainability
- container + presenter components is a valid pattern for both
- react-redux and mobx-react are exchangeable interfaces to React container components
- useStrict of MobX makes state changes more obvious in a scaling app and should be best practice
More Resources
- comparison by Michel Weststrate - the creator of MobX
- comparison by Preethi Kasireddy | https://www.robinwieruch.de/redux-mobx-confusion/ | CC-MAIN-2017-30 | refinedweb | 3,730 | 56.96 |
Hi,
I'm having a hard time getting my applet to even build right. Here are the instructions my instructor gave:
"Develop a Java applet that will help an elementary school student learn multiplication. Use the Math.random then)."
It seems that right now, no matter how I try to use the drawString() method or where I put it, I can't get this thing to build without errors. I've been working on this all week and have until 10:00pm tomorrow (Sunday) night to finish and submit this. Can someone PLEASE help me?
Here's what I have at the moment:
import java.awt.*; import java.awt.Graphics; import java.lang.Object; import java.awt.event.*; import javax.swing.*; import java.util.*; public class Mult1 extends JApplet implements ActionListener { paint(Graphics brush) { brush.setFont(font2); } public void actionPerformed(ActionEvent e) { int ans = Integer.parseInt(answer.getText()); if(ans == number1 * number2) { answer.setText(""); Random rand = new Random(); int number1 = rand.nextInt(9) + 1; int number2 = rand.nextInt(9) + 1; brush.drawString(right, 20, 80); repaint(); validate(); } else { answer.setText(""); brush.drawString(wrong, 20, 80); repaint(); validate(); } } } ); } @Override public void actionPerformed(ActionEvent e) { answer.setText(""); Random rand = new Random(); int number1 = rand.nextInt(10); int number2 = rand.nextInt(10); } } | https://www.daniweb.com/programming/software-development/threads/282242/please-help | CC-MAIN-2018-34 | refinedweb | 209 | 53.98 |
Cache::Memcached - client library for memcached (memory cache daemon)
use Cache::Memcached; $memd = new Cache::Memcached { 'servers' => [ "10.0.0.15:11211", "10.0.0.15:11212", "/var/sock/memcached", "10.0.0.17:11211", [ "10.0.0.17:11211", 3 ] ], 'debug' => allocates memory for bucket distribution proportional to the total host weights.
Use
compress_threshold to set a compression threshold, in bytes. Values larger than this threshold will be compressed by
set and decompressed by
get.
Use
no_rehash to disable finding a new memcached server when one goes down. Your application may or may not need this, depending on your expirations and key usage.
Use
readonly to disable writes to backend memcached servers. Only get and get_multi will work. This is useful in bizarre debug and profiling cases only.
Use
namespace to prefix all keys with the provided namespace value. That is, if you set namespace to "app1:" and later do a set of "foo" to "bar", memcached is actually seeing you set "app1:foo" to "bar".
The other useful key is
debug, which when set to true will produce diagnostics on STDERR.
set_servers
Sets the server list this module distributes key gets and sets between. The format is an arrayref of identical form as described in the
new constructor.
set_debug
Sets the
debug flag. See
new constructor for more information.
set_readonly
Sets the
readonly flag. See
new constructor for more information.
set_norehash
Sets the
no_rehash.
You may also use the alternate method name remove, so Cache::Memcached looks like the Cache::Cache API..
stats
$memd->stats([$keys]);
Returns a hashref of statistical data regarding the memcache server(s), the $memd object, or both. $keys can be an arrayref of keys wanted, a single key wanted, or absent (in which case the default value is malloc, sizes, self, and the empty string). These keys are the values passed to the 'stats' command issued to the memcached server(s), except for 'self' which is internal to the $memd object. Allowed values are:
misc
The stats returned by a 'stats' command: pid, uptime, version, bytes, get_hits, etc.
malloc
The stats returned by a 'stats malloc': total_alloc, arena_size, etc.
sizes
The stats returned by a 'stats sizes'.
self
The stats for the $memd object itself (a copy of $memd->{'stats'}).
maps
The stats returned by a 'stats maps'.
cachedump
The stats returned by a 'stats cachedump'.
slabs
The stats returned by a 'stats slabs'.
items
The stats returned by a 'stats items'.
disconnect_all
$memd->disconnect_all;
Closes all cached sockets to all memcached servers. You must do this if your program forks and the parent has used this module at all. Otherwise the children will try to use cached sockets and they'll fight (as children do) and garble the client/server protocol.
flush_all
$memd->flush_all;
Runs the memcached "flush_all" command on all configured hosts, emptying all their caches. (or rather, invalidating all items in the caches in an O(1) operation...) Running stats will still show the item existing, they're just be non-existent and lazily destroyed next time you try to detch any of them.
When a server goes down, this module does detect it, and re-hashes the request to the remaining servers, but the way it does it isn't very clean. The result may be that it gives up during its rehashing and refuses to get/set something it could've, had it been done right.
This module is Copyright (c) 2003 Brad Fitzpatrick. All rights reserved.
You may distribute under the terms of either the GNU General Public License or the Artistic License, as specified in the Perl README file.
This is free software. IT COMES WITHOUT WARRANTY OF ANY KIND.
See the memcached website:
Brad Fitzpatrick <brad@danga.com>
Anatoly Vorobey <mellon@pobox.com>
Brad Whitaker <whitaker@danga.com>
Jamie McCarthy <jamie@mccarthy.vg> | http://search.cpan.org/%7Ebradfitz/Cache-Memcached-1.24/lib/Cache/Memcached.pm | crawl-002 | refinedweb | 638 | 67.76 |
find
Visual Studio .NET 2003
Locates the position of the first occurrence of an element in a range that has a specified value.
Parameters
- _First
- An input iterator addressing the position of the first element in the range to be searched for the specified value.
- An input iterator addressing the position one past the final element in the range to be searched for the specified value.
- _Val
- The value to be searched for.
Return Value
An input iterator addressing the first occurrence of the specified value in the range being searched. If no such value exists in the range, the iterator returned addresses the last position of the range, one past the final element.
Remarks
The operator== used to determine the match between an element and the specified value must impose an equivalence relation between its operands.
Example
// alg_find.cpp // compile with: /EHsc #include <list> #include <algorithm> #include <iostream> int main( ) { using namespace std; list <int> L; list <int>::iterator Iter; list <int>::iterator result; L.push_back( 40 ); L.push_back( 20 ); L.push_back( 10 ); L.push_back( 40 ); L.push_back( 10 ); cout << "L = ( " ; for ( Iter = L.begin( ) ; Iter != L.end( ) ; Iter++ ) cout << *Iter << " "; cout << ")" << endl; result = find( L.begin( ), L.end( ), 10 ); if ( result == L.end( ) ) cout << "There is no 10 in list L." << endl; else result++; cout << "There is a 10 in list L and it is" << " followed by a " << *(result) << "." << endl; }
Output
See Also
<algorithm> Members | find Sample
Show: | http://msdn.microsoft.com/en-us/library/h64454kx(v=vs.71).aspx | CC-MAIN-2014-15 | refinedweb | 240 | 67.86 |
- NAME
- SYNOPSIS
- DESCRIPTION
- Choosing a C++ Compiler
- Using Inline::CPP
- C++ Configuration Options
- C++-Perl Bindings
- <iostream>, Standard Headers, Namespaces, and Portability Solutions
- EXAMPLES
- Minimum Perl version requirements
- SEE ALSO
- BUGS AND DEFICIENCIES
- AUTHOR
-.
Choosing a C++ Compiler on.
Here's the rule: use any.:, Perl's namespace (and anyway, C++ wouldn't let you do that either, since extrafield wasn't defined).
C++ Configuration Options
For information on how to specify Inline configuration options, see Inline. This section describes each of the configuration options available for C. Most of the options correspond either the MakeMaker or XS options of the same name. See ExtUtils::MakeMaker and perlxs.
ALTLIBS
Adds a new entry to the end of the list of alternative libraries to bind with. MakeMaker will search through this list and use the first entry where all the libraries are found.
use Inline Config => AUTO_INCLUDE => '#include "something.h"';
BOOT
Specifies code to be run when your code is loaded. May not contain any blank lines. See perlxs for more information.
use Inline Config => BOOT => 'foo();';
CC
Specifies which compiler to use.
CCFLAGS
Specifies extra compiler flags. Corresponds to the MakeMaker option. Config => FILTERS => [Strip_POD => \&myfilter];
The filter may do anything. The code is passed as the first argument, and it returns the filtered code.
INC
Specifies extra include directories.. Config => MYEXTLIB => '/your/path/something.o';
PREFIX
Specifies a prefix that will automatically be stripped from C++ functions when they are bound to Perl. Less useful than in C, because C++ mangles its function names to facilitate function overloading.
use Inline Config => PRESERVE_ELLIPSIS => 1; or use Inline it include
iostream, which is the ANSI-compliant version of the header. For most compilers the use of this configuration option should no longer be necessary. It is still included., Namespaces, and Portability Solutions
As mentioned earlier, fully support namespaces, these standard tools were not segregated into a separate namespace.
ANSI Standard C++ changed that. Headers were renamed without the '.h' suffix, and standard tools were placed in the '
std' namespace. The
using namespace construct was; }
Obviously the first snippet is going to be completely incompabible with the second, third or fourth snippets. This is no problem for a C++ developer who knows his target compiler. But Perl runs just about everywhere. If similar portability are a decade (or more) old. But if you do care (maybe you're basing a CPAN module on Inline::CPP), use these constant definitions as a tool in building a widely portable solution.
If you wish, you may
#undef either of those constants. The constants are defined before any
AUTO_INCLUDEs -- even <iostream>. Consequently, you may even list
#undef __INLINE_CPP_.... within an
AUTO_INCLUDE configuration directive. I'm not sure why it would be necessary, but could be useful in testing.; }.; };
Minimum Perl version requirements
As Inline currently requires Perl 5.6.0 or later. Since Inline::CPP depends on Inline, Perl 5.6.0 is also required for Inline::CPP. It's hard to imagine anyone still using a Perl older than 5.6.0.6.0 Perl..
For information on using C and C++ structs with Perl, see Inline::Struct.
User and development discussion for Inline modules, including Inline::CPP occurs on the inline.perl.org mailing list. See to learn how to subscribe.
BUGS AND DEFICIENCIES:
- 1 The grammar used for parsing C++ is still quite simple, and does not allow several features of C++:
- a Templates: You may use existing template libraries in your code, but Inline::CPP won't know how to parse and bind template definitions. Keep the templates encapsulated away from the interface that will be exposed to Perl.
-
- b Operator overloading
-
- c Function overloading
-
- d Multiple inheritance doesn't work right (yet).
-
- e Multi-dimensional arrays as member data aren't implemented (yet).
-
- f Declaring a paramater type of void isn't implemented (yet). Just use
int myfunc();instead of
int myfunc(void);. This is C++, not C.
-
Other grammar problems will probably be noticed quickly.
- 2
In order of relative importance, improvements planned in the near future are:
- a Work through issues relating to successful installation and use on as many platforms as possible. The goal is to achieve a smoke-test "pass" rate similar to Inline::C.
The current "smoke test" pass rate seems to be around 85%. I'm always working on chipping away at that last 15%. If you're one of the unfortunate 15% get in touch with me so we can try to figure out what the problem is.
- b Improvements to the test suite.
-
- c Address other bugs and deficiences mentioned above.
-
- d Binding to unions.
-
AUTHOR
Neil Watkiss <NEILW@cpan.org> was the original author.
David Oswald <DAVIDO@cpan.org> is the current maintainer. David Oswald's Inline::CPP githug repo is:
Brian Ingerson <INGY@cpan.org> is the author of
Inline,
Inline::C and
Inline::CPR. He is known in the innermost Inline circles as "Batman". ;)
LICENSE AND COPYRIGHT
Copyright (c) 2000 - 2003 Neil Watkiss. Copyright (c) 2011 - 2012 David Oswald.
All Rights Reserved. This module is free software. It may be used, redistributed and/or modified under the same terms as Perl itself.
See
1 POD Error
The following errors were encountered while parsing the POD:
- Around line 791:
Expected text after =item, not a number | https://metacpan.org/pod/release/DAVIDO/Inline-CPP-0.38_004/lib/Inline/CPP.pod | CC-MAIN-2015-22 | refinedweb | 885 | 60.61 |
Hello,
I’ve just got my Hydra and done some (very simple) hello world tests.
One thing has surprised me, though. I have an eBlock expansion connected on connector 6 and a button eBlock on the expansion, pin 3. I’m using InterruptPort, here.
The program simply lights the Hydra’s led when button state == 0, which works fine. Of course :
Now, if I try to plug another device on the eBlock expansion, then the “button pressed” interrupt is fired :o
e.data1 contains the button pin number (113, here). It does the same thing on every eBlock pin.
So, my question is : is it intended behaviour ?
Here’s the code :
using System; using System.Threading; using Microsoft.SPOT; using Microsoft.SPOT.Hardware; namespace TestHydra101 { public class Program { public static OutputPort Led = new OutputPort((Cpu.Pin)114, false); public static InterruptPort Button= new InterruptPort((Cpu.Pin)113, true, Port.ResistorMode.PullUp, Port.InterruptMode.InterruptEdgeBoth); public static void Main() { Button.OnInterrupt += Button_OnInterrupt; Thread.Sleep(Timeout.Infinite); } static void Button_OnInterrupt(uint data1, uint data2, DateTime time) { Led.Write(data2 == 0); Debug.Print("Button : data1 = " + data1.ToString() + ", data2 = " + data2.ToString()); } } }
Result is the same if I choose Port.ResistorMode.Disabled (Port.ResistorMode.PullDown giving an argument exception, here).
Well, that’s not a big problem, I only wanted to see if I could plug/unplug blocks while the board was running.
Have a nice day,
Christophe | https://forums.ghielectronics.com/t/hydra-interrupt-ports/7847 | CC-MAIN-2019-22 | refinedweb | 233 | 55 |
Are you a JavaScript app developer who connects to a web service? If so, there’s a new HTTP API in Windows 8.1 that improves on the abilities of the WinJS.xhr and XMLHttpRequest functions. With the Windows.Web.HttpClient API you have access to cookies, control over caching, and can set and get headers using strong types. There’s also a powerful system for modularizing your network code, giving it access to the HTTP processing pipeline. If you’re a C++ or .NET developer, it’s available for your app, too. (And it’s not just for Windows Store apps: it works on the desktop, as well.) If you want to just jump into the documentation, there’s reference material that includes quick code snippets and a full sample.
In this blog post you’ll see an overview of HttpClient programming, get a walkthrough of converting a JavaScript program to use the new API, and read about the advantages of the new API. Let’s get started with an overview of the new classes!
An overview of HttpClient programming
The new HTTP classes are all in the Windows.Web.Http namespace and the two sub-namespaces Headers and Filters. These namespaces contain a family of classes that work together to give you an easy-to-use but powerful HTTP API, as shown below:
How your code, the HttpClient API, and the REST or Web Service fit together
In this diagram, borrowed from the HttpClient poster on the Microsoft Download center, light green is your code. On the left side is the business logic for your app. In the center are your filters: modular code that you can put into the middle of the HTTP processing pipeline. By moving code into filters, your business logic can focus on your app and not on low-level networking details. You can build your own filters, reuse them from our HttpClient and Web Authentication Broker samples, or find them on the internet.
You’ll start by making a call to one of the HttpClient object’s methods. A commonly used method is getStringAsync(Windows.Foundation.Uri); this method, given a Uri, returns the content as a string (or triggers an error). The most general method on the HttpClient class is sendRequestAsync (HttpRequestMessage); it returns an HttpResponseMessage with the server headers, status code, the content, and more.
No matter what method you call, the HttpClient object packages up your request into an HttpRequestMessage (in the case of sendRequestAsync, you provide one of these). The HttpRequestMessage object is then passed through each filter in the filter pipeline, if they exist, finally passing through the Windows.Web.Http.Filters.HttpBaseProtocolFilter, which actually sends your HTTP message out. The base protocol filter also contains a set of properties that let you influence how your HTTP requests and responses are handled. The actual filter pipeline is created by you, and passed into the HttpClient constructor. If you don’t specify a filter pipeline, a default pipeline (consisting of just a new HttpBaseProtocolFilter) is created for you.
The response from the web service is then packaged up by the HttpBaseProtocolFilter and passed back up the filter pipeline. Each of the filters has full control over the response that the filter returns: it can return the incoming response (possibly modifying it first), can synthesize a new response, or can retry the original request or a modification of the original request. The final response is then returned to you. The ‘get’ convenience routines like getStringAsync extract the content and return it to you as a string, input stream, or buffer.
Note that the HttpClient classes can always trigger an error! Your code needs to be able to handle network errors ranging from simple network connectivity issues to DNS failures, server errors, and SSL errors.
Once you have your response, you can use it just like you do today.
Converting JavaScript WinJS.xhr code to use HttpClient
Now that you’ve seen the overall HttpClient API, let’s look at how to convert your existing WinJS.xhr calls into HttpClient calls. We’ll do this by converting code inspired by the Windows 8 QuickStart: Downloading a file with WinJS.xhr. You’ll see that the changes are short and simple:
Original WinJS.xhr code
app.GetWithWinJSxhr = function () {
var xhrDiv = document.getElementById("xhrReport");
xhrDiv.style.color = "#000000";
xhrDiv.innerText = "Running...";
WinJS.xhr({ url: "" }).done(
function complete(result) {
xhrDiv.innerText += "nDownloaded pagenn" + result.responseText;
xhrDiv.style.backgroundColor = "#00FF00";
},
function error(result) {
xhrDiv.innerText += "nGot error: " + result.statusText;
xhrDiv.style.backgroundColor = "#FF0000";
},
function progress(progress) {
xhrDiv.innerText += "nReady state is " + progress.readyState;
xhrDiv.style.backgroundColor = "#0000FF";
}
);
}
app.GetWithHttpClient = function () {
var xhrDiv = document.getElementById("xhrReport");
xhrDiv.style.color = "#000000";
xhrDiv.innerText = "Running...";
var hc = new Windows.Web.Http.HttpClient();// Change #1
var uri = new Windows.Foundation.Uri("");// Change #1
hc.getStringAsync(uri).done( // Change #1
function complete(result) {
xhrDiv.innerText += "nDownloaded pagen" + result; // Change #2
xhrDiv.style.backgroundColor = "#00FF00";
},
function error(result) {
var webError = Windows.Web.WebError.getStatus(result.number);// Change #3
xhrDiv.innerText += "nError " + webError + ":" + result.message;// Change #3
xhrDiv.style.backgroundColor = "#FF0000";
},
function progress(progress) {
xhrDiv.innerText += "nReady state is " + progress.stage; // Change #4
xhrDiv.style.backgroundColor = "#0000FF";
}
);
}
Change #1: objects and parameter type
The first set of changes is that we replace the call to WinJS.xhr with the creation of a new HttpClient object and a call to getStringAsync. You can make as many (or few) HttpClient objects that your app needs. Because each HttpClient can be individually configured (e.g., for cache control), it often makes sense to make one HttpClient per general configuration. For example, an app that starts out only reading data from cache and then switches to reading from the internet might have two HttpClient objects, one for “reading from cache” and one for “reading fresh content.”
Secondly, the HttpClient always takes in Uri objects, not strings. You can easily make a Uri from a string; just construct one with:
new Windows.Foundation.Uri(string-parameter)
Change #2: success response
You’ll notice that the parameter passed to you in the complete function is the downloaded content string instead of the XMLHttpRequest that WinJS.xhr will pass you. If you do need precise information about the server response, call the getAsync() method instead of getStringAsync() as it provides an HttpResponseMessage object. That object includes full details on the original server response. You can also get a buffer or inputStream by calling getBufferAsync() or getInputStreamAsync(), respectively.
Change #3: the error callback
The error function for HttpClient is a standard WinRTError object that mimics the JavaScript ‘Error’ object. Useful fields include description, message, and number. The error.number is a windows HRESULT value which you can convert to a WebErrorStatus by calling:
Windows.Web.WebError.getStatus(hresult)
Change #4: the progress callback
The progress function for HttpClient gives you an HttpProgress object. Like the WinJS.xhr() progress calls, you can find out the overall progress of your HTTP call. The key difference is that the progress value is called ‘stage’ with HttpClient and ‘readyState’ in the WinJS.xhr progress calls. The values are different, too: HttpClient gives you a more fine-grained insight into the exact HTTP processing stage. This is listed in the following table.
Table: WinJS.xhr versus HttpProgress states
With this last change we’re done. Our code now uses the HttpClient API instead of WinJS.xhr.
Advantages of the HttpClient family of classes
Now let’s look at some of the advantages of the HttpClient APIs from the 2013 //Build/ talk “Five great reasons to use the new HttpClient API.” Only four of the reasons apply to JavaScript, so we’ll just discuss four reasons here.
Reason #1: Strongly typed headers
The WinJS.xhr function lets you set an HTTP header for a request. But the headers are specified just as strings: you need to be quite knowledgeable in the exact data format, and errors are hard to catch. The HttpClient API lets you specify HTTP header values using strong types that reduce errors and handle the correct header formatting for you.
For example, suppose you want to read the last-modified date sent by the server. With strongly typed headers, you just examine the response.content.headers.lastModified value as a JavaScript date object. You don’t have to loop through the different headers, doing string compares (case insensitive, per RFC 2616!) and then parsing the date field yourself. Instead the value is simply handed to you. If you need the original headers (as strings), they’re all available to you.
Reason #2: Access to cookies
The existing JavaScript HTTP APIs handle cookies automatically: cookies sent to your app by servers are parsed and stored as needed for your app, and are automatically formatted and sent back to the server as needed. But previously your app couldn’t participate in this code: you couldn’t set or delete or list those cookies. HttpClient includes easy-to-use APIs that let you access the cookie container. As you might expect, Windows Store apps can only access their own cookies; you aren’t allowed to examine, set, or delete cookies from other apps.
All access to cookies is from the CookieManager object that’s part of the HttpBaseProtocolFilter. The CookieManager has three methods: deleteCookie, getCookies, and setCookie. As an example, here’s how to set a cookie called ‘myCookieName’ that will be sent when you send a request to any path in any sub-domain of ‘example.com’:
var bpf = new HttpBaseProtocolFilter();
var cookieManager = bpf.CookieManager;
var cookie = new HttpCookie(“myCookieName”, “.example.com”, “/”);
cookie.Value = “myValue”;
cookieManager.SetCookie(cookie);
// Use this base protocol file with an HttpClient.
Var httpClient = new HttpClient(bpf);
In the sample, we first get a CookieManager from an HttpBaseProtocolFilter. Then we create a cookie, set its value, and then set the cookie into the CookieManager.
Reason #3: Control over caching
Normally you don’t need to worry about caching. The server generally sets the right kind of headers on the HTTP responses, and the stack returns either cached or non-cached data as appropriate. But sometimes you need more control. The Windows.Web.Http classes let you control both how data is read from the network cache and when the network cache is updated with server responses. Caching is controlled with the cacheControl sub-object in the HttpBaseProtocolFilter. Note that each instance of an HttpClient generally has its own HttpBaseProtocolFilter, each of which is individually controlled. Changing a setting for one won’t change the setting for another.
The Windows.Web.Http.Filter.HttpCacheReadBehavior enumeration has three settings for reading from the cache:
- default means to work like a web browser works: if the resource is in the network cache, and it’s not expired (based on expiration data originally provided by the server), the cached resource is returned; otherwise, the HttpBaseProtocolFilter calls out to the web service to get the resource.
- mostRecent automatically does an if-modified-since back and forth with the server. If the resource is in the cache, we’ll automatically ask the web server for the resource, but with an if-modified-since header that’s initialized from the cached resource information. If the server returns a new version of the resource, that new version is returned; otherwise the cached value is returned. If the resource wasn’t in the cache, it’s retrieved from the server.
This is a great option when you need the freshest possible data and can afford the additional delays from the extra network round-trips.
- onlyFromCache means that only data from the cache is returned; if the requested network data isn’t in the cache, the operation completes with an error (“The system cannot find the file specified”) . To help your app start faster: when the app starts, you can require all resources be read from the cache, which is much faster than reading from the network. After the app starts, you can re-get the data, only this time actually allowing network access.
If you combine this with the ContentPrefetcher in Windows 8.1, the user can get the best of both worlds: the app launch speed of showing cached content and the freshness of seeing new-to-them content right at startup. The ContentPrefetcher class provides a mechanism for specifying resources that Windows should try to download in advance of your app being launched by the user. For more info about the ContentPrefetcher, see Matt Merry and Suhail Khalid’s 2013 //build/ talk “Building Great Service Connected Apps.”
Reason #4: Place your code modules into the processing pipeline for cleaner, more modular code
It’s great when your business logic can just make simple HTTP requests to web services. But at the same time, your app needs to handle a variety of conditions: your code needs to handle authentication, work correctly for network retries, handle metered networks and roaming, and more. The HttpClient API lets you create filters — chunks of modular code written to a common interface — to handle these common cases, and lets you place them into the HTTP processing pipeline. Each filter sees the requests going out and the responses coming back, and can modify the requests and responses as needed.
Let’s demonstrate this by adding the HTTP sample 503 retry filter into the demo code. To do this, we need to add the HttpFilters project from the HttpClient sample as a new project in our solution, add the HttpFilters as a reference in our JavaScript project, and create a filter pipeline that uses the HttpRetryFilter and the HttpBaseProtocolFilter. You can get the HttpClient sample from the HttpClient sample (Windows 8.1). The 503 Retry filter, on getting a 503 error from a server, automatically retries the request.
Step 1: Download the HttpClient sample (the C++, JavaScript) into a new directory. Remember where you downloaded it to!
Step 2: Open your JavaScript app project file.
Step 3: Right-click the solution and click Add>Existing Project to add the downloaded HttpClient sample’s HttpFilters project.
Step 4: In the JavaScript code project, right-click References>Add Reference. In the resulting Reference Manager dialog box, pick the HttpFilters reference from the solution/projects tab
Step 5: Create a filter pipeline in your JavaScript code and pass the filter pipeline into your HttpClient object. Filter pipelines are generally constructed from the HttpBaseProtocolFilter side and work their way to the HttpClient object. Each filter commonly takes in the pipeline-so-far in its constructor.
In the JavaScript code, you’ll first create the HttpBaseProtocolFilter. This filter doesn’t take in any parameters. Then you’ll construct the HttpFilters.HttpRetryFilter. The constructor takes in a single parameter: an IHttpFilter. Pass in the HttpBaseProtocolFilter that you just constructed. Lastly, the new HttpClient constructor can also take in an IHttpFilter object; pass in the HttpRetryFilter that you just made. The result is an HttpClient with a filter pipeline consisting of two filters: the HttpRetryFilter and the HttpBaseProtocolFilter.
The code changes are below. After the HttpClient object is created with the new filter pipeline, the rest of the code is unchanged.
var bpf = new Windows.Web.Http.Filters.HttpBaseProtocolFilter();
var retryFilter = new HttpFilters.HttpRetryFilter(bpf);
var hc = new Windows.Web.Http.HttpClient(retryFilter);
If you need to debug the C++ filter code, set your project for “Native with Script” debugging. This is set in project properties, in the “Debugger Type” field of the debugger tab. If you need to just debug the native (C++) code and not JavaScript, you can set Debugger Type to Native Only.
That’s all you need to do for your code to handle server 503 retries correctly. And if our retry filter doesn’t fully meet your needs, you have the source code ready to be updated to your specifications.
In Closing
The Windows.Web.Http classes have powerful features including strongly typed headers, access to cookies, useful control over caching, and filters that let you inject your code into the HTTP processing pipeline. These classes let you connect your app to web services with a minimum of code and a maximum of power and flexibility.
-Peter Smith, Senior Program Manager
More info
Can’t get enough? Check out these great links:
- There’s a developer poster for the HttpClient API available at the Microsoft Download Center
- The documentation is at
- The HttpClient sample is at
- The Web Authentication Broker sample includes filters for OAuth and OAuth 2.0. Drop them into your filter pipeline and with a bit of configuration you’ll be able to access popular websites with ease! The sample is at
- There’s a Build talk about HttpClient:
- There’s another Build talk about other networking APIs including the ContentPrefetcher feature at
Join the conversation | http://blogs.windows.com/buildingapps/2013/09/06/updating-your-javascript-apps-to-use-the-new-windows-web-http-api/ | CC-MAIN-2015-06 | refinedweb | 2,791 | 56.55 |
functions inside switch cases
- Currently Being ModeratedJan 1, 2013 6:23 PM (in response to GeekPod42)
It's hardly correct.
Play is a primitive (integer); it's not a verb. An integer can be used within a function but not the way you're using it.
You really need to study up on language fundamentals.
- Currently Being ModeratedJan 2, 2013 6:25 AM (in response to Michael Superczynski)
please, expound. I cant find an example how to implement a function, all of the examples have the following:
...
case Play:{
//do something
break;
}
...
- Currently Being ModeratedJan 2, 2013 9:04 AM (in response to GeekPod42)
You'd do well to learn the difference between integers (type: int) and functions before trying to go any further.
If you don't know any C, you're headed for a world of pain. I'd suggest you put aside your attempts at Objective C and Cocoa and learn the fundamentals. Start here:
1
Then move on to
2
Programming in Objective C
Then start working through
3.
Cocoa Programming for Mac
I made the mistake of starting with 3, realised I needed to read 2 by about Chapter 5, then about a third of the way through 2 realised I needed 1.
Do it the right way round and save yourself both time and a lot of headaches.
- Currently Being ModeratedJan 2, 2013 1:20 PM (in response to softwater)
Sorry, I do not clearify, this is C++.
- Currently Being ModeratedJan 4, 2013 11:22 AM (in response to GeekPod42)
Sounds like a namespace collision. Rename the function.
- Currently Being ModeratedJan 4, 2013 1:06 PM (in response to Keith Barkley)
Thank you so much I would never have seen that! | https://discussions.apple.com/message/20769199 | CC-MAIN-2014-10 | refinedweb | 287 | 63.59 |
Prefix To Infix Conversion In C++
In this tutorial, we shall learn the easiest way to convert prefix expression to infix expression in C++. As we know that we humans are more used to infix expressions such as ( a + b ) rather than prefix expression such as (+ a b). Contrary to that the computer finds it easy to understand prefix and postfix rather than infix.
The reason is, the easy and fast implementation of prefix and postfix expression. For the problem, we shall be using STL stack, as we require some Last In First Out data structure.
Algorithm and the convert function
Initially, the user inputs integer ‘n’ the total number of strings to be inputted, and the next n lines have prefix strings. We have also used a stack of string named ‘ch’ to store the operands. The for loop goes from rightmost end to beginning. The reason is that if we encounter an operand ( i.e. a variable) then it will be pushed into the stack but if any operator is encountered then two operands will be popped.
The first one stored in op1 and the next in op2. After that, the function convert is called which converts the prefix expression to infix part by part. The final infix expression is stored at the top of the stack. Below is our C++ code for Prefix To Infix Conversion:
#include <bits/stdc++.h> typedef long long ll; // macro for long long using namespace std; string convert(string op1, string op, string op2); // Function to convert the value of part of expression int main() { ll n; // User inputs n no of prefix expressions cin >> n; for (ll t = 0; t < n; t++) { string s; cin >> s; stack<string> ch; for (ll i = s.length() - 1; i >= 0; i--) { if (s[i] == ',') // Condition to ignore comma { continue; } else if (isalpha(s[i])) // checks if character is operand(variable) or operator { string temp; temp=temp+s[i]; // Convert char to const char* ch.push(temp); // that is to covert character to string } else if (s[i] == '+' || (s[i] == '-' || (s[i] == '*' || (s[i] == '/')))) { string op1,op2,temp; temp=temp+s[i]; // Convert char to const char* op1 = ch.top();ch.pop(); op2 = ch.top();ch.pop(); string res = convert(op1,temp,op2); ch.push(res); } } cout << ch.top() << endl; // The top of stack holds the result } return 0; } string convert(string op1, string op, string op2){ if (op == "+") return "("+ op1 + "+" + op2 + ")"; else if (op == "-") return "("+ op1 + "-" + op2 + ")"; else if (op == "*") return "("+ op1 + "*" + op2 + ")"; else if (op == "/") return "("+ op1 + "/" + op2 + ")"; }
For the input:
5 +,+,A,*,B,C,D *+AB+CD +*AB*CD +++ABCD *,+,A,B,C
The output is:
((A+(B*C))+D) ((A+B)*(C+D)) ((A*B)+(C*D)) (((A+B)+C)+D) ((A+B)*C)
You may observe the code that it is written so, as to ignore the comma also if the input string has some. We also have added a parenthesis to each sub-expression of the infix string for clarity. Hence we get the infix expression finally in order of their execution.
You may also like to learn: | https://www.codespeedy.com/prefix-to-infix-conversion-in-c/ | CC-MAIN-2021-17 | refinedweb | 519 | 68.91 |
Haskell Weekly News: March 13, 2006 Greetings, and thanks for reading issue 28 of HWN, a weekly newsletter covering developments in the Haskell community. Each Monday, new editions are posted to [1]the Haskell mailing list and to [2]The Haskell Sequence. [3]RSS is also available. 1. 2. 3. Announcements * Alternative to Text.Regex. Chris Kuklewicz [4]announced an alternative to Text.Regex. While working on the [5]language shootout, Chris implemented a new efficient regex engine, using parsec. It contructs a parser from a string representation of a regular expression. 4. 5. * pass.net. S. Alexander Jacobson [6]launched Pass.net. Written in Haskell, using HAppS, Pass.net lets websites replace registration, confirmation mails, and multiple passwords with a single login, authenticating via their email domain. 6. Haskell' This section covers activity on [7]Haskell'. * [8]Partial application syntax * [9]Extending the `...` notation * [10]The dreaded offside rule * [11]Strictness standardization 7. 8. 9. 10. 11. Discussion * Non-trivial markup transformations. Further on last week's article on encoding markup in Haskell, Oleg Kiselyov [12]demonstrates non-trivial transformations of marked-up data, markup transformations by successive rewriting (aka, `higher-order tags') and the easy definition of new tags. 12. * Popular libraries and tools. John Hughes [13]posted (and [14]here) some interesting figures on the most important libraries and tools, based on the results of his survey of users earlier this year. 13. 14. * haskell-prime fun. Just for fun, Ross Paterson [15]posted, some thought-provoking [16]statistics on haskell-prime traffic. 15. 16. * New collections package. Jean-Philippe Bernardy [17]hinted that his new collections package is almost done. 17. * Is notMember not member? John Meacham [18]sparked a bit of a discussion on whether negated boolean functions are useful with a patch adding Data.Set and Data.Map.notMember. 18. * Namespace games. In a similar vein, Don Stewart [19]triggered discussion on how to sort the hierarchical namespace, when proposing alternatives to the longish Text.ParserCombinators module name. 19. Darcs Corner * Darcs-server. Unsatisified with the current techniques for centralised development with darcs, Daan Leijen went ahead and [20. 20. * darcsweb 0.15, by Alberto Bertogli, has been [21]released. 21. Contributing to HWN You can help us create new editions of this newsletter. Please see the [22]contributing information, send stories to dons -at- cse.unsw.edu.au. The darcs repository is available at darcs get 22. | http://www.haskell.org/pipermail/haskell/2006-March/017693.html | CC-MAIN-2014-42 | refinedweb | 404 | 51.24 |
I'm pretty new to python and coding in general, so sorry in advance for any dumb questions. My program needs to split an existing log file into several *.csv files (run1,.csv, run2.csv, ...) based on the keyword 'MYLOG'. If the keyword appears it should start copying the two desired columns into the new file till the keyword appears again. When finished there need to be as many csv files as there are keywords.
53.2436 EXP MYLOG: START RUN specs/run03_block_order.csv
53.2589 EXP TextStim: autoDraw = None
53.2589 EXP TextStim: autoDraw = None
55.2257 DATA Keypress: t
57.2412 DATA Keypress: t
59.2406 DATA Keypress: t
61.2400 DATA Keypress: t
63.2393 DATA Keypress: t
...
89.2314 EXP MYLOG: START BLOCK scene [specs/run03_block01.csv]
89.2336 EXP Imported specs/run03_block01.csv as conditions
89.2339 EXP Created sequence: sequential, trialTypes=9
...
onset type
53.2436 EXP
53.2589 EXP
53.2589 EXP
55.2257 DATA
57.2412 DATA
59.2406 DATA
61.2400 DATA
...
import csv
QUERY = 'MYLOG'
with open('localizer.log', 'rt') as log_input:
i = 0
for line in log_input:
if QUERY in line:
i = i + 1
with open('run' + str(i) + '.csv', 'w') as output:
reader = csv.reader(log_input, delimiter = ' ')
writer = csv.writer(output)
content_column_A = [0]
content_column_B = [1]
for row in reader:
content_A = list(row[j] for j in content_column_A)
content_B = list(row[k] for k in content_column_B)
writer.writerow(content_A)
writer.writerow(content_B)
Looking at the code there's a few things that are possibly wrong:
You may be looking at something like the code below (pending clarification in the question):
import csv NEW_LOG_DELIMITER = 'MYLOG' def write_buffer(_index, buffer): """ This function takes an index and a buffer. The buffer is just an iterable of iterables (ex a list of lists) Each buffer item is a row of values. """ filename = 'run{}.csv'.format(_index) with open(filename, 'w') as output: writer = csv.writer(output) writer.writerow(['onset', 'type']) # adding the heading writer.writerows(buffer) current_buffer = [] _index = 1 with open('localizer.log', 'rt') as log_input: for line in log_input: # will deal ok with multi-space as long as # you don't care about the last column fields = line.split()[:2] if not NEW_LOG_DELIMITER in line or not current_buffer: # If it's the first line (the current_buffer is empty) # or the line does NOT contain "MYLOG" then # collect it until it's time to write it to file. current_buffer.append(fields) else: write_buffer(_index, current_buffer) _index += 1 current_buffer = [fields] # EDIT: fixed bug, new buffer should not be empty if current_buffer: # We are now out of the loop, # if there's an unwritten buffer then write it to file. write_buffer(_index, current_buffer) | https://codedump.io/share/wdCSOn6QVg3v/1/how-to-split-a-log-file-into-several-csv-files-with-python | CC-MAIN-2018-26 | refinedweb | 448 | 69.79 |
30 August 2013 20:03 [Source: ICIS news]
HOUSTON (ICIS)--US Gulf to Asia chemical freight rates jumped $10/tonne (€7.60/tonne) this week on increased styrene traffic and tightening vessel space, shipping sources said on Friday.
Rates increased on shipments of 5,000 tonnes to $80-85/tonne from $70-75/tonne previously.
On 2,000-tonne shipments, rates rose to $100-105/tonne from $90-95/tonne previously.
Styrene traffic that pushed up rates on the Transatlantic route this week also had an impact on increases to the USG-Asia tradelane.
A 5,000-tonne styrene shipment from ?xml:namespace>
The latest report from Odin Marine Group said the
The most recent SSY Base Oil Report said heavy contract nominations for September had also tightened space and pushed freights | http://www.icis.com/Articles/2013/08/30/9702130/usg-asia-chem-freight-rates-jump-on-styrene-traffic-tight.html | CC-MAIN-2014-42 | refinedweb | 132 | 63.7 |
Key Concepts
Review core concepts you need to learn to master this subject
Java objects’ state and behavior
Constructor Method in Java.
Java instance
Creating a new Class instance in Java
Java dot notation
Java method signature
The body of a Java method
Java Variables Inside a Method
Java objects’ state and behavior
Java objects’ state and behavior
In Java, instances of a class are known as objects. Every object has state and behavior in the form of instance fields and methods respectively.
Constructor Method in Java.
Constructor Method in Java.
Java classes contain a constructor method which is used to create instances of the class.
The constructor is named after the class. If no constructor is defined, a default empty constructor is used.
Java instance.
Creating a new Class instance in Java
Creating a new Class instance in Java
In Java, we use the
new keyword followed by a call to the class constructor in order to create a new instance of a class.
The constructor can be used to provide initial values to instance fields.
Java dot notation
Java dot notation
In Java programming language, we use
. to access the variables and methods of an object or a Class.
This is known as dot notation and the structure looks like this-
instanceOrClassName.fieldOrMethodName
Java method signature
Java method signature
In Java, methods are defined with a method signature, which specifies the scope (private or public), return type, name of the method, and any parameters it receives.
The body of a Java method
The body of a Java method
In Java, we use curly brackets
{} to enclose the body of a method.
The statements written inside the
{} are executed when a method is called.
Java Variables Inside a Method
Java Variables Inside a Method
Java variables defined inside a method cannot be used outside the scope of that method.
Returning info from a Java method
Returning info from a Java method
A Java method can return any value that can be saved in a variable. The value returned must match with the return type specified in the method signature.
The value is returned using the
return keyword.
Method parameters in Java.
- 1All programs require one or more classes that act as a model for the world. For example, a program to track student test scores might have Student, Course, and Grade classes. Our real-world concer…
- 2The fundamental concept of object-oriented programming is the class. A class is the set of instructions that describe how an instance can behave and what information it contains. Java has pre-d…
- 3We create objects (instances of a class) using a constructor method. The constructor is defined within the class. Here’s the Car class with a constructor: public class Car { public Car() { /…
- 4Our last exercise ended with printing an instance of Store, which looked something like [email protected] The first part, Store, refers to the class, and the second part @6bc7c054 refers to the insta…
- 5We’ll use a combination of constructor method and instance field to create instances with individual state. We need to alter the constructor method because now it needs to access data we’re assig…
- 6Now that our constructor has a parameter, we must pass values into the method call. These values become the state of the instance. Here we create an instance, ferrari, in the main() method with “r…
- 7Objects are not limited to a single instance field. We can declare as many fields as are necessary for the requirements of our program. Let’s change Car instances so they have multiple fields. We…
- 8Java is an object-oriented programming language where every program has at least one class. Programs are often built from many classes and objects, which are the instances of a class. Classes def…
- 1In the last lesson, we created an instance of the Store class in the main method. We learned that objects have state and behavior: We have seen how to give objects state through instance fiel…
- 2Remember our Car example from the last lesson? Let’s add a method to this Car called startEngine() that will print: Starting the car! Vroom! This method looks like: public void startEngine() {…
- 3Great! When we add the startEngine() method to the Car class, it becomes available to use on any Car object. We can do this by calling the method on the Car object we created, for example. Here …
- 4A method is a task that an object of a class performs. We mark the domain of this task using curly braces: {, and }. Everything inside the curly braces is part of the task. This domain is called t…
- 5We saw how a method’s scope prevents us from using variables declared in one method in another method. What if we had some information in one method that we needed to pass into another method? Sim…
- 6Earlier, we thought about a Savings Account as a type of object we could represent in Java. Two of the methods we need are depositing and withdrawing: public SavingsAccount{ double balance; p…
- 7Remember, variables can only exist in the scope that they were declared in. We can use a value outside of the method it was created in if we return it from the method. We return a value by u…
- 8When we print out Objects, we often see a String that is not very helpful in determining what the Object represents. In the last lesson, we saw that when we printed our Store objects, we would see …
- 9Great work! Methods are a powerful way to abstract tasks away and make them repeatable. They allow us to define behavior for classes, so that the Objects we create can do the things we expect them …
What you'll create
Portfolio projects that showcase your new skills
A Basic Calculator
It's time to build fluency in Object Oriented Java. In this next Pro Project, we're going to practice Classes, Methods, Objects in Java so you can hone your skills and feel confident taking them to the real world. Why? It's vital that you get comfortable creating classes and writing methods that perform various operations. What's next? Arithmetic operations, divisibility rules, Java methods. You got this!
Build A Droid
Practice object-oriented Java by creating a `Droid` class and creating different instances of Droid. Droids are robots built to perform tasks. A droid can be built for any task so it's the perfect candidate for a Java class!
How you'll master it
Stress-test your knowledge with quizzes that help commit syntax to memory | https://www.codecademy.com/learn/learn-java/modules/learn-java-object-oriented-java-u | CC-MAIN-2020-05 | refinedweb | 1,097 | 61.97 |
Re: Two identical copies of an image mounted result in changes to both images if only one is modified
On Thu, Jun 20, 2013 at 3:47 PM, Clemens Eisserer linuxhi...@gmail.com wrote: Hi, I've observed a rather strange behaviour while trying to mount two identical copies of the same image to different mount points. Each modification to one image is also performed in the second one. Example: dd
Re: Scary OOPS when playing with --bind, --move, and friends
On Tue, Dec 21, 2010 at 10:51 AM, C Anthony Risinger anth...@extof.me wrote: in short, everything works fine until you --bind across a subvol via the special folders created when one takes a snapshot, # mount --bind root/subvol of my current root/home/anthony bind # touch bind/TEST you can
Re: Scary OOPS when playing with --bind, --move, and friends
On Tue, Dec 21, 2010 at 11:16 AM, Fajar A. Nugraha l...@fajar.net wrote: On Tue, Dec 21, 2010 at 10:51 AM, C Anthony Risinger anth...@extof.me wrote: i'm on 2.6.36.2 Try 2.6.35 or later. I tested something similar under ubuntu maverick (2.6.35-24-generic) and it works just fine. Sorry, hit
Re: Synching a Backup Server
On Fri, Jan 7, 2011 at 12:35 AM, Carl Cook cac...@quantum-sci.com wrote: I want to keep a duplicate copy of the HTPC data, on the backup server Is there a BTRFS tool that would do this? AFAIK zfs is the only opensource filesystem today that can transfer block-level delta between two snapshots,
Re: Synching a Backup Server
On Fri, Jan 7, 2011 at 5:26 AM, Carl Cook cac...@quantum-sci.com wrote: On Thu 06 January 2011 13:58:41 Freddie Cash wrote: Simplest solution is to write a script to create a mysqldump of all databases into a directory, add that to cron so that it runs at the same time everyday, 10-15 minutes
Re: btrfsck segmentation fault
On Sat, Jan 8, 2011 at 5:29 AM, cwillu cwi...@cwillu.com wrote: On Fri, Jan 7, 2011 at 3:15 PM, Andrew Schretter schr...@math.duke.edu wrote: I have a 10TB btrfs filesystem over iSCSI that is currently unmountable. I'm currently running Fedora 13 with a recent Fedora 14 kernel
Re: Synching a Backup Server
On Sun, Jan 9, 2011 at 6:46 PM, Alan Chandler a...@chandlerfamily.org.uk wrote: then create snapshots of these directories: /mnt/btrfs/ |- server-a |- server-b |- server-c |- snapshots-server-a |- @GMT-2010.12.21-16.48.09
Re: Synching a Backup Server
On Mon, Jan 10, 2011 at 5:01 AM, Hugo Mills hugo-l...@carfax.org.uk wrote: There is a root subvolume namespace (subvolid=0), which may contain files, directories, and other subvolumes. This root subvolume is what you see when you mount a newly-created btrfs filesystem. Is there a detailed
Re: Adding a disk fails
On Fri, Jan 21, 2011 at 2:00 PM, Helmut Hullen hul...@t-online.de wrote: Hallo, Carl, Du meintest am 20.01.11: If you shutdown the system, at the reboot you should scan all the device in order to find the btrfs ones # find the btrfs device btrfs device scan This must be done at
Re: Cannot Create Partition
On Mon, Jan 24, 2011 at 1:07 AM, cac...@quantum-sci.com wrote: On /dev/sda I have sda1 which is my / bootable filesystem for Debian formatted ext4. This is 256MB on a 2TB drive. Really? How do you know it's 256 MB? # fdisk /dev/sda WARNING: GPT (GUID Partition Table) detected on
Re: Btrfs system won't start on Ubuntu (relationship problems...)
On Sun, Mar 13, 2011 at 11:46 PM, Jérôme Poulin jeromepou...@gmail.com wrote: As a sidenote USB converters don't have low level access to the disk so it also makes smartctl and stuff not working at all. That depends on the disk and controller. I had an old USB controller with PATA disk,
Re: cloning single-device btrfs file system onto multi-device one
On Mon, Mar 21, 2011 at 11:24 PM, Stephane Chazelas stephane.chaze...@gmail.com wrote: AFAICT, compression is enabled at mount time and would only apply to newly created files. Is there a way to compress files already in a btrfs filesystem? You need to select the files manually (not possible
Re: read-only subvolumes?
On Wed, Mar 23, 2011 at 3:21 PM, Andreas Philipp philipp.andr...@gmail.com wrote: I think it is since I upgraded to kernel version 2.6.38 (I do not create subvolumes on a regular basis.). thor btrfs # btrfs subvolume create 123456789 Create subvolume './123456789' thor btrfs # touch
Re: [PATCH 0/2] btrfs: allow cross-subvolume BTRFS_IOC_CLONE
On Fri, Apr 1, 2011 at 8:40 PM, Chris Mason chris.ma...@oracle.com wrote: Excerpts from Christoph Hellwig's message of 2011-04-01 09:34:05 -0400: I don't think it's a good idea to introduce any user visible operations over subvolume boundaries. Currently we don't have any operations over
Re: How Snapshots Inter-relate?
On Fri, Apr 22, 2011 at 10:03 PM, cac...@quantum-sci.com wrote: Would it be good practice to say, once a year, do a completely new fresh snapshot? There's no such thing as new fresh snapshot. You can create a new, empty subvolume. Or you can create snapshot of existing root/subvolume, which
Re: Rename a btrfs filesystem?
On Sat, Apr 30, 2011 at 9:24 AM, Evert Vorster evors...@gmail.com wrote: Hi there. Just a quick question: How do I rename an existing btrfs filesystem without destroying all the subvolumes on it? From mkfs.btrfs it says -L sets the initial filesystem label. With ext2, 3 and 4 the
Re: Cannot Deinstall a Debian Package
On Wed, May 4, 2011 at 2:27 AM, cac...@quantum-sci.com wrote: Having a failure that may be because grub2 doesn't BTRFS. /boot is ext3 and / is BTRFS. Does Debian (or whatever distro you use) support BTRFS /? If yes, you should ask them. If no, then you should've already known that there's a
Re: btrfs csum failed
On Wed, May 4, 2011 at 7:44 AM, Martin Schitter m...@mur.at wrote: Am 2011-05-04 02:28, schrieb Josef Bacik: Wait why are you running with btrfs in production? do you know a better alternative for continuous snapshots? :) zfs :D it works surprisingly well since more than a year. well the
Re: Cannot Deinstall a Debian Package
On Wed, May 4, 2011 at 5:20 AM, cac...@quantum-sci.com wrote: On Tuesday 3 May, 2011 14:26:52 Fajar A. Nugraha wrote: Does Debian (or whatever distro you use) support BTRFS /? If yes, you should ask them. What do you mean 'does Debian support BTRFS'? The kernel supports it. Just because
Compression: per filesystem, or per subvolume?
Currently using Ubuntu Natty, kernel 2.6.38-9-generic, I have these mount points using btrs subvolumes $ mount -t btrfs /dev/sda2 on / type btrfs (rw,noatime,subvolid=256,compress-force=zlib) /dev/sda2 on /home type btrfs (rw,noatime,subvolid=258,compress=lzo) Yet dmesg seems to show only zlib
Re: Compression: per filesystem, or per subvolume?
, so right now I just use separate /boot/grub in ext4 to make it work correctly. -- Fajar On May 8, 2011 7:35 AM, Fajar A. Nugraha l...@fajar.net wrote: Currently using Ubuntu Natty, kernel 2.6.38-9-generic, I have these mount points using btrs subvolumes $ mount -t btrfs /dev/sda2
Re: BTRFS, encrypted LVM and disk write cache ?
On Fri, May 13, 2011 at 4:36 AM, Swâmi Petaramesh sw...@petaramesh.org wrote: However shifting from ext3 to BTRFS has been enough to turn my perfectly stable system into a perfectly unstable and crash-prone system :-/ Well, first of all, btrfs is still under heavy development. Add to that the
Re: BTRFS, encrypted LVM and disk write cache ?
On Fri, May 13, 2011 at 3:59 PM, Swâmi Petaramesh sw...@petaramesh.org wrote: Adding to the fact that it comes included with the stock and distro kernels... That gives a bit contradictory signals... Should I stay or should I go ? Looks a bit like legal babble boiling down to « Yes, it is
Re: BTRFS, encrypted LVM and disk write cache ?
On Fri, May 13, 2011 at 4:33 PM, Swâmi Petaramesh sw...@petaramesh.org wrote: # uname -a Linux tethys 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 03:31:50 UTC 2011 i686 i686 i386 GNU/Linux # mount | grep btrfs /dev/mapper/VG1-TETHYS on / type btrfs (rw,relatime,subvol=UBUNTU,compress=zlib)
Re: [PATCH] Btrfs: make lzo the default compression scheme
On Fri, May 27, 2011 at 2:32 PM, Sander san...@humilis.net wrote: Li Zefan wrote (ao): As the lzo compression feature has been established for quite a while, we are now ready to replace zlib with lzo as the default compression scheme. Please be aware that grub2 currently can't load files
btrfs error after using kernel 3.0-rc1
While using btrfs as root on kernel 3.0-rc1, there was some errors (I wasn't able to capture the error) that forced me to do hard reset. Now during startup system drops to busybox shell because it's unable to mount root partition. Is there a way to recover the data, as at least grub2 was still
Re: btrfs error after using kernel 3.0-rc1
On Wed, Jun 1, 2011 at 6:06 AM, Fajar A. Nugraha l...@fajar.net wrote: While using btrfs as root on kernel 3.0-rc1, there was some errors (I wasn't able to capture the error) that forced me to do hard reset. Now during startup system drops to busybox shell because it's unable to mount root
btrfs-progs-unstable tmp branch build error
When building from tmp branch I got this error: mkfs.c: In function ‘main’: mkfs.c:730:6: error: ‘ret’ may be used uninitialized in this function mkfs.c:841:43: error: ‘parent_dir_entry’ may be used uninitialized in this function make: *** [mkfs.o] Error 1 git blame shows the last commit for
[PATCH] make btrfs filesystem label command actually work
new #=== Not sure if you need if you need a signoff for something as trivial as this, but here it is just in case. Signed-off-by: Fajar A. Nugraha l...@fajar.net --- btrfs.c |6 ++ 1 files changed, 6 insertions(+), 0 deletions(-) diff
Re: Announcing btrfs-gui
On Thu, Jun 2, 2011 at 6:20 AM, Hugo Mills h...@carfax.org.uk wrote:
Re: kernel BUG at fs/btrfs/tree-log.c:820!
On Mon, Jun 6, 2011 at 1:34 AM, Thierry Noret tno...@yahoo.fr wrote: Hello, Since my computer has switch off with hard reset, I can't mount my home directory. / is btrfs too and there is no problem Kernel-2.6.38-R6 I've try with 2.6.39 and same problem Thanks T.Noret [ cut
Re: Can anyone boot a system using btrfs root with linux 3.14 or newer?
On Thu, Apr 24, 2014 at 10:23 AM, Chris Murphy li...@colorremedies.com wrote: It sounds like either a grub.cfg misconfiguration, or a failure to correctly build the initrd/initramfs. So I'd post the grub.cfg kernel command line for the boot entry that works and the entry that fails, for
Re: Which companies contribute to Btrfs?
On Thu, Apr 24, 2014 at 6:39 PM, David Sterba dste...@suse.cz wrote: On Wed, Apr 23, 2014 at 06:18:34PM -0700, Marc MERLIN wrote: I writing slides about btrfs for an upcoming talk (at linuxcon) and I was trying to gather a list of companies that contribute code to btrfs.
Re: Convert btrfs software code to ASIC
On Mon, May 19, 2014 at 3:40 PM, Le Nguyen Tran lntran...@gmail.com wrote: Hi, I am Nguyen. I am not a software development engineer but an IC (chip) development engineer. I have a plan to develop an IC controller for Network Attached Storage (NAS). The main idea is converting software code
Re: Convert btrfs software code to ASIC
On Mon, May 19, 2014 at 8:09 PM, Le Nguyen Tran lntran...@gmail.com wrote: I now need to understand the operation of btrfs source code to determine. I hope that one of you can help me Have you read the wiki link? -- Fajar -- To unsubscribe from this list: send the line unsubscribe
Re: Very slow filesystem
On Thu, Jun 5, 2014 at 5:15 AM, Igor M igor...@gmail.com wrote: Hello, Why btrfs becames EXTREMELY slow after some time (months) of usage ? # btrfs fi show Label: none uuid: b367812a-b91a-4fb2-a839-a3a153312eba Total devices 1 FS bytes used 2.36TiB devid1 size 2.73TiB
Re: Very slow filesystem
(resending to the list as plain text, the original reply was rejected due to HTML format) On Thu, Jun 5, 2014 at 10:05 AM, Duncan 1i5t5.dun...@cox.net wrote: Igor M posted on Thu, 05 Jun 2014 00:15:31 +0200 as excerpted: Why btrfs becames EXTREMELY slow after some time (months) of usage ?
Re: latest btrfs-progs and asciidoc dependency
On Thu, Jun 5, 2014 at 9:41 PM, Marc MERLIN m...@merlins.org wrote: On Thu, Jun 05, 2014 at 12:52:04PM +0100, Tomasz Chmielewski wrote: And it looks the dependency is ~1 GB of new packages? O_o That seems painful, but at the same time, the alternative, nroff/troff sucks. Part ofyour problem
Re: Filesystem corrupted, is there any hope?
On Fri, Jun 24, 2011 at 5:16 PM, Michael Stephenson mickstephen...@googlemail.com wrote: Hello, I formatted my home partition with btrfs, not realising that the fsck tool can't actually fix errors, as I have just discovered on your wiki. Had I knew this I would have not used it so early, this
Re: will mkfs.btrfs do an initial pre-discard for SSDs like mke2fs does for Ext4?
On Sun, Jul 3, 2011 at 6:00 PM, Werner Fischer devli...@wefi.net wrote: Hi all, are there any plans that future versions of mkfs.btrfs will do an initial pre-discard for SSDs? (AFAIK mkfs.btrfs does not do this currently) It should already have it. That is, if you look in the right place
Re: TRIM support article from 2009 explains why it can be problematic (especially on SATA drives
Re: TRIM support
. -- Fajar On Sun, Jul 10, 2011 at 10:59 PM, Fajar A. Nugraha l...@fajar.net wrote:
Re: corruption. notreelog has no effect? anything else to try?
On Sat, Jul 16, 2011 at 3:51 AM, mck m...@wever.org wrote: My laptop btrfs partition has become corrupt after a power+battery outage. # btrfs-show Label: none uuid: e7b37e5d-c704-4ca8-ae7e-f22dd063e165 Total devices 1 FS bytes used 116.33GB devid 1 size 226.66GB used
Re: corruption. notreelog has no effect? anything else to try?
On Sun, Jul 17, 2011 at 7:28 AM, Mck m...@wever.org wrote: Knowing very little about zero-log and select-super should i continue using my laptop like normal now? Or is this filesystem still considered corrupt and i should backup and format it all from scratch? This is my guess: - since you
Re: Emergency - Can't Boot
On Sun, Jul 31, 2011 at 4:12 AM, cac...@quantum-sci.com wrote: On Saturday 30 July, 2011 13:46:21 Hugo Mills wrote: On Sat, Jul 30, 2011 at 12:51:51PM -0700, . wrote: I just did my monthly dist-upgrade and rebooted, only to have it stall at Control D. It tried to automatically run
Re: corrupted btrfs volume: parent transid verify failed
On Mon, Aug 15, 2011 at 4:13 AM, Yalonda Gishtaka yalonda.gisht...@gmail.com wrote: Halp! I was recently forced to power cycle my desktop PC, and upon restart, the btrfs /home volume would no longer mount, citing the error BUG: scheduling while atomic: mount /5584/0x2. I retrieved
Re: Rename BTRfs to MuchSlowerFS ?
On Tue, Sep 6, 2011 at 10:30 PM, Swâmi Petaramesh sw...@petaramesh.org wrote: On Monday 5 September 2011 22:25:23 Sergei Trofimovich wrote: I've seen similar problem on Ubuntu-11 + Aspire One (8GB of slow SSD). More specifically half of ubuntu install went very fast and when disk was ~50% free
Re: Rename BTRfs to MuchSlowerFS ?
On Fri, Sep 16, 2011 at 2:37 AM, Felix Blanke felixbla...@gmail.com wrote: I'm using btrfs since one year now and it's quite fast. I don't feel any differences to other filesystems. Never tried a benchmark but for my daily work it's nice. Your workload must be light :) I also never had any
Re: Rename BTRfs to MuchSlowerFS ?
On Fri, Sep 16, 2011 at 1:21 PM, Maciej Marcin Piechotka uzytkown...@gmail.com wrote: On Fri, 2011-09-16 at 05:16 +0700, Fajar A. Nugraha wrote: On Fri, Sep 16, 2011 at 2:37 AM, Felix Blanke felixbla...@gmail.com wrote: I'm using btrfs since one year now and it's quite fast. I don't feel any
Re: Honest timeline for btrfsck
On Sun, Oct 9, 2011 at 4:13 AM, Asdo a...@shiftmail.org wrote: On 10/07/11 22:19, Diego Calleja wrote: On Viernes, 7 de Octubre de 2011 21:10:33 Asdo escribió: failures, but you can always mount by rolling back to a previous uberblock, showing an earlier view of the filesystem, which would
btrfs root + mount subvolid=0 problem
Hi I have a system with Ubuntu natty i386 which uses btrfs root. It has worked mostly well, but I have a problem when I want to create new snapshot. Current layout looks something like this $ mount | grep btrfs /dev/sda6 on / type btrfs (rw,noatime,subvolid=258,compress-force=lzo) /dev/sda6 on
Re: btrfs root + mount subvolid=0 problem oneiric). Does anyone know if this is a know problem, or how to get further
Re: btrfs root + mount subvolid=0 problem
On Mon, Oct 10, 2011 at 7:09 PM, Fajar A. Nugraha l...@fajar.net wrote:
Re: btrfs-progs: new integration branch out
On Wed, Oct 12, 2011 at 7:34 PM, Hugo Mills h...@carfax.org.uk wrote: All - After a long wait (sorry about that, things have been busy for me lately), I've managed to pull together a new integration branch for btrfs-progs. This can be pulled from:
Re: btrfs-progs: new integration branch out
On Wed, Oct 12, 2011 at 7:34 PM, Hugo Mills h...@carfax.org.uk wrote: Fixes or updated patches for any problems you may find are welcomed, of course. I noticed that btrfs subvolume snapshot is now broken. It keeps on saying Invalid arguments for subvolume snapshot. Further checking shows
Re: btrfs-progs: new integration branch out
On Wed, Oct 12, 2011 at 11:50 PM, Mitch Harder mitch.har...@sabayonlinux.org wrote: On Wed, Oct 12, 2011 at 10:22 AM, Fajar A. Nugraha l...@fajar.net wrote: I noticed that btrfs subvolume snapshot is now broken. It keeps on saying Invalid arguments for subvolume snapshot. Further checking
Re: Could I create volumes on one device ?
On Wed, Oct 12, 2011 at 11:51 PM, bbsposters bbspost...@yahoo.com.tw wrote: Hi list, I want to create volumes (not subvolumes) on one device. Could it work? If it works, how can I do by btrfs tools ? If it can't, is there any way to create subvolumes which have their independent space? For
Re: Snapshot rollback
On Mon, Oct 24, 2011 at 12:45 PM, dima dole...@parallels.com wrote: Phillip Susi psusi at cfl.rr.com writes: I created a snapshot of my root subvol, then used btrfs-subvolume set-default to make the snapshot the default subvol and rebooted. This seems to have correctly gotten the system to
Re: Snapshot rollback
On Mon, Oct 24, 2011 at 3:24 PM, dima dole...@parallels.com wrote: Fajar A. Nugraha list at fajar.net writes: A problem with that, though, if you decide to put /boot on btrfs as well. Grub uses the default subvolume to determine paths (for kernel, initrd, etc). A workaround is to manually
Re: Snapshot rollback
On Tue, Oct 25, 2011 at 9:00 AM, dima dole...@parallels.com wrote: Fajar A. Nugraha list at fajar.net writes: AFAIK you have three possible ways to use /boot on btrfs: (1) put /boot on subvolid=0, don't change the default subvolume. That works, but all your snapshot/subvols will be visible
Re: Snapshot rollback
On Tue, Oct 25, 2011 at 3:54 PM, dima dole...@parallels.com wrote: Hi Fajar, I think I am doing just this, but my subvolumes are not visible under /boot. I have all my subvolumes set up like this: /path/to/subvolid_0/boot a simple directory bind-mounted to / /path/to/subvolid_0/__active my
Re: Unable to mount (or, why not to work late at night).
On Thu, Oct 27, 2011 at 10:22 PM, Ken D'Ambrosio k...@jots.org wrote: So, I was trying to downgrade my Ubuntu last night, and, before doing anything risky like that, I backed up my disk via dd to an image on an external disk. some of us make use of snapshot/clone, whether it's using btrfs or
Re: Unable to mount (or, why not to work late at night).
On Fri, Oct 28, 2011 at 7:32 AM, Ken D'Ambrosio k...@jots.org wrote: some of us make use of snapshot/clone, whether it's using btrfs or zfs :) No, this is just flat my fault: it doesn't matter what backup method you use if you do it wrong. (I actually have three snapshots of each of my two
Re: How to remount btrfs without compression?
On Tue, Nov 8, 2011 at 8:06 AM, Eric Griffith egriffit...@gmail.com wrote: Edit your fstab, remove the compress flag, reboot. Tell btrfs to rebalance the system, reboot again. And I -THINK- that'll decompress all the files I think the original question was how to force uncompressed mode,
Re: How to remount btrfs without compression?
On Wed, Nov 9, 2011 at 2:48 PM, Lubos Kolouch lubos.kolo...@gmail.com wrote: Sorry for possibly OT question - when I have historical btrfs system mounted with zlib compression, can I remount it with lzo ? yes What will happen? Will the COW be broken and the files taking duplicate space? Or
Re: fsck with err is 1
On Wed, Nov 23, 2011 at 12:33 PM, Blair Zajac bl...@orcaware.com wrote: Hello, I'm trying btrfs in a VirtualBox VM running Ubuntu 11.10 with kernel 3.0.0. Running fsck I get a message with err is 1. Does this mean there's an error? Is err either always 0 or 1, or does err increment
Re: btrfs and load (sys)
On Thu, Nov 24, 2011 at 8:00 AM, Chris Samuel ch...@csamuel.org wrote: Another possibility I *think* is that you could try 3.1 with Chris Mason's for-linus git branch pulled into it. Hopefully someone who knows the procedure better than I can correct me on this! :-) My method is: - use 3.1.1
Re: btrfs/git question.
On Tue, Nov 29, 2011 at 8:58 AM, Phillip Susi ps...@cfl.rr.com wrote: On 11/28/2011 12:53 PM, Ken D'Ambrosio wrote: Seems I've picked up a wireless regression, and randomly drop my WiFi connection with more recent kernels. While I'd love to try to track down the issue, the sporadic nature
Re: btrfs/git question.
On Tue, Nov 29, 2011 at 10:22 PM, Chris Mason chris.ma...@oracle.com wrote: On Tue, Nov 29, 2011 at 09:33:37AM +0700, Fajar A. Nugraha wrote: On Tue, Nov 29, 2011 at 8:58 AM, Phillip Susi ps...@cfl.rr.com wrote: On 11/28/2011 12:53 PM, Ken D'Ambrosio wrote: Seems I've picked up a wireless
Re: btrfs errors
On Fri, Dec 2, 2011 at 7:34 PM, Mike Thomas bt...@thomii.com wrote: Hi, I've been using btrfs for a while now, I've been utilizing snapshotting nightly/weekly/monthly. During the weekly I also do a backup of the filesystem to an ext4 filesystem. My storage is a linux md raid 5 volume. I've
Re: Filesystem acting up during balance
2011/12/9 Ricardo Bánffy rban...@gmail.com: Dec 9 01:06:21 adams kernel: [ 207.912535] usb 1-2.1: reset high speed USB device number 7 using ehci_hcd That's usually a REALLY bad sign. If you can remove the drive from the USB enclosure, I suggest you plug it to onboard SATA port. That way at
Re: btrfs encryption problems
On Thu, Dec 1, 2011 at 5:15 AM, 810d4rk 810d...@gmail.com wrote: I plugged it directly by sata and this is what I get from the 3.1 kernel: [ 581.921417] sdb: sdb1 [ 581.921642] sd 2:0:0:0: [sdb] Attached SCSI disk [ 660.040263] EXT4-fs (dm-4): VFS: Can't find ext4 filesystem ... and then
Re: What is best practice when partitioning a device that holds one or more btr-filesystems
On Thu, Dec 15, 2011 at 4:42 AM, Wilfred van Velzen wvvel...@gmail.com wrote: On Wed, Dec 14, 2011 at 9:56 PM, Gareth Pye gar...@cerberos.id.au wrote: On Thu, Dec 15, 2011 at 5:51 AM, Wilfred van Velzen wvvel...@gmail.com wrote: (I'm not interested in what early adopter users do when they are
Re: Extreme slowdown
On Fri, Dec 16, 2011 at 1:49 AM, Tobias tra...@robotech.de wrote: Hi all! My BTRFS-FS ist getting really slow. Reading is ok, writing is slow and deleting is horrible slow. There are many files and many links on the FS. # btrfs filesystem df /srv/storage Data: total=3.09TB, used=3.07TB
Re: BTRFS fsck apparent errors
On Wed, Jul 4, 2012 at 8:42 PM, David Sterba d...@jikos.cz wrote: On Wed, Jul 04, 2012 at 07:40:05AM +0700, Fajar A. Nugraha wrote: Are there any known btrfs regression in 3.4? I'm using 3.4.0-3-generic from a ppa, but a normal mount - umount cycle seems MUCH longer compared to how
Re: file system corruption removal / documentation quandry
On Thu, Jul 12, 2012 at 12:13 PM, eric gisse jowr...@gmail.com wrote: Basically, phoronix showed there is a --repair option. After enabling snapshotting and playing around with the various discussed options, I discovered that --repair and no special mount options was sufficient to get the
Re: brtfs on top of dmcrypt with SSD - Trim or no Trim
On Thu, Jul 19, 2012 at 1:13 AM, Marc MERLIN m...@merlins.org wrote: TL;DR: I'm going to change the FAQ to say people should use TRIM with dmcrypt because not doing so definitely causes some lesser SSDs to suck, or possibly even fail and lose our data. Longer version: Ok, so several months
Re: Very slow samba file transfer speed... any ideas ?
On Thu, Jul 19, 2012 at 7:39 PM, Shavi N shav...@gmail.com wrote: So btrfs gives a massive difference locally, but that still doesn't explain the slow transfer speeds. Is there a way to test this? I'd try with real data, not /dev/zero. e.g: dd_rescue -b 1M -m 1.4G /dev/sda testfile.img ... or
Re: Very slow samba file transfer speed... any ideas ?
On Fri, Jul 20, 2012 at 5:23 PM, Shavi N shav...@gmail.com wrote: Hence I'm asking.. I know that I get fast copy/write speeds on the btrfs volume from real life situations, How did you know that? So far none of your posted test result have shown that btrfs vol in your system is FAST. -- Fajar
Re: Upgrading from 2.6.38, how?
On Wed, Jul 25, 2012 at 11:39 AM, Gareth Pye gar...@cerberos.id.au wrote: My proposed upgrade method is: Boot from a live CD with the latest kernel I can find so I can do a few tests: A - run the fsck in read only mode to confirm things look good B - mount read only, confirm that I can read
Re: How can btrfs take 23sec to stat 23K files from an SSD?
On Wed, Aug 1, 2012 at 1:01 PM, Marc MERLIN m...@merlins.org wrote: So, clearly, there is something wrong with the samsung 830 SSD with linux It it were a random crappy SSD from a random vendor, I'd blame the SSD, but I have a hard time believing that samsung is selling SSDs that are slower
Re: raw partition or LV for btrfs?
On Sun, Aug 12, 2012 at 11:46 PM, Daniel Pocock dan...@pocock.com.au wrote: I notice this question on the wiki/faq: and as it hasn't been answered, can
Re: I want to try something on the BTR file system,...
On Mon, Aug 13, 2012 at 8:22 AM, Ben Leverett ben...@live.com wrote: could you please send me a copy of the btr driver/kernel? I wonder if using live.com email has something to do with how you ask that question :P Anyway, depending on what you want to use it for, you might find it easier to
Re: raw partition or LV for btrfs?
On Mon, Aug 13, 2012 at 11:19 AM, Kyle Gates kylega...@hotmail.com wrote: Also, I think the current grub2 has lzo support. You're right grub2 (1.99-18) unstable; urgency=low [ Colin Watson ] ... * Backport from upstream: - Add support for LZO compression in btrfs (LP: #727535). so
Re: raw partition or LV for btrfs?
On Tue, Aug 14, 2012 at 8:28 PM, Daniel Pocock dan...@pocock.com.au wrote: Can you just elaborate on the qgroups feature? - Does this just mean I can make the subvolume sizes rigid, like LV sizes? Pretty much. - Or is it per-user restrictions or some other more elaborate solution? No If I
Re: raw partition or LV for btrfs?
On Tue, Aug 14, 2012 at 9:09 PM, cwillu cwi...@cwillu.com wrote: If I understand correctly, if I don't use LVM, then such move and resize operations can't be done for an online filesystem and it has more risk. You can resize, add, and remove devices from btrfs online without the need for LVM.
oops with btrfs on zvol
Hi, I'm experimenting with btrfs on top of zvol block device (using zfsonlinux), and got oops on a simple mount test. While I'm sure that zfsonlinux is somehow also at fault here (since the same test with zram works fine), the oops only shows things btrfs-related without any usable mention of
Re: enquiry about defrag
On Sun, Sep 9, 2012 at 2:49 PM, ching lschin...@gmail.com wrote: On 09/09/2012 08:30 AM, Jan Steffens wrote: On Sun, Sep 9, 2012 at 2:03 AM, ching lschin...@gmail.com wrote: 2. Is there any command for the fragmentation status of a file/dir ? e.g. fragment size, number of fragments. Use the
Re: Workaround for hardlink count problem?
On Mon, Sep 10, 2012 at 4:12 PM, Martin Steigerwald mar...@lichtvoll.de wrote: Am Samstag, 8. September 2012 schrieb Marc MERLIN: I was migrating a backup disk to a new btrfs disk, and the backup had a lot of hardlinks to collapse identical files to cut down on inode count and disk space.
Re: specify UUID for btrfs
On Thu, Sep 13, 2012 at 1:07 PM, ching lu lschin...@gmail.com wrote: Is it possible to specify UUID for btrfs when creating the filesystem? Not that I know of or changing it when it is offline? This one is a definite no. i have several script/setting file which have hardcoded UUID and i do
Re: Experiences: Why BTRFS had to yield for ZFS
On Wed, Sep 19, 2012 at 2:28 PM, Casper Bang casper.b...@gmail.com wrote: Anand Jain Anand.Jain at oracle.com writes: archive-log-apply script - if you could, can you share the script itself ? or provide more details about the script. (It will help to understand the work-load in
Re: Tunning - cache write (database)
On Mon, Oct 1, 2012 at 8:27 PM, Cesar Inacio Martins cesar_inacio_mart...@yahoo.com.br wrote: My problem: * Using btrfs + compression , flush of 60 MB/s take 4 minutes (on this 4 minutes they keep constatly I/O of +- 4MB/s no disks) (flush from Informix database) * OpenSuse 12.1
Re: Tunning - cache write (database)
On Tue, Oct 2, 2012 at 3:16 AM, Clemens Eisserer linuxhi...@gmail.com wrote: I suggest you start by reading After that, PROBABLY start your database by preloading libeatmydata to disable fsync completely. Which will cure
Re: btrfs causing reboots and kernel oops on SL 6 (RHEL 6)
On Sat, Jun 4, 2011 at 11:33 AM, Joel Pearson japear...@agiledigital.com.au wrote: Hi, I'm using SL 6 (RHEL 6) and I've been playing around with running PostgreSQL on btrfs. Snapshotting works ok, but the computer keeps rebooting without warning (can be 5 mins or 1.5 hours), finally I
Re: Naming of subvolumes
On Sat, Oct 27, 2012 at 8:58 AM, cwillu cwi...@cwillu.com wrote: I haven't tried btrfs send/receive for this purpose, so I can't compare. But btrfs subvolume set-default is faster than the release of my finger from the return key. And it's easy enough the user could do it themselves if they
Re: Naming of (bootable) subvolumes
On Sun, Oct 28, 2012 at 12:22 AM, Chris Murphy li...@colorremedies.com wrote: On Oct 26, 2012, at 9:03 PM, Fajar A. Nugraha l...@fajar.net wrote: So back to the original question, I'd suggest NOT to use either send/receive or set-default. Instead, setup multiple boot environment (e.g. old
Re: [Request for review] [RFC] Add label support for snapshots and subvols
On Fri, Nov 2, 2012 at 5:16 AM, cwillu cwi...@cwillu.com wrote: btrfs fi label -t /btrfs/snap1-sv1 Prod-DB-sand-box-testing Why is this better than: # btrfs su snap /btrfs/Prod-DB /btrfs/Prod-DB-sand-box-testing # mv /btrfs/Prod-DB-sand-box-testing /btrfs/Prod-DB-production-test # ls
Re: [Request for review] [RFC] Add label support for snapshots and subvols
On Fri, Nov 2, 2012 at 5:32 AM, Hugo Mills h...@carfax.org.uk wrote: On Fri, Nov 02, 2012 at 05:28:01AM +0700, Fajar A. Nugraha wrote: On Fri, Nov 2, 2012 at 5:16 AM, cwillu cwi...@cwillu.com wrote: btrfs fi label -t /btrfs/snap1-sv1 Prod-DB-sand-box-testing Why is this better than
Re: Production use with vanilla 3.6.6
On Mon, Nov 5, 2012 at 7:07 PM, Stefan Priebe - Profihost AG s.pri...@profihost.ag wrote: Hello list, is btrfs ready for production use in 3.6.6? Or should i backport fixes from 3.7-rc? Is it planned to have a stable kernel which will get all btrfs fixes backported? I would say no to both,
Re: fstrim on BTRFS
On Thu, Dec 29, 2011 at 11:37 AM, Roman Mamedov r...@romanrm.ru wrote: On Thu, 29 Dec 2011 11:21:14 +0700 Fajar A. Nugraha l...@fajar.net wrote: I'm trying fstrim and my disk is now pegged at write IOPS. Just wondering if maybe a btrfs fi balance would be more useful, since: Modern | https://www.mail-archive.com/search?l=linux-btrfs@vger.kernel.org&q=from:%22Fajar+A.+Nugraha%22 | CC-MAIN-2020-45 | refinedweb | 6,028 | 73.37 |
Python os.remove fails to remove
I want to remove files as follows:
path = "username/hw/01/" file_list = ["main.cc", "makefile"] files = os.listdir(path) del_files = list(set(files) - set(file_list)) for del_file in del_files: try: os.remove(path + del_file) except FileNotFoundError as e: print("\t" + e.strerror) except OSError as e: print("\t" + e.strerror)
Which is not working. If I try running
.... try: os.remove(path + del_file) os.remove(path + del_file) except ...
the exception fires. However, if checked after with ls or nautilus, for example, the files are still there.
What works is
files = os.listdir(path) del_files = list(set(files) - set(file_list)) while (del_files): for del_file in del_files: try: os.remove(path + del_file) time.sleep(0.5) print("\t\tRemoving " + path + del_file) except FileNotFoundError as e: print("\t" + e.strerror) except OSError as e: print("\t" + e.strerror) files = os.listdir(path) del_files = list(set(files) - set(file_list))
This is incredibly ugly. When print statements are included, it will run more than once to get all of the requested files. What am I missing?
If it matters,
$ python3 --version Python 3.4.3
1 answer
- answered 2018-02-13 02:57 Adam Schettenhelm
You might need to use
os.remove(os.path.join(path, del_file))instead of
os.remove(path + del_file)if path doesn't end with a path separator. Docs: os.path?
- How to get python script file path that has been compiled in binary .exe?
I have python script
myscript.pythat I compiled with pyinstaller with the following command:
pyinstaller -F myscript.py. Now I get a file called
myscript.exe. In my script, there are line that I wrote to get the path of this file using the following code:
this_file = os.path.realpath(__file__) src = this_file filenameOnly, file_extension = os.path.splitext(src) exeFile = filenameOnly+'.exe' print ('exe file to check', exeFile) if os.path.exists(exeFile): src = exeFile print ('Binary file', src)
But this works well only if the
.exefile is having the same name as the initial
.pyfile. If I rename the binary file, my script will not detect that change
- rename the files with same names in different folders
I want to copy all the images in different folders into one folder. But the issue I am facing is files in different folders have same names e.g
Folder: A123 Front
A123 Black.jpg , A123 Pink.jpg , A123 Red.jpg
Folder: A123 Back
A123 Black.jpg , A123 Pink.jpg , A123 Red.jpg
What I want to achieve is all files in one folder and named something like,
A123_1.jpg ,A123_2.jpg , A123_3.jpg , A123_4.jpg , A123_5.jpg , A123_6.jpg
Note, A123 is product code and so I want Product code with numbr of images with that product code appended with underscore.
These are in 1000s and in sub sub directories, I have simplified it for convenience.
I have written following code , to go into directories and sub directories.
import os def replace(folder_path): for path, subdirs, files in os.walk(folder_path): c=len(files) for name in files: if 'Thumb' in name: continue file_path = os.path.join(path,name) print(file_path) new_name = os.path.join(path,strname) os.rename(file_path,strname) c-=1 print('Starting . . . ') replace('files/GILDAN')
But I am not sure how should be renaming.
- Finding the exact directory of the file in Python
Have a Pandas dataframe with the list of file names:
Files 3003.txt 3000.txt
Also having three directories in which I'd like to make search for these files:
dirnames=[ '/Users/User/Desktop/base/reports1', '/Users/User/Desktop/base/reports2', '/Users/User/Desktop/base/reports3']
Doing like this:
for dirname in dirnames: for f in os.listdir(dirname): with open (os.path.join(dirname,f), "r", encoding="latin-1") as infile: if f==df.Files: text = infile.read() print(f,dirname)
Gives the error
The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
How is it possible to do properly-rewrite the if conditions or make the for loop in another way?
Thanks! | http://quabr.com/48758703/python-os-remove-fails-to-remove | CC-MAIN-2018-34 | refinedweb | 680 | 70.6 |
, Oct 27, 2004 at 11:15:29AM +0200, Ksenia Marasanova wrote:
> Op 24-okt-04 om 14:04 heeft Oleg Broytmann het volgende geschreven:
> > People, how do you handle forks and transactions?
>
> I am on the same path (SQLObject + Quixote + SCGI), but earlier -
> worring about more primitive things yet :) But I share your concern.
> There is a mention of preferred connection approach in Quixote on this
> page:, but it's all I can
I saw it already, of course. Actually, I have searched SQLObject and
Quixote mailing lists using Google and Gmane.
> find. Maybe it's related to the forking problem?
> More comments are greatly appreciated...
Well, now I do as follows. Firest, transactions. I initialize
transactions in my DB.py, where I declare all my tables:
from Cfg import dbName, dbUser, dbPassword, dbHost
dbConn = PostgresConnection(db=dbName, user=dbUser, passwd=dbPassword, host=dbHost)
dbConn.debug = True
transaction = dbConn.transaction()
transaction._makeObsolete() # prepare for .begin()
def transaction_begin():
transaction.begin()
def transaction_commit():
transaction.commit()
def transaction_rollback():
transaction.rollback()
# the parent class for all my tables
class Table(SQLObject):
_connection = transaction
In the main script:
class MyPublisher(Publisher):
def process_request(self, request, env):
from DB import transaction_begin, transaction_commit, transaction_rollback
transaction_begin()
try:
output = Publisher.process_request(self, request, env)
except:
transaction_rollback()
raise
else:
transaction_commit()
return output
Second, forking. I have to delay importing DB until after forking.
Actually, I import it now only when it is required. In main script right
in process_request(), in other modules - in appropriate methods. This
late import delays creating the connection, so the real connection to
the database is being created late enough - after SCGI has forked.
Oleg.
--
Oleg Broytmann phd@...
Programmers don't die, they just GOSUB without RETURN.
Op 24-okt-04 om 14:04 heeft Oleg Broytmann het volgende geschreven:
> People, how do you handle forks and transactions?
>
> I use Quixote+SCGI(+Apache); my program is a long-living (Quixote)
> forking (SCGI) process. The thing that is worrying me is that I import
> a
> module that defines my tables almost before anything else. The module
> opens a connection to a Postgres database, and then SCGI forks the
> program.
> Should I worry? Shoud I close and reopen the connection? What about
> transactions?
>
I am on the same path (SQLObject + Quixote + SCGI), but earlier -
worring about more primitive things yet :) But I share your concern.
There is a mention of preferred connection approach in Quixote on this
page:, but it's all I can
find. Maybe it's related to the forking problem?
More comments are greatly appreciated...
Ksenia.
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/sqlobject/mailman/sqlobject-discuss/?viewmonth=200410&viewday=27 | CC-MAIN-2016-30 | refinedweb | 464 | 51.24 |
Namespaces
Introduction
In all of the XML documents we created so far, we knew all
the names of elements we were using and only we were using them. It is typical
that someone else would use the same names for another application, probably in
the same order of the same arrangements. If an XML document we created had to be
used in the same application with another XML document but if both documents use
the same names of elements, there would be name conflicts, making it difficult
to distinguish what name is used and when. A namespace can be used to solve such
a problem.
An XML namespace is a document that makes it possible to
distinguish the names of XML elements in a document. This makes it possible that
two similar names be used in the same XML document but having different meanings.
The Name of a Namespace
To make a namespace useful, you must create it in a document
that defines its members. To specify the location of that document, in the top
section of the XML file that will use it, create an XML tag that contains an
attribute named xmlns. The formula to follow is:
<Name xmlns="Value"></Name>
Like every XML tag, you start with a name. Like every
attribute, the xmlns name must have a value. The value must be
the address URL of the document that contains the definition of the namespace.
Of course, the tag must be closed. Here is an example:
<a xmlns:=''>
</a>
. Here is an example:
<?x | http://www.functionx.com/xml/Lesson06.htm | CC-MAIN-2016-30 | refinedweb | 258 | 69.82 |
This article explains the C++ Syntax.
Properly the Syntax of the language before you begin will save you from alot of confusion and syntax errors in the future. The Syntax that we explain below is code that you’re going to be repeating in almost every single C++ program that you make.
Code Sample
We’ll be using the code sample below across the article.
#include <iostream> using namespace std; int main() { cout << "Hello World!"; return 0; }
From the above code, we’ll be discussing the following Syntax related topics.
- Libraries and Namespace
- Curly Brackets
- Main Function
- Semi-Colons
Libraries and Namespace
#include <iostream> using namespace std;
The
#include <iostream> line is a Header File Library that brings in several functions and commands related to input and output objects.
The
using namespace std is related to variable naming, namespace and to some degree classes as well. It’s OK if you don’t understand this right now, but we’ll try to explain it through the following example.
This is what the following line looks like with the namespace line.
cout << "Hello World!";
Without the namespace import, this is what it looks like.
std:: cout << "Hello World";
Curly Brackets
Some language like Python rely on Indentation to define code blocks. Other languages like Java and C++ rely on the use of Curly brackets to “enclose” a block of code. You will typically see these in functions and loops.
Below is a short demonstration of a simple function.
int add(int x, int y){ return x + y; }
Anything within the two curly brackets is part of the function. Curly brackets also create their own local namespace. Any variable declared between the curly brackets will only be accessible within the curly brackets.
Semi-colons
int main() { cout << "Hello World!"; return 0; }
C++ requires that you add a Semi Colon at the end of every statement. This is to declare that the line has finished. In the code sample above, there are only two statements, hence both have a semi-colon at the end.
As you begin coding, you’ll gain a sense of where to add a semi-colon and where not to. Just remember, never add a semi-colon at the end of a bracket.
Main() function
The Main function in your code is always the most important. It’s quite literally the “main” function which always executes when your program is run. You can think of it as the entry point for the program, from where the code begins executing.
In case you didn’t already understand, it’s necessary for each (C++) program to have a main function. If you read the examples above again, you’ll notice this.
This marks the end of the C++ Syntax article. Any suggestions or contributions for CodersLegacy are mire than welcome. Questions regarding the article content can be asked in the comments section below. | https://coderslegacy.com/c/c-syntax/ | CC-MAIN-2021-21 | refinedweb | 482 | 74.08 |
NAME
cr_cansee - determine visibility of objects given their user credentials
SYNOPSIS
#include <sys/param.h> #include <sys/systm.h> #include <sys/ucred.h> int cr_cansee(struct ucred *u1, struct ucred *u2);
DESCRIPTION
This function determines the visibility of objects in the kernel based on the real user IDs and group IDs in the credentials u1 and u2 associated with them. The visibility of objects is influenced by the sysctl(8) variables security.bsd.see_other_gids and security.bsd.see_other_uids, as per the description in cr_seeothergids(9) and cr_seeotheruids(9) respectively.
RETURN VALUES
This function returns zero if the object with credential u1 can “see” the object with credential u2, or ESRCH otherwise.
ERRORS
[ESRCH] The object with credential u1 cannot “see” the object with credential u2. [ESRCH] The object with credential u1 has been jailed and the object with credential u2 does not belong to the same jail as u1. [ESRCH] The MAC subsystem denied visibility.
SEE ALSO
cr_seeothergids(9), cr_seeotheruids(9), mac(9), p_cansee(9) | http://manpages.ubuntu.com/manpages/lucid/man9/cr_cansee.9freebsd.html | CC-MAIN-2015-11 | refinedweb | 165 | 50.53 |
numpy.nanvar() method in Python
In this article, we will be learning about numpy.nanvar() method in Python. nanvar() is a function in NumPy module.
Definition:- the nanvar() function calculates the variance of the given data or an array data structure along with the specified axis, either row or column, by ignoring all NaN values.
To clarify, the variance is the average of the squared deviations from the mean, i.e., var =mean(abs(x-x.mean())**2).
Syntax:- numpy.nanvar(a, axis = None, dtype = None, out = None, ddof = 0, keepdims=<no value>)
Parameters:-
- a = array_like — Given data in array form.
- axis = int, a tuple of ints, None – optional — Axis or axes along which variance is computed.
- dtype = data type -optional — Type of data to be used in variance calculations. By default, it is float64.
- out = ndarray -optional — Alternate array to store the output. It must have the same shape as the initial array.
- ddof = int -optional –Delta Degrees Of Freedom: divisor is used in N – ddof, where N is the number of non – NaN values.
- keepdims = bool -optional — If true, the reduced axes are left in output array with size one dimension. The result is broadcasted correctly against the initial array.
Consequently, it returns:- variance of the input array.
Examples of numpy.nanvar() method in Python
Firstly, let us find the variance of a 1d array with and without NaN values:-
import numpy as np a = np.array([12,25,np.nan,55]) print(np.var(a),np.nanvar(a))
As a result, the following output is generated:-
nan 324.22222222222223
Secondly, let us find the variance of a 2d array on various axes with var() and nanvar():-
import numpy as np b = np.array([[1,2,3],[4,np.nan,5],[np.nan,7,8]]) print(np.var(b),np.nanvar(b)) print(np.nanvar(b,axis = 0)) print(np.nanvar(b,axis = 1)) print(np.var(b,axis=0)) print(np.var(b,axis=1))
Consequently, the output is:-
nan 5.63265306122449 [2.25 , 6.25 ,4.22222222] [0.66666667, 0.25 ,0.25 ] [ nan, nan, 4.22222222] [0.66666667, nan, nan]
As you can see above, we get different results when we change the axis. | https://www.codespeedy.com/numpy-nanvar-method-in-python/ | CC-MAIN-2022-27 | refinedweb | 369 | 58.79 |
django-helpscout 0.5.0
Help Scout integration for Django
Help Scout integration for Django.
Introduction
If you are using Help Scout to handle support tickets from your users for your Django web application, you can use Help Scout’s custom app feature to provide information on the user, such as the following:
This project provides a Django app which allows you to integrate Custom App into your Django web application and easily customize the output HTML.
Installation
You can install this library via pip:
pip install django-helpscout
Once installed, add django_helpscout to your INSTALLED_APPS:
INSTALLED_APPS = ( ..., 'django_helpscout' ..., )
Getting Started
A Django view is provided to make it easy for you to get started. First, add the view to your urls.py:from django_helpscout.views import helpscout_user urlpatterns = patterns('', # Your URL definitions url(r'^helpscout-user/$', helpscout_user), )
Once done, deploy your web application to production and point your Help Scout custom app URL to the url you have configured above and you should see a simple HTML output on Help Scout with the user’s username and date joined.
Customizing the HTML Output
You will most likely want to customize the HTML output to add in additional information related to the user. This library provides an easy way for you to override the templates that are used.
In your templates folder, create the following structure:
templates/ |- django_helpscout |- 404.html |- helpscout.html
Details on the two templates:
- 404.html: Used when a user with the given email address is not found
- helpscout.html: Used when a user is found
By adding your own templates and effectively overriding the library’s built-in templates, you can customize the output to suit your needs.
Further Customizations
You might want to use select_related to prefetch related models for a particular user, or you have complicated processing involved when loading a user. A helper decorator is available if you wish to use your own views.
The decorator helps you deal with verifying Help Scout’s signature when a request is made from their side. You can use the decorator in the following manner:
from django_helpscout.helpers import helpscout_request # your view @helpscout_request def load_user_for_helpscout(request): ... code.
History
0.5.0 (2014-08-06)
- PyPI release.
0.0.1 (2014-08-01)
- Initial release on GitHub.
- Author: Victor Neo
- Keywords: Django-Helpscout,Django,Help Scout
- License: Apache License V2
- Categories
- Package Index Owner: victorneo
- DOAP record: django-helpscout-0.5.0.xml | https://pypi.python.org/pypi/django-helpscout/0.5.0 | CC-MAIN-2017-39 | refinedweb | 405 | 52.8 |
A queue is an order collection of items from which items may be deleted at one end (called front or head of the queue) and into which items may be inserted at the other end (called the rear end or tail of the queue). It is First-in-First-out (FIFO) type of data structure. Operations on queue are: Create Queue, insert items , remove items, display etc.
Algorithm for Implementation of Queue in C++
1. Declare and initialize neccessary variables, front = 0, rear = -1 etc.
2. For enque operation,
If rear >= MAXSIZE - 1
print "Queue is full"
Else
- Increment rear by 1 i.e. rear = rear + 1;
- queue[rear] = item;
3. For next enqueue operation, goto step 2.
4. For dequeue operation
If front > rear
print "Queue is Empty"
Else
- item = queue[front]
- increment front by 1 i.e. front = front + 1
5. For dequeue next data items, goto step 4.
6. Stop
Source Code:
#include<iostream>#include<cstdlib>#define MAX_SIZE 10using namespace std;class Queue{private:int item[MAX_SIZE];int rear;int front;public:Queue();void enqueue(int);int dequeue();int size();void display();bool isEmpty();bool isFull();};Queue::Queue(){rear = -1;front = 0;}void Queue::enqueue(int data){item[++rear] = data;}int Queue::dequeue(){return item[front++];}void Queue::display(){if(!this->isEmpty()){for(int i=front; i<=rear; i++)cout<<item[i]<<endl;}else{cout<<"Queue Underflow"<<endl;}}int Queue::size(){return (rear - front + 1);}bool Queue::isEmpty(){if(front>rear){return true;}else{return false;}}bool Queue::isFull(){if(this->size()>=MAX_SIZE){return true;}else{return false;}}int main(){Queue queue;int choice, data;while(1){cout<<"\n1. Enqueue\n2. Dequeue\n3. Size\n4. Display all element\n5. Quit";cout<<"\nEnter your choice: ";cin>>choice;switch(choice){case 1:if(!queue.isFull()){cout<<"\nEnter data: ";cin>>data;queue.enqueue(data);}else{cout<<"Queue is Full"<<endl;}break;case 2:if(!queue.isEmpty()){cout<<"The data dequeued is :"<<queue.dequeue();}else{cout<<"Queue is Emtpy"<<endl;}break;case 3:cout<<"Size of Queue is "<<queue.size();break;case 4:queue.display();break;case 5:exit(0);break;}}return 0;}
this program is not excuteable on the quincy
(Used the given source code)
After I enqueued data, I dequeued and then enqueued a new one. Then select Display all elements. The last data dequeued was added at the bottom of the list. Is this an error on the code?
p.s.
I'm new to programming. | http://www.programming-techniques.com/2011/11/queue-is-order-collection-of-items-from.html | CC-MAIN-2016-50 | refinedweb | 408 | 51.44 |
Archive.org Hosts Massive Collection of MAME ROMs 193."
Excuse me if I'm just not getting it but isn't this copyright infirngement?
ROMs have always been a gray area... (Score:3, Interesting)
On one hand, it's copyrighted content, but on the other, it's ~20 year old content, and not freely available in the public marketplace (or at least, not very affordably). Most manufacturers have chosen not to pursue copyright claims against anything that is not current-gen.
Re:ROMs have always been a gray area... (Score:5, Insightful)
Legally, it isn't a grey area: It's civil infringement at the very least. The only area in which the 'not freely available' may come into play would be deciding upon the damages. If there is any copy-prevention technology involved or if you accept payment in any manner for distributing the roms, including accepting other infringing data in return (ie, using a torrent client) then it's also a criminal offense in the US under the DMCA and NET Act respectively.
On the other hand, screw the law. It's an unfair, counterproductive, rampantly abused law resulting only from a century of corporate lobbying and I have no respect for it whatsoever.
Re:ROMs have always been a gray area... (Score:5, Informative)
They seem to have an exemption.
Re:ROMs have always been a gray area... (Score:5, Informative)
That only exempts them from the anti-circumvention provisions. Plain old copyright law still applies.
A lot of the old games will have effectively lapsed now simply because their owning legal entities ceased to exist, but confirming that poses quite a challenge itsself. Just because the publisher is out of business doesn't mean the game is in the public domain - there may well have been a selling-off of rights during bankruptcy, or another company may have aquired the defunct publisher.
How hard? Well, let us say you have a game called The Lords of Midnight, published by Beyond Software. You look it up, and Beyond Software is long defunct. Game good for the taking, right? Well, no: Beyond Software was aquired by Telecomsoft, so you need to look them up too. Also defunct. Good? No, because Telecomsoft (Better known as 'Firebird') was actually owned by BT, the British telephone company, who (AFAIK) still retain the copyright. That was an easy case, it was all documented on wikipedia and the companies involved are very well-known. Identifying the true owner of something more obscure is a much more difficult prospect.
Re:ROMs have always been a gray area... (Score:5, Informative)
I can vouch for this as me and a programmer friend looked into recreating the days of shareware for the current gen. What we found was a minefield where even if the company closed its doors you had pieces of the company going here and there and nobody knew who the fuck, what the fuck, or where the fuck some 20+ year old game went. The few we did find wanted more money for the rights to distribute the SHAREWARE version of their game than a triple A title from the period could ever hope to make, we are talking about $100K+ for just the limited locked shareware even though we were doing it non profit. That is of course if they would even speak to you, we got many that were like "Oh we have zero plans for it but we might do something someday" so they refused to allow anybody to sell or distribute the shareware version.
The saddest part? We were told flat footed if we would just make it in China all our problems would go away. this is why i think China will be the next hotbed of innovation, as unlike the USA you can actually make things without having to spend the majority of your capital on lawyers.
Re: (Score:2)
As I recall most shareware came with explicit rights to redistribute granted on the splash screen, right alongside all the splashy ads for the great features you were missing out on in the full version.
Re: (Score:3)
Problem is those rights were given by company A which no longer exists and company B, which may or may not own the game, refuses to honor that agreement.
Again if we just made it in China? All our problems would have went away. Its just sad that the USA is simply unsuitable for anything other than lawsuits
:-(
Re: (Score:2)
Can they do that? Seems an awful lot like retroactively changing a license to me. Sort of like if some company bought out an open source project and tried to revoke GPL rights - if they dotted all the right T's they could do it for all future versions, but the stuff already released can't be clawed back unless they can show they didn't actually have the right to give you the license in the first place.
Re: ROMs have always been a gray area... (Score:2)
Re: (Score:2)
Take a closer look at those shareware licenses friend, because as someone who studied them closely, even talked to a lawyer or two, I can tell you that while they give you the right to USE the software almost none of them gave rights to redistribute.
With the GPL you have the exact opposite of the shareware scene, pretty much the first thing written was redistribution rights so while a company could refuse to allow future versions under GPL there would be no way to stop you from forking the code. That said
Re: (Score:2)
A lot of the old games will have effectively lapsed now simply because their owning legal entities ceased to exist,
That isn't true. ownership doesn't "ceased to exist" When a company goes bankrupt, it has assets that pass on to someone. No computer software copyright has "effectively lapsed" Of course that doesn't mean we know who owns the copyright. Many times a corporation might even know they own the copyright.
Re: (Score:2)
Read the rest of the comment. That was the point. It isn't always the case that someone else aquired those assets, but it very often is, and it can take a lot of research to determine who they ended up with after thirty years of business dealings.
Extortion (Score:2)
If [the current owners of copyright have] a problem they can say something to IA.
They would likely say it RIAA style: by suing for a large amount and, along with the service of the suit, offering to settle for a far smaller amount.
Re: (Score:3)
Couldn't the Internet Archive argue that it's in the same category as e.g. Youtube and therefore not liable unless it fails to respond to a takedown notice?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I'm all for archiving the software. And the whole abondonware sucks, but just blindly copying and sharing isn't the answer.
Re: (Score:3)
Re: (Score:2)
copyright needs a clause that says that if the copyright holder is unable or unwilling to make the work available for a reasonable price then it should fall into public domain.
Public Domain Enhancement Act [wikipedia.org]
Summary: A bill is proposed to create a $1.00 per decade tax to maintain copyright on a work starting 50 years after publication. It was opposed by the usual suspects and defeated.
Re:ROMs have always been a gray area... (Score:4, Funny)
So it definitely is illegal, but very obviously does zero damages to victim.
So, would that mean your punishment would be zero dollars?
Re: (Score:2, Insightful)
My worry is that archive.org might suffer the same fate as mp3.com. Damn good service, but they decided to dip their toe into uncharted waters, and got torn to pieces by the armies of RIAA lawyers. Hell, the RIAA has been doing DRM for over a century.
Re: (Score:2)
Nope. In such an event, the copyright holder simply sues for statutory damages instead.
Re: (Score:3)
It isn't illegal.
There are exceptions to the DMCA for:
Computer programs protected by dongles that prevent access due to malfunction or damage and which are obsolete.
Computer programs and video games distributed in formats that have become obsolete and which require the original media or hardware as a condition of access.
Therefore MAME and pretty much any emulator of abandon-ware including the software is legal to own, copy and distribute.
smf (Score:2, Informative)
As has previously been explained, a DMCA exemption allows you to bypass the DRM on something you legally own. You still have to abide by copyright law.
Also the exemptions are re-assessed annually and they decided not to keep the DMCA exemption in place for old computer games.
Re: (Score:2)
What? No. An exemption to the DMCA means you are allowed to legally bypass the copy protection, NOTHING else. The DMCA says nothing about distribution, that's the domain of copyright law. The DMCA is only an added restriction making it illegal to bypass what had previously been purely technological additional restrictions on copying. Of course so long as the software was published at least 100 years go it is no longer under copyright and you are free to distribute it, and in another few decades that wil
Re: (Score:2)
assuming Disney hasn't managed to buy another retroactive extension to the law.
Which, let's be realistic, they will do.
Re: (Score:2)
Well, they'll certainly try. But why do you suppose the current duration is 99 years and not 100? Or why things get sold for $299.99 instead of $300? Those big ol' zeros play funny funny games with the human psyche, letting us make much more rational decisions. I suspect that the effort necessary to push past 100 will be almost as much to get to 200, or for that matter "forever".
Re: (Score:2)
I thought the DMCA was the Digital Millennium COPYRIGHT Act - therefore doesn't it logically follow that it simply states and supersedes copyright issues over digital media? (I'm not a US Citizen)
Also, if nobody is around to claim copyright, how will anyone go to court over the issue? Also many copyrights from the era between 1978 and 1989, published without registration (many small-time developers) are currently in the public domain.
Re: (Score:2)
...
On the other hand, screw the law.
...
When the lawmakers and the government aren't following it, why should we?
Re: (Score:2)
Well, yeah, it is copyright infringement...and I can imagine they're gonna get creamed hard for it, given that there's a lot of stuff from big companies among the MAME romsets.
OTOH, I'm of a mind that copyright is just too damn long, so when it comes to stuff of the age of most of the classic arcade games, I just don't give.
Re: (Score:3)
Never mind that, I finally got to play ET on the Atari VCS. It's awesome!
Re: (Score:2)
If you're just being sarcastic, you should know that there's a patch [neocomputer.org] available.
42.8GB ZIP (Score:5, Informative)
Unfortunately, the only format they released the ROMs in is one huge ZIP file. Even the torrent, where torrent software might have allowed picking-and-choosing individual ROM files, is only the ridiculous 42.8GB ZIP.
I'm still looking for a list of files, but for that size, it might be EVERY MAME ROM in the MAME database of over 7000 ROMS.
Re: (Score:3).
Re:42.8GB ZIP (Score:5, Insightful).
Be patient.
They probably want to get it all out fast. By releasing it like this people will re-seed it. Had they sorted through all of it, created all the torrent seeds for it, we'd be waiting another month.
Plus, it's a lot harder to stop once the whole thing is out and about. Some of those vendors _are_ going to have a problem with this even though they have no interest in monetizing the things themselves, they'll get instantly jealous and go after them.
If you absolutely need re-packaged versions, just wait a while. Someone else will do the work for you and convenient little theme-based sections or company based sections will be released during the time you spend whining about it.
Re: (Score:2)
I think what he/she is complaining about is that the files are zipped together when they could have easily been zipped individually or in small groups.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Is that you, Nancy?
Re: (Score:2)
I guess you have to download the whole damned thing
Not necessarily. [loadscout.com]
Re: 42.8GB ZIP (Score:1)
It's like offering Slashdot as a compiled zip of all articles ever published. Download the whole lot and then see if there's an article you might want to read. Simple eh? No?
Re: (Score:2)
Re:42.8GB ZIP (Score:5, Informative)
Seem you can download indivual zips from the big zip file from and then clicking on an individual file. Seems they forgot to include a link in the description.
Re: (Score:2)
Re: (Score:3)
Here you are: [google.com]
Sorry about the formatting, but I'm not going to fix tabdamage on 28740 lines.
Re: (Score:2)
I'm still looking for a list of files, but for that size, it might be EVERY MAME ROM in the MAME database of over 7000 ROMS.
What I've got that I can find quickly, these will even show you how to build the arcade cabinets for individual ROMs. [googleusercontent.com] (Italian)
Same link English [mamechannel.it] [emulator-zone.com]
Re: (Score:2)
First problem - not everyone has a fiberoptic cable coming into their homes. This is going to take days to download.
Second problem, no on can browse the file to see if he even wants it.
Torrents are usually made up of a directory, rather than a zip file which hides the contents. I might want to download entire groups of these ROMS, and leave other groups on the server where I found them. Or, I might have wanted to browse through, and only download a dozen, or a hundred of them.
Re: (Score:2)
Re: (Score:3)
Even if IA has some bizarre exception to copyright law, you don't, so seeding that embedded copy of MK4 or Time Crisis is not completely without risk.
Re: (Score:2)
well they also have a 46 gb mess archive and likely most of us have dusty broken consoles in a closet somewhere, but have to download the whole thing since the links for individual roms is broken last time i checked for it. in the usa at least it is legal to download roms for cartridges you already own for backup usage. so extract your roms then delete the huge file or put it on a bluray 50 and hide it for when these files are legal.
Re: (Score:2) [loadscout.com]
Re:42.8GB ZIP (Score:4, Insightful)
What is the problem with a 43GB file? I have several USB flash drives laying on my desk that can hold that.
Confirmation bias. Because it's not a problem for you, it's not a problem for anyone.
Just trying to understand, I'd personally much prefer a single huge file..
Use a shared folder to copy files to a Nexus (Score:2)
There is no way whatsoever for me to download that file to my Nexus 16GB, especially since I can't seem to get USB OTG working.
Go to Google Play Store and download Rhythm Software File Manager to your Nexus device. While you're doing that, download this file on a desktop computer. Once the download finishes, possibly months later if your connection is metered, unzip this file to a folder and share the folder using FTP or SMB. On your Nexus device, open Rhythm Software File Manager, tap Network, scan your subnet for shared folders, and copy the ROM from the shared folder to the device.
Re: Use a shared folder to copy files to a Nexus (Score:3)
in this hypothetical scenario I'm working without a desktop computer. in the real world my wisp doesn't allow me to torrent. either way I'm boned. there is no way for me to access this archive.
Re: (Score:2).
Pulling this out of my rear as I'ts been so long (not sure if it's winodws only), but for you would download
cmpro for mame (Google: mame cmpro -download PDF). You can then make a request
for missing or wanted ROMs on alt.binaries.emulators.mame it's very active
a few people will jump on your request, filling (uploading) it.
Not sure just which program makes a list of your missing ROMs to upload to the group;
I would just downloaded others request.
cmpro is short for Clrmamepro
Re: (Score:2)
I think it's a bunch of dirs full of rom images with the proper names, and a simple for loop would put the individual directories into archives.
Re: (Score:2)
How about they have made 100% sure they are gonna get their collective asses sued? At THAT size we aren't talking about just the small fry here, you can bet your last buck there will be some Sega and Nintendo ROMs and they sue at the drop of a hat!
That said its still a dickish move as there is a lot of folks that have bandwidth caps, probably more folks in the world with caps than without. Hell even I would hesitate at a 43gb ROMset without having a list of what is inside, I'd hate to waste that much band
Re: (Score:3)
Re: (Score:3)
I have the torrent running. It's doing over 100KB/s, and is expected to finish in 4-5 days.
It's not our fault you're on dial-up.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2, Funny)
> It takes about the same amount to whine on here as it does to play a quick game of Galaga.
You suck at Galaga then!
Re: (Score:2)
Re: (Score:3)
Uh, not open source, requires windows and only works with Internet explorer? No thanks.
Re: (Score:2, Flamebait)
Oh, I'm sorry, I forgot that if Charliemopps doesn't personally approve of something, it must therefore be disregarded by the rest of the universe. Sorry everyone!
Uh, not open source
Neither are the games that are on offer, so you won't be interested in those either.
requires windows
Works fine under Wine.
only works with Internet explorer?
It integrates with Internet Explorer, to the extent of adding a launcher to a link's context menu. It also works perfectly happily on its own (see above).
Damm spam! (Score:1)
And this reader has been crossposting in how many threads already?
Re: (Score:3)
There is no way this is legal. (Score:5, Insightful)).
Re: (Score:3)
Just give it a few days and there will be delicious drama all over the place.
Re: (Score:2)
Re: (Score:3)).
Only if a ROM hoarder didn't use google, you can get all these files from various websites on the internet for the last decade. No one has shut them down, and it isn't like they have been hiding.
It's like you are new to the internet and computers.
Re: (Score:2)
DMCA takedown of Internet Archive in 3 2 1 ... (Score:1)
Even better if the entire Internet is shut down, not just the Archive.
MAME for Linux? (Score:1, Interesting)
Re:MAME for Linux? (Score:5, Informative)
WTF? Baseline MAME will compile on Linux or OSX now, using SDL bindings and a Qt or Cocoa debugger UI. It's even in the repos for some popular Linux distros.
Re: (Score:2, Interesting)
Have you tried QMC2?
Namco Bandai will sue (Score:2)
pacman -Si sdlmame
But does it run Pac-Man?
LoadScout (Score:3, Informative)
This little freeware program allows you to not only see what's in an archive shortly after you begin to D/L it, you can prioritize individual files inside it or pick and choose any number of them to D/L or not. Also to get bits and pieces of the archive in truncated form, still retaining the format container. I haven't used it but maybe 3 times, but these situations are perfect for it: this huge-ass, inconvenient HTTP grab of over 40 damn gigs. There's a portable version available somewhere but I can't locate it ATM. [loadscout.com]
related note: pinball (Score:3)
Us pinfans have been happily using VisualPinball & PinMAME for ages now. The VP team negotiated terms of usage with the owners of pinball ROMS (Stern, Bally, and other defunct-ish companies) which included a flatout promise not to design or publish pinball sims for games less than a year old. It seems to have worked well, in the sense that I know of no attempt either to ban distribution of the ROM files or to sue any designer or user of VP files.
Official Torrent (Score:2)
It's still One Big File, but at least you might reduce the load on archive.org. Neighborly, y'know?
Or you could always donate (3 to 1 match until EOY) [archive.org] to help with the upcoming lawsuit. (Oh there'll be one, well, just because. These bits USED to be owned, and I'm sure there are some people who still think they are -- whether they truly are or not.)
Well this is nice, maybe... (Score:2)
I've been piecing the Mame ROM collection together from Alt.binarier.emulators.mame
I admit I haven't worked on it for a year or so, I have 26 Gigs worth of ROMS, and
my UseNet isn't that quick. The version I was working on was 37 Gigs, this at 43 Gig
has grown a bit.
I like Moon Patrol if your my age it's one of the popular stand up arcades of the time
a moon buggy you jumped craters and boulders then the addition of space craft you shot at.
It's got four keys forward, backwards, jump, and fire. So would work an
Re: (Score:2)
I'd like to say !Score! but this Torrent could take a very long time, I'm uploading 31 KB/s
.5 to 1.1 K/s we'll just download it and see what's there. I knocked the upload
downloading
down to 5 KB/s could be a junk file.
Been 10 hours and I've got 1.7% at least it says 1.5 weeks to finish now it was infinite all day long. I've got 700 MB out of 43 Gigs, wish me luck.
Increased to 30 MB/s upload - breaking even...
Re: (Score:2)
Been 10 hours and I've got 1.7% at least it says 1.5 weeks to finish now it was infinite all day long. I've got 700 MB out of 43 Gigs, wish me luck.
Increased to 30 MB/s upload - breaking even...
I got it, left the torrent running all night, I'm a happy camper...
They added a warning that the zip file had changed, I took a screen shot of it and pasted it to IrfanView, saved then forgot it.
This morning I looked at the screen and see a 50% torrent and was rightly ticked, something was odd, ah ha, closed IrfanView to see the torrent at 100% so leaving it online for awhile if not longer.
Thank you Archive.org I've been after a complete collection for a long long time. BTW this is version
.151 think I was
Re: (Score:2, Insightful)
To clarfy:
These games still have commercial value. If rights holders turned a blind eye, they would be effectively permitting commercial exploitation of the ROMs (and yes, people still pay to play them). Good news for some, perhaps, but bad for the few remaining amusement companies operating licensed machines, and bad for the rights holders who will find themselves facing competition from their own games. Also, if they don't defend the trademark violations they could find their properties in the public doma
Re: (Score:2)
Re:I am an author of one of these games (Score:5, Insightful)
I think you have to put this in context. Were you expecting to get any more money from the work you put into that product? I don't think it would be reasonable to expect that these games (or at least the vast majority of them) would ever make money again. (If you think otherwise, it sounds like you *have* legal recourse here because the games are not out of copyright.) If I were in your position, though (which I kind of have been a number of times now, except most of my games were non-commercial) I would just be glad that someone gave them new life for another generation. Otherwise it would have faded into obscurity, giving you even less than you have now.
Take a step back and see that they are not trying to insult the authors as you suggest, but benefit everyone and honor the authors by propagating the work that would otherwise have faded away. I suspect (just a guess) you might be surprised at how accommodating and respectful these folks would be toward original authors if you approach them as a friend. You see them as an enemy, but really I think they are just trying to save and re-popularize something worth saving and appreciating for a bit longer, and couldn't find a practical way to contact a zillion non-existent authors in the process.
Re: (Score:2)
Copyright controls the right to copy. You focused on money and fame in your post, but gp began with right to copy.
Arguing different sides using different pieces of the argument almost never works. More so because of the personal investment here. So argue the point.
If gp is a copyright holder, and objects to the copying even without the possibility of future income, what argument is to be made to that person?
It is not merely academic. Disney and Conan Doyle both fought on creative control grounds. With Disne
Re: (Score:2)
Re:I am an author of one of these games (Score:5, Interesting)
I'm also an author of one of these games. No one asked me my permission either. Of course they didn't have to, I'm not the copyright holder. The company I worked for at the time is. I doubt they asked them either, though.
But good for Archive.org! I'm glad to see an easy way to get this collection. I'm downloading it and will be seeding it. And when I get around to overhauling my MAME cabinet I'll be using it as my source of ROMs.
Re: (Score:2)
Which game is that?
:P
Re: (Score:2)
And no one ask me for permission to copy my work. This is a fuck you to creative people who actually spent time in their lives to realize a new idea.
Troll
Re: (Score:2)
And no one ask me for permission to copy my work. This is a fuck you to creative people who actually spent time in their lives to realize a new idea.
Were you paid to do your work? Yes? Ok, then we're good. Your mechanic doesn't charge you for each time you drive your car does he?
Re: (Score:2, Insightful)
Were you paid to do your work?
I wasn't. I'm an independent musician who financed my own album and am now out several thousand dollars because of pirates. Please tell me where your argument stands on this. By the way, your arrogant comment of "Okay, then we're good" would be more accurately written as "Okay, then I'm good because I get what I want for free, and you ought to be good because, even if you didn't agree with my conditions and breaking of your contract, well, you know, I'm better and I get to choose what makes you feel good".
Y
Re: (Score:2)
At the the very least, it's all marketing.
Re: (Score:2)
Which game is that? We would like to know!
:P
Re: (Score:2)
Go cry us a river. We really don't care.
Re: (Score:2)
It is most likely compiled from C/C++ original using asm.js as the abstraction layer. Unreal plays in the browser reasonably, and Mozilla is working on speedups still.
I'm going to assume I'm right without confirming, but feel free to read more about it yourself and come up with details to complain about rather than js is bad.
Re: (Score:2)
Javascript?!?!?!
No thanks. I'd rather emulate an emulator using javascript whilst emulating windows, just to be on your level.
I love the youth of today for taking priceless optimized stuff and waving your "i'am a lazy fuck, who pisses on hard work" in its face. Nice job.
I just use Mame, [mamedev.org] I see it's up to
.152 :} the torrent (.151) is outdated already.
Re: (Score:2)
You never reboot your phone? Causes problems if you don't every so often.
Just search play.google.com or your supplier for MAME there's a player version for you. | https://games.slashdot.org/story/13/12/28/0351206/archiveorg-hosts-massive-collection-of-mame-roms | CC-MAIN-2018-05 | refinedweb | 4,984 | 71.65 |
class Foo {
private _xxx
public void setBar(b) {
println "before setting"
_xxx = b
println "after setting"
}
}
def x = new Foo()
x.bar = 1
x.bar = 2
When run, the output is:
C:\Temp>groovy settertest.groovy
before setting
after setting
before setting
after setting
Caught: groovy.lang.GroovyRuntimeException: Cannot read property: bar
even though the property bar was never read. If you add e.g.
println "goodbye"
as the last line of the script, the exception does not occur.
def foo() {
x
}
x is returned here
your example is put into a method by groovy. This means x.bar=2 is transformed into:
x.bar=2
return x.bar
so it's no wonder that you will get this error message
¹ It's even more likely to have unwanted side-effects when a method is involved, for example in "x.bar().foo = 2". | http://jira.codehaus.org/browse/GROOVY-1150 | crawl-002 | refinedweb | 144 | 69.38 |
Subject: Re: [Boost-users] [boost] [Fit]Â formal review starts today
From: paul Fultz (pfultz2_at_[hidden])
Date: 2016-03-13 21:47:21
On Sunday, March 13, 2016 5:17 PM, Lee Clagett <forum_at_[hidden]> wrote:
>
>
>
>
>> - Your knowledge of the problem domain.
>
>I have used and grokked the boost::phoenix implementation extensively.
>I do not have many other qualification in this area.
>
>
>> You are strongly encouraged to also provide additional information:
>> - What is your evaluation of the library's:
>> * Design
>
>- Gcc 5.0+
> Why are the newer compiler versions not supported? I thought C++11
> support been improving in Gcc?
Gcc 5.1 is the only one compiler that will never be supported as it has way
too many regressions. The tests may very well pass with Gcc 5.2 or 5.3. They
have in the past. However, I don't have the testing infrastructure in place to
test Gcc 5.2 or 5.3 on a regular basis, so I can't declare it officially
supported.
>-).
This is a good point and should be documented.
>-.
>- fit::detail::make and fit::construct
> Aren't these doing the same thing? `detail::make` is slightly more
> "stripped" down, but is it necessary? Also, should this be
> exposed/documented to users who wish to extend the library in some
> way?
detail::make is used to declare the functions for the adaptors. It might be possible to merge this in the future as the default behavior for fit::construct will become by value.
>- Placeholders
> If they were put in their own namespace, all of them could be
> brought into the current namespace with a single `using` without
> bringing in the remainder of the Fit library. Phoenix does this.
This is a good idea. I can do that.
>- Alias
> Why is this provided as a utility? What does this have to do with
> functions? This seems like an implementation detail.
It doesn't have to do with functions. However, I do use this in other
libraries as well. I may as well leave it undocumented for now.
>
>
>> *.
> - Why does the function get_base exist as a public function? This
> library uses inheritance liberally, so I suppose its easier to
> retrieve one if its bases?
Yes it makes it easier to retrieve the base function.
>-?
The same as I explained above.
>-?
I used to have a static_default_function in the library, that needs to be
removed.
>-?
No, I should add a more general way of adding the pipable operators to
functions.
>.
>- Macro specializations
> There are already a lot of specializations for various compiler
> versions that do not fully implement C++11. I think the fit
> implementation would be more clear if support for these
> compilers was removed or at least reduced.
I would like a large amount of portability.
> For example, there are a> number of specializations to support Gcc 4.6 specifically, but yet
> Gcc 5.0+ is not supported.
Not true. Gcc 5.1 is not supported, does not mean that gcc 5.0+ is not.
>-.
That would be a good idea.
>
>
>> * Documentation
>
>- IntegralConstant
> Needs to be defined in the Concept section.
Alright, will add it.
>-_`.
>- fit::is_callable
> Is this trait simply an alias for std::is_callable, or the Callable
> concept defined by Fit?
I don't think it is an alias for std::is_callable, because I think it works
like std::is_callable<F(T..)> whereas in Fit, it works like
fit::is_callable<F, T...>. I don't support the function signature because it
is problematic and unnecessary.
And Callable should be the same definition as in the standard.
>.
That is good idea to link Callable and is_callable.
>- fit::partial
> I think its worth mentioning the behavior with optional arguments
> (default or variadic). When the function underlying function is
> actually invoked depends on this.
Yes
>- fit::limit
> Does the implementation internally use the annotation? What
> advantage is there to this and the associated metafunction?
This can be used with fit::pipable and fit::partial to improve the errors. I
also make the limit publicly available through the metafunction so other users
can take advantage of the annotation in a similar fashion.
>- fit::tap
> Is `tee` more appropriate?
Perhaps, I got the name tap from underscore.js.
>- Extending Library
> I think it might be useful to discuss how to write your own adaptor
> or decorator.
This is a good point. I would like to expand on that at some point.
>
>
>> *.
Yes, I'll try to add some compile-time failures.
>
>
>> *.
And thanks Lee for your review.
>
>
>Lee
>
>_______________________________________________
| https://lists.boost.org/boost-users/2016/03/85872.php | CC-MAIN-2020-24 | refinedweb | 753 | 68.97 |
Video How To - Monitor your Refrigerator
2016-12-31 Edit: Updated code to MySensors Version 2.1 and added LEDs for door status (optional).
Recently we have been having problems with our refrigerator door not fully closing. Usually it's a result of something not pushed fully in and it stops the door from closing. Our fridge is old and doesn't have any built in alarms so I thought I'd "MySensorize" it so we get alerts if the door says open. I also added some Dallas Temp sensors to monitor the temperature. Nothing to advanced or sophisticated but it gets the job done.
Refrigerator Monitoring with Arduino and MySensors – 09:23
— Pete B
Parts List
- 4.7 uf Capacitor - Assorted Capacitors in the MySensors store
- Pro Mini (3.3v) -
- NRF24L01+ Radio -
- 2x DS18B20 Dallas Temperature Sensors -
- Copper Tape -
- Female Dupont Cables -
- Cat5/6 cable
- Old USB cable (use the individual wires inside for the Dallas temp sensors)
- Old phone charger (or some sort of 5v power supply)
Here is the code to find your Dallas Temp Sensor addresses. I chose to find the addresses and program them in based on recommendations from the DS18B20 datasheet. You could change the Refrigerator Monitoring code to have it automatically find them each time the device is powered up if you prefer.
#include <OneWire.h> #include <DallasTemperature.h> // Data wire is plugged into port 2 on the Arduino #define ONE_WIRE_BUS 3 //Pin where Dallas sensor is connected // Setup a oneWire instance to communicate with any OneWire devices (not just Maxim/Dallas temperature ICs) OneWire oneWire(ONE_WIRE_BUS); // Pass our oneWire reference to Dallas Temperature. DallasTemperature dallasTemp(&oneWire); // arrays to hold device addresses DeviceAddress tempAddress[7]; void setup(void) { // start serial port Serial.begin(115200); // Start up the library dallasTemp.begin(); // show the addresses we found on the bus for (uint8_t i = 0; i < dallasTemp.getDeviceCount(); i++) { if (!dallasTemp.getAddress(tempAddress[i], i)) { Serial.print("Unable to find address for Device "); Serial.println(i); Serial.println(); } Serial.print("Device "); Serial.print(i); Serial.print(" Address: "); printAddress(tempAddress[i]); Serial.println(); } } void printAddress(DeviceAddress deviceAddress) { for (uint8_t i = 0; i < 8; i++) { // zero pad the address if necessary //if (deviceAddress[i] < 16) Serial.print("0"); Serial.print("0x"); Serial.print(deviceAddress[i], HEX); if (i < 7) { Serial.print(", "); } } } void loop(void) { }
And here is the code for the Fridge monitoring
/* - PeteWill 2016-12-29 Version 1.1 - PeteWill Updated to MySensors 2.1 and added status LEDs for the doors DESCRIPTION This sketch is used to monitor your refrigerator temperature and door states. You will need to find the addresses for your Dallas temp sensors and change them in the dallasAddresses array Watch the How To video here: */ //#include <SPI.h> #include <DallasTemperature.h> #include <OneWire.h> #include <Bounce2.h> //MySensors configuration options //#define MY_DEBUG //Uncomment to enable MySensors related debug messages (additional debug options are below) #define MY_RADIO_NRF24 // Enable and select radio type attached //#define MY_NODE_ID 1 //Manually set the node ID here. Comment out to auto assign #include <MySensors.h> #define SKETCH_NAME "Refrigerator Monitor" #define SKETCH_VERSION "1.1" #define DWELL_TIME 70 //value used in all wait calls (in milliseconds) this allows for radio to come back to power after a transmission, ideally 0 #define ONE_WIRE_BUS 3 // Pin where dallas sensors are connected #define TEMPERATURE_PRECISION 12 //The resolution of the sensor OneWire oneWire(ONE_WIRE_BUS); // Setup a oneWire instance to communicate with any OneWire devices (not just Maxim/Dallas temperature ICs) DallasTemperature dallasTemp(&oneWire); // Pass our oneWire reference to Dallas Temperature. //MySensor gw; unsigned long tempDelay = 425000; float lastTemperature[2]; unsigned long tempMillis; bool metric = false; // arrays to hold device addresses DeviceAddress dallasAddresses[] = { {0x28, 0xD0, 0xD3, 0x41, 0x7, 0x0, 0x0, 0xDF}, //Freezer Address -- Modify for your sensors {0x28, 0xFF, 0x22, 0xA0, 0x68, 0x14, 0x3, 0x2F} //Fridge Address -- Modify for your sensors }; //Set up debouncer (used for door sensors) Bounce debouncer[] = { Bounce(), Bounce() }; //Make sure to match the order of doorPins to doorChildren. //The pins on your Arduino int doorPins[] = {4, 5}; //The child ID that will be sent to your controller int doorChildren[] = {2, 3}; //Freezer temp will be Child 0 and Fridge temp will be Child 1 //used to keep track of previous values contact sensor values uint8_t oldValueContact[] = {1, 1}; uint8_t doorLedPins[] = {6, 7}; // Initialize temperature message MyMessage dallasMsg(0, V_TEMP); MyMessage doorMsg(0, V_TRIPPED); void presentation() { // Send the sketch version information to the gateway sendSketchInfo(SKETCH_NAME, SKETCH_VERSION); // Register all sensors to gw (they will be created as child devices) // Present temp sensors to controller for (uint8_t i = 0; i < 2; i++) { present(i, S_TEMP); wait(DWELL_TIME); } // Present door sensors to controller for (uint8_t i = 0; i < 2; i++) { present(doorChildren[i], S_DOOR); wait(DWELL_TIME); } } void setup() { // Startup OneWire dallasTemp.begin(); // set the temp resolution for (uint8_t i = 0; i < 2; i++) { dallasTemp.setResolution(dallasAddresses[i], TEMPERATURE_PRECISION); } // // Startup and initialize MySensors library. Set callback for incoming messages. // gw.begin(NULL, NODE_ID); // // // Send the sketch version information to the gateway and Controller // gw.sendSketchInfo(SKETCH_NAME, SKETCH_VERSION); //Set up door contacts & LEDs for (uint8_t i = 0; i < 2; i++) { // Setup the pins & activate internal pull-up pinMode(doorPins[i], INPUT_PULLUP); // Activate internal pull-up //digitalWrite(doorPins[i], HIGH); // After setting up the button, setup debouncer debouncer[i].attach(doorPins[i]); debouncer[i].interval(700); //This is set fairly high because when my door was shut hard it caused the other door to bounce slightly and trigger open. //Set up LEDs pinMode(doorLedPins[i], OUTPUT); digitalWrite(doorLedPins[i], LOW); } } void loop() { unsigned long currentMillis = millis(); if (currentMillis - tempMillis > tempDelay) { // Fetch temperatures from Dallas sensors dallasTemp.requestTemperatures(); // Read temperatures and send them to controller for (uint8_t i = 0; i < 2; i++) { // Fetch and round temperature to one decimal float temperature = static_cast<float>(static_cast<int>((metric ? dallasTemp.getTempC(dallasAddresses[i]) : dallasTemp.getTempF(dallasAddresses[i])) * 10.)) / 10.; // Only send data if temperature has changed and no error if (lastTemperature[i] != temperature && temperature != -127.00) { // Send in the new temperature send(dallasMsg.setSensor(i).set(temperature, 1)); lastTemperature[i] = temperature; } } tempMillis = currentMillis; } for (uint8_t i = 0; i < 2; i++) { debouncer[i].update(); // Get the update value uint8_t value = debouncer[i].read(); if (value != oldValueContact[i]) { // Send in the new value send(doorMsg.setSensor(doorChildren[i]).set(value == HIGH ? "1" : "0")); digitalWrite(doorLedPins[i], value); oldValueContact[i] = value; } } }
- robosensor last edited by
@petewill every 425 seconds your door sensors will be blind up to 750 ms because of you using blocking call to
dallasTemp.requestTemperatures(). It is very unlikely that this will happen, but it can be
Safer way is to use non-blocking access to dallas sensors:
@robosensor Cool, thanks for pointing that out. I'll check it out!
Another nice project, you to also put in a check for millis() rolling over to zero otherwise in about 50 days the temperature will no longer be updated as the current millis() will be less then your temp stored last time check.
put in a check for millis() rolling over to zero
Wrong, not required. Pete's code is essentially:
unsigned long tempMillis; unsigned long currentMillis = millis(); if (currentMillis - tempMillis > tempDelay) { // .. do something.. tempMillis = currentMillis; }
Substracting two unsigned type values will return the modulo of the maximum value of that type, e.g.:
#include <iostream> using namespace std; int main() { unsigned long a = 0xFFFFFF10; unsigned long b = 0x00000010; cout << b - a << endl; return 0; }
Will print 256. Pete will be safe when the millis() counter wraps.
Try it here if you don't believe it
- bruster999 last edited by
Nice project! I particularly like the switch you designed. Very simple and yet functional.
Do you have a video/instruction on how to set up the phone app part of this project. I'm new to MySensors but have been looking for a way to monitor/control sensors from my iPhone for quite some time without success.
Thanks.
B
@bruster999 said:
Do you have a video/instruction on how to set up the phone app part of this project. I'm new to MySensors but have been looking for a way to monitor/control sensors from my iPhone for quite some time without success.
The app is actually part of Vera (my home automation controller). What controller are you using?
Well done, thanks Pete!
The freezer actually was one of my reasons for starting with the homeautomation stuff: Left the door open twice and was literally "fed up" with having to eat all the stuff
But guess which project I haven't finished yet. Yeah, You've got it ...
One thing I realized with the tests I've done so far: even the small wires I've used for
the internal temperature sensors will keep the silicone isolation in the door frame from sealing off the freezer. So after some time there's always some condensation around the entry point of the wires which can't be good. Something I'll have to look into yet. And I still haven't quite reached the cellar with my MySensors network (maximum distance, 2 and a half floors below)
Christoph
@hyla Thankfully our freezer wasn't left open that long but it was long enough to warrant a sensor
I have had my internal temp sensors installed for over a month and haven't had any condensation yet. Maybe I just had lucky placement of the wires? I used the wires from a usb cable (I actually cut it off a broken computer mouse). They are incredibly thin. Maybe you can try that?
As for the distance, can you place a relay (repeater) node somewhere in the middle?
- sneaksneak last edited by
Thanks for a nice project.
I am trying this but I always get Fahrenheit temperatuers in Domoticz.
I have change this line in the code.
From
boolean metric = false;
To
boolean metric = true;
Something else I should do?
That should be enough. What value does the gatway log say it received from the sensor?
Thank you for one more interesting solution. Just want to suggest useful thing which might be more simple and durable as contact pair. I mean "Switch Reed"
Best regards!
@Igor The reason I didn't use these is because most of the time my fridge door would stay open just slightly but enough that a reed switch would still register it as closed. Normally I like to use reed switches though.
A nice project.
I had a similar problem with a freezer located in the garage.
As I already had other temperature sensors and an energy meter running there (based on MySensors stuff) I just added one of these waterproof DS18B to the setup, drilled a hole through the fridge wall and stuck it in.
I use OpenHab and can see the temperature on any browser or at the cell phone client. OpenHab also emails me when temperature raises above a certain temperature.
Not as sophisticated as your solution but it works
- sneaksneak last edited by
I have now solved this problem with Fahrenheit instead of Celcius.
I had to change to this --->
float temperature = static_cast<float>(static_cast<int>((dallasTemp.getTempCByIndex(i)) * 10.)) / 10.;
From this --->
float temperature = static_cast<float>(static_cast<int>((gw.getConfig().isMetric ? dallasTemp.getTempC(dallasAddresses[i]) : dallasTemp.getTempF(dallasAddresses[i])) * 10.)) / 10.;
Now it gives me celcius.
Thanks for nice example.
@mbj Cool, thanks for sharing. I'd love to check out OpenHAB some day.
@sneaksneak Interesting. I wonder why you had to do that...? Oh well, glad it's working for you now!
When I arrived at work today and started checking mails, I had received a bunch of them saying that the temperature in the fridge was too high, so I had to drive home and close the door. I would have gotten a notification earlier through jabber if the server software we use for jabber wasn't acting up :(.
MySensors stuff was not involved in that particular sensor (esic sensor + tellstick + perl + perl), but the need to monitor this stuff was definitely there.
@Stric Dang. Glad you caught it in time! The need to monitor is definitely real. I also want to get my house outfitted with leak detectors at some point...
I'm sorry to ask a seemingly dumb question, but what do I need to do to get a third Dallas sensor to work? (I have a top/bottom fridge and a deep freezer).
I edited the following lines:
DeviceAddress dallasAddresses[] = {
{0x28, 0x29, 0x4F, 0x1, 0x0, 0x0, 0x80, 0xBB}, //Freezer Address -- Modify for your sensors
{0x28, 0x22, 0x53, 0x1, 0x0, 0x0, 0x80, 0x1E}, //Fridge Address -- Modify for your sensors
** {0x28, 0xA6, 0x58, 0x1, 0x0, 0x0, 0x80, 0x58} //Deep Freezer I added**
// set the temp resolution
for (uint8_t i = 0; i < 3; i++) { //i changed from 2
// Present temp sensors to controller
** for (int i = 0; i < 3; i++) { // i changed from 2**
gw.present(i, S_TEMP);
// Read temperatures and send them to controller
** for (int i = 0; i < 3; i++) { //i changed from 2**
I'm not sure what I'm missing. Thank you in advance for your help and great project!!!
@Krieghund
Ok, I think I found it. I needed to have the door sensors start at a higher ID:
int doorChildren[] = {3, 4}; //i changed from 2,3
Thank you
@Krieghund Glad you found it! Sorry, I couldn't respond sooner.
any chance this has been updated to 2.0? I am having issues getting the internal pull up for the temp sensors working.
@Jason-Brunk Sorry for the delay. What's the problem? I don't think that would be related to v2.0 but I could be wrong. Have you tried an external resistor?
my 4.7k resistors showed up today. So i will try with the external.
I have been able to get basic arduino sketches to work with the internal pull up and the temp sensor. but as soon as I put mysensors on it, it doesn't pick it up any more. That's why i figured i would see if you updated your sketch to 2 without the need for the external
@Jason-Brunk Interesting. I haven't updated it yet. I need to get a spare weekend to make the jump to 2.0 but I don't know when that will be. I have so many sensors in my house it will take me a while to get everything up to date.
I can only imagine. I have seen your youtube channel. jealous
This is my first sensor so be nice if I'm missing something obvious.
Your sketch has #include <MySensor.h>
In the mysensors library it is MySensors.h with an s
If I change it to #include <MySensors.h> with the s then on the line with MySensor gw; I get the error: 'MySensor' does not have a name type.
I'm trying to figure out how everything works together but can't figure this out.
For reference, my programming background consists of an 8th grade computer class where we learned to make something go from one side of the monitor to the other and yes, it was all green.
@TXSpazz This difference depends on what version you are using. For version 2 it is Mysensors.h . Earlier versions used Mysensor.h.
@mbj Thank you, I was wondering if that was it. I started trying to convert it but since I haven't slept in 23 hours it made my brains hurt, but at least I know what direction to go now.
@TXSpazz Yeah, @mbj is right, it needs to be updated to v2.0. Still haven't had a chance to do it yet
Hello,
Great work. I've watched almost all your videos. Very instructive.
One thing I am wondering about are the gray connectors 'knob like' to quickly wire your cables together. What are they? Where can I buy me some? :bowtie:
@jmkhael Are you talking about wire nuts perhaps (at least that's what I call them)? I get mine from a local store. but it looks like ebay has some too.
Yes. those! thank you
HI
Do you think you can use the PT1000 two wire temp sensor for this project? I have a number of them and they are water proof and very durable.
@Newzwaver If you Google on this subject you will find threads like this one which may help you understand the issue:
Basically I think the answer is that if you already have the electronics to read the signal from the PT1000 and can communicate that to a Mysensors sensor it should be fairly easy. If you have to build it all by yourself it is a bit more work to do like the thread above shows.
On the other hand, buying a few DS18B20 is dirt cheap and and then everything you need is already available. They come in waterproof versions as well if you feel you need this. One example from Ebay is
Hi
And thanks for the reply, I do have the PT1000 as well as those sensors. The only thing is I am trying to find a use for the PT1000. I have already placed one in my deep freeze and one out door but have several more that I just want to use. It's not the cost as most of the projects on this are cheap, it's the challenge of getting it done.
Thanks again, I | https://forum.mysensors.org/topic/2607/video-how-to-monitor-your-refrigerator/23 | CC-MAIN-2019-39 | refinedweb | 2,895 | 63.8 |
On Wed, Dec 8, 2010 at 8:10 PM, Barry Warsaw <barry at python.org> wrote: > Why do we have symlinks in the first place? It's because Debian and Ubuntu > support multiple active versions of Python at the same time. Once Python 2 is > killed off and we're all using >= Python 3.2, we can get rid of even this > cruft. That's the major reason why I worked on PEPs 3147 and 3149. It might be a long time indeed. Python 2.7 will probably keep up for several years. > I'm using zc.buildout 1.5.2 on Ubuntu 10.10 for Mailman 3. There is a known > bug related to namespace packages, which force you to use > include-site-packages=false in your buildout.cfg. Here's the bug report: > > > > Gary is assigned to the bug but there is as yet no resolution. It's the very same error and the same workaround is suggested, but I don't see enough proof that the reason behind such error lies within namespace_packages. I'll make some tests in the next days and I'll check whether it's the case. -- Alan Franzoni -- contact me at public@[mysurname].eu | https://mail.python.org/pipermail/distutils-sig/2010-December/017148.html | CC-MAIN-2016-36 | refinedweb | 202 | 85.59 |
Docs |
Forums |
Lists |
Bugs |
Planet |
Store |
GMN |
Get Gentoo!
Not eligible to see or edit group visibility for this bug.
View Bug Activity
|
Format For Printing
|
XML
|
Clone This Bug
This program acts as a download handler for ROX-Filer, fetching files dropped
from web browsers. This version can use a separate program called
DownloadManager which uses DBUS to schedule multiple downloads simultaneously.
This program is being reviewed and will be uploaded soon.
Created an attachment (id=68883) [edit]
rox-extra/fetch-0.1.1.ebuild
Python modules aren't compiled. Need updated eclass
great. added to CVS.
Another version bump. Now 0.1.2
Created an attachment (id=70173) [edit]
fetch-0.1.2.ebuild
Simple renaming of file.
Added this to portage. apparently now it requires dbus. even worse dbus with
python support :P
No, Fetch does not require dbus. Download Manager does, but Fetch can run fine
without it. Fetch is a dumb dl manager only. It has one python module which will
import dbus, but that is only called IF the user has chosen to use the Download
Manager. So technically, dbus is NOT a Depend or an Rdepend UNLESS the user
chooses the above option.
Version Upgrade. Also, removed dbus requirement since it really is _NOT_
required unless user wishes to use downloadmanager (see bug #108541) which is
not yet in portage. Moved warning messages to post inst which will show if dbus
or python are not installed.
Created an attachment (id=73104) [edit]
rox-extra/fetch-0.1.3.ebuild
Created an attachment (id=77703) [edit]
rox-extra/fetch/fetch-0.3.0.ebuild
version upgrade. If used with downloadmanager, will support new versions of
dbus >= 0.3. Dbus is NOT required for this to run, however.
6 months no action. I'm not a dev, so can't fix.
dead | http://bugs.gentoo.org/106719 | crawl-002 | refinedweb | 307 | 69.99 |
This is a 2-hour crash course on React & Node JS, meant for anyone looking to get started with web development & React.
This is a 2-hour incident course on React & Node JS, for anyone who wants to get started with web & React development. It was originally done as a workshop for the technical team at Jovian.ml. You can track online using the launch code links below.
Code:
TOPIC INDEX
Part 1 - Introduction to React Components
Part 2 - Building Interactive UIs with Props & State
Part 3 - Local Development with Node JS & Create React App
Prerequisites:
Upload page reloads on submitting a file for upload. Are you a newbie to React, and using this generic style to upload files on the web? There’s a better way to handle uploads in React. **This tutorial** **is the answer!** Today, it’ll change...
Upload page reloads on submitting a file for upload. Are you a newbie to React, and using this generic style to upload files on the web?
There’s a better way to handle uploads in React.
This tutorial is the answer!
Today, it’ll change forever if you go through this tutorial and implement it on your site.
We’ll use Node with React to upload multiple files at once. As we go along, there will be simple client-side validation and finally with uploaded notification can be shown with react-toastify.
Like always, start a react app with
create-react-app
Include the bootstrap CDN in index.html.
In contrast to creating the form from scratch, grab this snippet from bootsnipp.
This is our beautiful upload form to work with.Single React file upload
Let’s start with a simple one, a single file upload.
Capture selected file
Add a change handler in to
app.js pick the file on change.
<input type="file" name="file" onChange={this.onChangeHandler}/>
Log
event.target.files , it is an array of all stored files.
target.files[0]holds the actual file and its details.
onChangeHandler=event=>{ console.log(event.target.files[0]) }
On saving, create-react-app will instantly refresh the browser.
Store the file in state, and only upload when a user clicks the upload button.
Initially, the
selectedFilestate is set to null
constructor(props) { super(props); this.state = { selectedFile: null } }
To pass the file to the state, set
selectedFile state to
event.target.files[0].
onChangeHandler=event=>{ this.setState({ selectedFile: event.target.files[0], loaded: 0, }) }
Check the state variable again with react-devtools to verify.
Again, create-react-app will instantly refresh the browser and you’ll see the result
We have a state of files to upload.
We definitely need an upload button, upload is handled with
onClick event handler.
<button type="button" class="btn btn-success btn-block" onClick={this.onClickHandler}>Upload</button>
onClickhandle will execute
onClickHandler which sends a request to the server. The file from a state is appended as a file to FormData.
onClickHandler = () => { const data = new FormData() data.append('file', this.state.selectedFile) }
We’ll use
axios to send AJAX requests.
Install and import
axios.
import axios from 'axios';
Create form object and create
POST request with
axios. It needs endpoint URL and form data.
axios.post("", data, { // receive two parameter endpoint url ,form data }) .then(res => { // then print response status console.log(res.statusText) })
Here’s final,
onClickhandler with
axios
POST request. It sends
POST request to
[]() and gets response.
onClickHandler = () => { const data = new FormData() data.append('file', this.state.selectedFile) axios.post("", data, { // receive two parameter endpoint url ,form data })
.then(res => { // then print response status console.log(res.statusText) }) }
The file type attached is set as a state and needs to be checked. As a result, it’s a binary file.
Axios will send a request to the endpoint with a binary file in Form Data.
To receive the uploaded file, implement a backend server. It’ll receive the file sent from front-end.
Create
server.js file in the root directory
Install
express,
multer, and
cors.
npm i express multer cors nodemon –save
We’ll use express to create a server,
multer to handle files. Cors will be used to enable cross-origin request to this server.
Nodemon to monitor the changes and auto-reload, it is optional and you’ll have to restart the server manually in it’s absence.
In,
server.js initiate an express instance
var express = require('express'); var app = express(); var multer = require('multer') var cors = require('cors');
Don’t forget CORS middleware.
app.use(cors())
Create a
multer instance and set the destination folder. The code below uses /public folder. You can also assign a new file name upon upload. The code below uses
‘originalfilename’as the file name.
var storage = multer.diskStorage({ destination: function (req, file, cb) { cb(null, 'public') }, filename: function (req, file, cb) { cb(null, Date.now() + '-' +file.originalname ) } })
Create an upload instance and receive a single file
var upload = multer({ storage: storage }).single('file')
Setup the
POSTroute to upload a file
app.post('/upload',function(req, res) { upload(req, res, function (err) { if (err instanceof multer.MulterError) { return res.status(500).json(err) } else if (err) { return res.status(500).json(err) } return res.status(200).send(req.file) }) });
Start an upload object and handle an error, check for
multererror before general errors. Status OK (200) with metadata is sent back to the client on successful upload.
Make the server listen on port 8000.
app.listen(8000, function() { console.log('App running on port 8000'); });
Run
nodemon server.js in a terminal to start this server
Upload a file, you will see the file appear in the public directory.
It’s working, congratulations!Uploading multiple files in React
It’s time for uploading multiple files at once.
Add
multiplein the input field to accept multiple files in the form.
<input type="file" class="form-control" multiple onChange={this.onChangeHandler}/>
Update and
onChangeHandler remove zero indexes, it’s just event.target.files.
onChangeHandler=event=>{ this.setState({ selectedFile: event.target.files, }) }
Also, update function
onClickHandler to loop through the attached files.
onClickHandler = () => { const data = new FormData() for(var x = 0; x<this.state.selectedFile.length; x++) { data.append('file', this.state.selectedFile[x]) } axios.post("", data, { // receive two parameter endpoint url ,form data }) .then(res => { // then print response status console.log(res.statusText) }) }
In
server.js update multer upload instance to accept an array of files.
var upload = multer({ storage: storage }).array('file')
Reload the server and upload multiple files this time.
Is it working for you as well? Let us know if it isn’t.Handling Validation
Until now, nothing has gone wrong but it doesn’t mean it never will.
Here are situations where this application can crash:
Client-side validation doesn’t secure the application but can throw errors early to the user and improves the user experience.
Create a separate function named
maxSelectedFile and pass event object.
Use length to check a number of files attached. The code below returns false when a number of files reach 3.
maxSelectFile=(event)=>{ let files = event.target.files // create file object if (files.length > 3) { const msg = 'Only 3 images can be uploaded at a time' event.target.value = null // discard selected file console.log(msg) return false; } return true; }
Update
onChangeHandler to only set state when the maxSelectFile returns, that is when a number of files are less than 3.
onChangeHandler=event=>{ var files = event.target.files if(this.maxSelectFile(event)){ // if return true allow to setState this.setState({ selectedFile: files }) } }
The result
Create a
checkMimeType function and pass an event object
checkMimeType=(event)=>{ //getting file object let files = event.target.files //define message container let err = '' // list allow mime type const types = ['image/png', 'image/jpeg', 'image/gif'] // loop access array for(var x = 0; x<files.length; x++) { // compare file type find doesn't matach if (types.every(type => files[x].type !== type)) { // create error message and assign to container err += files[x].type+' is not a supported format\n'; } }; if (err !== '') { // if message not same old that mean has error event.target.value = null // discard selected file console.log(err) return false; } return true; }
Update
onChangeHandler again to include
checkMimeType.
onChangeHandler=event=>{ var files = event.target.files if(this.maxSelectFile(event) && this.checkMimeType(event))){ // if return true allow to setState this.setState({ selectedFile: files }) } }
See the output again.
Create another function
checkFileSize to check the file size. Define your limiting size and return false if the uploaded file size is greater.
checkFileSize=(event)=>{ let files = event.target.files let size = 15000 let err = ""; for(var x = 0; x<files.length; x++) { if (files[x].size > size) { err += files[x].type+'is too large, please pick a smaller file\n'; } }; if (err !== '') { event.target.value = null console.log(err) return false } return true; }
Update
onChangeHandler again to handle
checkFileSize.
onChangeHandler=event=>{ var files = event.target.files if(this.maxSelectFile(event) && this.checkMimeType(event) && this.checkMimeType(event)){ // if return true allow to setState this.setState({ selectedFile: files }) } }
The output thereafter…
That’s all on client-side validation.Improve UX with progress bar and Toastify
Letting the user know the happening is a lot better than having them stare at the screen until the upload is finished.
To improve the user experience, we can insert progress bar and a popup message
Use state variable
loaded to update real-time values.
Update the state, add
loaded: 0
constructor(props) { super(props); this.state = { selectedFile: null, loaded:0 } }
The loaded state is changed from progressEvent of the POST request.
axios.post("", data, { onUploadProgress: ProgressEvent => { this.setState({ loaded: (ProgressEvent.loaded / ProgressEvent.total*100), }) }, })
For progress bar, we use reactstrap.
Install and import progress bar from reactstrap
import {Progress} from 'reactstrap';
Add a progress bar after the file picker.
<div class="form-group"> <Progress max="100" color="success" value={this.state.loaded} >{Math.round(this.state.loaded,2) }%</Progress> </div>
See the result in action.
Beautiful, ain’t it?
Install
react-toastify and import the following:
import { ToastContainer, toast } from 'react-toastify'; import 'react-toastify/dist/ReactToastify.css';
Put the container somewhere
<div class="form-group"> <ToastContainer /> </div>
Use toast wherever you want to display a message.
First of all, place upload result
.then(res => { toast.success('upload success') }) .catch(err => { toast.error('upload fail') })
See the result.
Also, place validation result.
Update
checkMimeType function for validation.
checkMimeType=(event)=>{ let files = event.target.files let err = [] // create empty array const types = ['image/png', 'image/jpeg', 'image/gif'] for(var x = 0; x<files.length; x++) { if (types.every(type => files[x].type !== type)) { err[x] = files[x].type+' is not a supported format\n'; // assign message to array } }; for(var z = 0; z<err.length; z++) { // loop create toast massage event.target.value = null toast.error(err[z]) } return true; }
You’ve the result
Also, add
toast.warn(msg)
Include the
checkFileSizeand changes from
checkMimeType function
checkFileSize=(event)=>{ let files = event.target.files let size = 2000000 let err = []; for(var x = 0; x<files.length; x++) { if (files[x].size > size) { err[x] = files[x].type+'is too large, please pick a smaller file\n'; } }; for(var z = 0; z<err.length; z++) { toast.error(err[z]) event.target.value = null } return true; }
Change err variable to array and loop to create toast message from it.
Our react file upload is working fine, but we can have a lot of improvements like uploading to cloud providers , also use of third-party plugins for other services to improve upload experience are some possible additions.
Before we end of this tutorial, you can contribute to improve and refactor code from this tutorial send your PR to this repository.
If you loved the tutorial, you might also want to check out Mosh’s Complete React course. | https://morioh.com/p/55563f16fe26 | CC-MAIN-2020-10 | refinedweb | 1,961 | 52.87 |
PyPylon: Cannot establish IEEE1394 Transport Layer
Hi!
I am using Basler A102f camera using FireWire (IEEE 1394) for connection and pypylon (from StudentCV) to control it. I have a problem to establish Transport Layer (at least this seems to me to be a problem). The camera works fin in pylon Viewer. When running this code
import pypylon.pylon as pylon import sys; tlfactory = pylon.TlFactory.GetInstance() print(tlfactory) ptl = tlfactory.CreateTl('BaslerUsb') print(ptl) detected\_devices = ptl.EnumerateDevices() print('%i devices detected:' % len(detected\_devices))
I get till the end with printing out "0 devices detected". That's fine because I don't have a USB camera attached. But when I change the CreateTl argument to "Basler1394", which seems to be according to pylon SDK documentation, ptl is a 'NoneType' object and Error is returned.
Has anyone tried IEEE1394 cameras with pypylon and managed to run it successfully?
Thanks,
Peter | https://imaginghub.com/forum/posts/800-pypylon-cannot-establish-ieee1394-transport-layer | CC-MAIN-2022-40 | refinedweb | 150 | 51.55 |
Introduction to Programming Languages/Template Oriented Programming
Template Oriented Programming[edit]
High-order functions foster a programming style that we call template oriented. A template is an algorithm with "wholes". These wholes must be filled with operations of the correct type. The skeleton of the algorithm is fixed; however, by using different operations, we can obtain very different behaviors. Lets consider, for instance, the SML implementation of filter:
fun filter _ nil = nil | filter p (h::t) = if p h then h :: (filter p t) else filter p t
An algorithm that implements filter must apply a given unary operator p on each element of a list. Nevertheless, independent on the operation, the procedure that must be followed is always the same: traverse the list applying p on each of its elements. This procedure is a template, a skeleton that must be filled with actual operations to work. This skeleton can be used with a vast suite of different operators; thus, it is very reusable.
This programming style adheres to the open-closed principle that is typically mentioned in the context of object oriented programming. The implementation of filter is closed for use. In other words, it can be linked with other modules and used without any modification. However, this implementation is also open for extension. New operations can be passed to this algorithm as long as these operations obey the typing discipline enforced by filter. Filter can be used without modification, even if we assume that new operations may be implemented in the future, as long as these operations fit into the typing contract imposed by filter.
The combination of templates and partial application gives the programmer the means to create some very elegant code. As an example, below we see an implementation of the quicksort algorithm in SML. In this example, the function
grt is used as a function factory. Each time a new pivot must be handled, i.e., the first element of the list, we create a new comparison function via the call
grt h. We could make this function even more general, had we let the comparison operation, in this case greater than, open. By passing a different operator, say, less than, we would have an algorithm that sorts integers in a descending order, instead of in the ascending order.
fun grt a b = a > b fun qsort nil = nil | qsort (h::t) = (qsort (filter (grt h) t)) @ [h] @ (qsort (filter (leq h) t))
Templates without High-Order Functions[edit]
Some programming languages do not provide high-order functions. The most illustrious member of this family is Java. Nevertheless, templates can also be implemented in this language. Java compensates for the lack of high-order functions with the powerful combination of inheritance and subtype polymorphism. As an example, we will show how to implement the map skeleton in Java. The code below is an abstract class.
Mapper defines a method
apply that must be implemented by the classes that extend it.
Mapper also defines a concrete method
map. This method fully implements the mapping algorithm, and calls
apply inside its body. However, the implementation of
apply is left open.
import java.util.Iterator; import java.util.LinkedList; import java.util.List; public abstract class Mapper<A, B> { public abstract B apply(A e); public final List<B> map(final List<A> l) { List<B> retList = new LinkedList<B>(); Iterator<A> it = l.iterator(); while (it.hasNext()) { retList.add(apply(it.next())); } return retList; } }
In order to use this skeleton, the developer must extend it through a mechanism known as Inheritance. If a class A extends another class B, then we call A a subclass of B. As an example, the class below, a subclass of
Mapper, implements a function that increments the elements of a list:
public class Incrementer extends Mapper<Integer, Integer> { @Override public Integer apply(Integer e) { return e + 1; } }
The class
Incrementer maps a list of integers into a new list of integers. The code snippet below demonstrates how we can use instances of this class to increment every element of a list of integers. As we can see, the overall process of emulating templates in a language without high-order functions is rather lengthy.
List<Integer> l0 = new LinkedList<Integer>(); for (int i = 0; i < 16384; i++) { l0.add(i); } Mapper<Integer, Integer> m = new Incrementer(); List<Integer> l1 = m.map(l0); | https://en.wikibooks.org/wiki/Introduction_to_Programming_Languages/Template_Oriented_Programming | CC-MAIN-2017-26 | refinedweb | 731 | 54.83 |
Same Five Digits
April 19, 2011
I chose to use Python to solve this problem.
The brute force solution is to enumerate all the perfect squares with five digits. Then try them in all combinations taken three at a time.
from itertools import count, takewhile
squares = filter(lambda s: len(s) == 5,
takewhile(lambda s: len(s) <= 5,
(str(s**2) for s in count())))
for a, b, c in combinations(squares, 3):
# ...
Each trio of numbers has to meet three criteria.
- C1. five different digits occur in the trio.
- C2. each digit occurs a different number of times (has a different count).
- C3. the five counts are the same as the five digits.
A digit count histogram would help with all of those. Let’s define a function to build it.
from collections import defaultdict
def histogram(s):
d = defaultdict(int)
for c in s:
d[c] += 1
return d
That returns a dictionary mapping each digit to the number of times it occurs. For example,
hist('31415') would return
{1: 2, 3: 1, 4: 1, 5: 1}, meaning 1 occurs twice and 3, 4 and 5 each occur once.
Now we can test for the three criteria like this.
hist = histogram(a + b + c)
digits = set(hist.keys())
counts = set(hist.values())
if (len(digits) == 5 and # five different digits
digits == counts and # digits == counts
not any(hist[k] == k for k in hist)): # k does not occur k times.
# then a, b, c meet the criteria.
We need to find all trios that meet the criteria, then find the one whose singleton digit is unique. So we’ll build a map from singleton digit to the set of trios with that singleton.
matches = defaultdict(list)
for a, b, c in combinations(squares, 3):
# ...
if «criteria met»:
inverse_hist = dict((hist[k], k) for k in hist)
singleton = inverse_hist[1]
matches[singleton].append((a, b, c))
Now we need to find the entry in matches that has length one. We’ll do that by creating yet another map, this time from number of matches to the list of matches. Then we can extract the answer directly.
match_counts = dict((len(ls), ls) for ls in matches.values())
print(*match_counts[1][0])
On my computer, this solution runs for about 30 seconds. It is a brute force solution, and was coded with the barest minimum amount of analysis of the problem. So 30 seconds is not unreasonable. But we can do much better.
We can determine what digits the solution will use. Since the three numbers have 15 digits total, the five counts must sum to 15. Since the digits match the counts, the digits must sum to 15 as well. The only possible set of five unique digits is {1, 2, 3, 4, 5}. So let’s discard all the candidate squares that have digits outside that set. We can refine the way we collect the list of squares like this.
squares = filter(lambda s: len(s) == 5 and all(c in '12345' for c in s),
takewhile(lambda s: len(s) <= 5,
(str(s**2) for s in count())))
That small change reduces the number of candidate squares from 217 to 9, and the number of combinations from 1,679,580 to 84. The program now runs in under 20 milliseconds, which is more than 10,000 times faster.
To run either program, save the program text into a file
enigma_1638.py and say:
$ python enigma_1638.py
12321 33124 34225
You can run the program at, where a definition of the
combinations function is provided for backward compatibility to versions of Python prior to 2.6.
[…]”, “2”=>”5″, “3”=>”1″, “5”=>”4″, “4”=>”2″}
Solution: [12321, 33124, 34225] {“1″=>”3”, “2”=>”5″, “3”=>”4″, “4”=>”2″, “5”=>”1″}
Solution: [12321, 44521, 55225] {“1″=>”3”, “2”=>”5″, “3”=>”1″, “4”=>”2″, “5”=>”4″}
Solution: [12321, 52441, 55225] {“1″=>”3”, “2”=>”5″, “3”=>”1″, “5”=>”4″, “4”=>”2″}
Solution: [12544, 34225, 44521] {“1″=>”2”, “2”=>”4″, “5”=>”3″, “4”=>”5″, “3”=>”1″}
Solution: [12544, 34225, 52441] {“1″=>”2”, “2”=>”4″, “5”=>”3″, “4”=>”5″, “3”=>”1″}
Solution: [34225, 44521, 52441] {“3″=>”1”, . | https://programmingpraxis.com/2011/04/19/same-five-digits/2/ | CC-MAIN-2016-44 | refinedweb | 691 | 73.27 |
You trust. This way, you would have more control on your friends, as well as they could have more restrictions on you as a friend.
How to Define Friend Modifier
The following are few situations where you could use friend modifier:
- It could be used in a stand alone functions, methods of different class, complete class, template function or even template class.
- You could also have non-member function with friend modifier. In that case, that function will not have “this” as a pointer, and that function would have access to all the data from your class.
- If you only like to restrict one method (or few selective methods) to use data from other class, you would not need to call that class a friend class, which is left for more extreme situations when you could call whole class a friend.
- Also, template functions and classes are similar to usual functions and classes, but they don’t care about the type of data they are handling, and they could have friends too.
In a way, you could say that friend overpowers modifiers like private, or public, or protected. In another words friend modifier nullifies restrictions gained from already mentioned access restrictions.
So, how do we implement a friend modifier?
class CSomeClass { ... friend someType FriendFunction( SomeArguments); ... };
In the above code snippet, you use “friend” modifier to inform your compiler that you will trust FriendFunction. In this case, you should inform your compiler about the function name, return data type, and arguments you are using.
After that, you implement your stand alone function as a side of class implementation, but you don’t use friend modifier:
someType FriendFunction( SomeArguments);
If you would like to have just one method as a friend to your class, you would call it as mentioned below.
class CSomeClass { ... friend Return_Type CSomeOtherClass::SomeMethod(Different_Data_Types as arguments); ... };
For extreme situations, you could call whole class a friend class, that way friend class will have access to data that is usually not visible by other entities, and thereupon hidden data might be unobtainable.
To implement this, you could use the following code snippet:
class CSomeClass; ... class CFriendClass { ... void SomeMethod( CSomeClass object); ... };
Next, you create a class that will have CFriendClass as a friend.
class CSomeClass { ... friend class CFriendCalss; ... };
Finally, you would go into implementation of your method:
void CFriendClass::SomeMethod( CSomeClass object) {...}
It might be a good idea to create few simple examples that will clear up some syntax issues you might have.
If you decide to practice, I would recommend that you to create class CDot with two values: x and y, and after you create a non-member function double distance( CDot a, CDot b);. This would calculate distance from first to second dot.
For a friend class, I would recommend you to use same class CDot and its friend class CLineSegment to create one line segment from two CDot objects.
Now, we will consider few properties that friend classes have.
First one is very easy to understand. If class A is friend of class B, it does not mean that class B will be friend of class A, without some extra coding. If you really need A to be friend of B as well, you would need to state that also.
Next interesting property is sometimes called transitivity. For example, let’s take a situation where you are facing three classes: Class A, B and C.
If you have B as a friend of A, and you have C as the friend of B, it might be reasonable to expect from C to be friend of A. This time, friend of your friend is not your friend. As you might conclude, you would need to state that C is a friend of A as well.
Friend Modifier Example Code – Problem Definition
In order to explain friend modifier we will create an example. This will illustrate how you could overload operators, and we will also explain how to use ostream and istream as objects that will present and import data from user to our class.
For our exercise, our task is to create class CComplexNumber.
- Just to refresh your math memory, the following are some properties of complex number:
- This problem will help you solve something like this: ax*x + b*x + c =0.
- Complex number has two parts: real and imaginary. That imaginary part is multiple of square root of -1.
- For this, it is usually denoted like this: z = x + i*y.
- Apart from this, you also have polar form of complex number and exponential form as well.
Friend Modifier Example Code – Solution
The following is the example C++ code that uses friend modifier to solve our problem.
#include <iostream> using namespace std; class CKomplexNumber { private: double dX,dY; public: CKomplexNumber(const double x, const double y) {dX=x;dY=y;} CKomplexNumber(){;//This is not a smiley} CKomplexNumber operator+(const CKomplexNumber& z) { CKomplexNumber temp=*this; temp.dX += z.dX; temp.dY += z.dY; return temp; } friend ostream& operator<<(ostream& out, const CKomplexNumber z); friend istream& operator>>(istream& in, CKomplexNumber& z); }; ostream& operator<<(ostream& out, const CKomplexNumber z) { cout<<"Complex number is"<<endl; out<<z.dX<<" + "<<z.dY<<" i"<<endl; return out; } istream& operator>>(istream& in, CKomplexNumber& z) { cout<<"Imput real and imaginary part"<<endl; in>>z.dX>>z.dY; return in; } int main(void) { CKomplexNumber Z1; cout<<"First complex number is="<<endl; cin>>Z1; cout<<Z1; CKomplexNumber Z2; cout<<"Second complex number is="<<endl; cin>>Z2; cout<<Z2; CKomplexNumber Z3; cout<<"Third complex number is="<<endl; cin>>Z3; cout<<Z3; CKomplexNumber Zr(0,0); Zr = Z1 + Z2 + Z3; cout<<Zr; return EXIT_SUCCESS; }
Friend Modifier Example Code – Explanation
In the above sample code:
- In CComplexNumber class we have data that is used to describe values of complex number. This is dX and dY and they are of double data type.
- We have constructors as well, you might even add few additional constructor and destructor of your own.
- In order to enable most logical syntax you would use operator +. Just to be clear, you don’t need to type something like this: Zr.AddComplexNumbers(Z1,Z2);
- Instead, it will be better if you do something simple like this: Zr = Z1 + Z2;
- We have two overloaded operators: “>>” and “<<". You could say that we will not need our set and get methods, but they have their place as well. Or, you could say that you use methods get and set very seldom.
- Now we will analyse code in the main function. First, we instantiate one object called Z1 then we input its values, that are real and imaginary part.
- After that Z1 is presented to user. Next few steps are pretty similar therefore we would not need to go into the details all over again.
Finally, we add those three complex number and store result into Zr, and we present our results to user.
Suggested Improvements to the Code
The following are few things you can do to improve the above code to learn more about how to use friend modifier:
- Broaden the solution with support to polar and exponential form of complex numbers.
- Next thing you could do is to have inherited classes, and also you could have three types of complex numbers and then you try to have three classes as parents. You could put your friend functions to transform those complex numbers from one form to another. If you are new to inheritance, this might help: How to Use C++ Inheritance and Abstract Class with Code Examples
- We have overloaded only three operators: operator+, operator>> and operator<<. You could add few more overloaded operators too.
- Now, you might start to think about: overflow, underflow and bad inputs, as some bad things that could happen with your code, and if you wish to use your class in real life situations, that would probably be ultimate goal for most of us, you should find ways to make your code more robust.
- On a related note, you might find this helpful to make your code robust: 10 Tips for C and C++ Performance Improvement Code Optimization
- Create an user-friendly complex number calculator by using the above code snippet as a base.
Relationship to Encapsulation and Inheritance
After you have understood how friend modifier works and you start to create practical rules, you might ask you self how is it related to encapsulation?
Encapsulation is one of the major principles of OOP. Some might think that friend modifier is ruining concept of OOP. But it does not, it will allow exception that is needed and that way it would preserve encapsulation, with minimum divergence, due to technical issues.
Sometimes it is good to think of it, as interface to a class. That is reason why you could say that classes have some relationship in that case.
Placing your data under public modifier would be example that works against encapsulation.
Another question you might ask is: Do I inherit friends from parent class?
We have explained inheritance. In most situations, you have need for public inheritance, which means that you are broadening base class with new features and this excludes the private members.
The answer to this question is no. We do not inherit friends from our parent class.
Final Thoughts on Friend Method, Operator, Class
- Friend modifier is useful, and it has a place in Object Oriented Programming. We would also need to say that friend functions would be very useful in situations when you are trying to avoid placing your data to public.
- One example is application of operators: “>>” and “<<“. It could be applied with some other operators, but you should avoid it if possible.
Sometimes this will reduce the complexity in the amount of code you have to write to solve certain problems.
- It could be used when you have some relationships between two objects of same kind, or even two or more objects of different type. For example, you would need to compare them, or create new object from those few objects you have.
- One of situations when you could deploy this is when you need to transform object of one type to another type.
- In my opinion, it might be a good idea to create friend part of class where you would state friends of a class, that way code would be more organized and systematic. It might be a good idea to have same thing like that with virtual methods as well.
Additional Exercise to Practice Friend Modifier
The following are few additional exercise for you to use Friend modifier and solve these specific problems.
- Create solution for 2 dimensional vector, 3 dimensional vector, n dimensional vector using friend modifier. If you are new to vector, this might help: STL Tutorial: How to use C++ Vector
- Create class CDot, with int coordinates and two data, one for each of the projections. Don’t forget to use friend functions. Create non member function, which will calculate distance among two dots.
- To measure temperature you have: Kelvin, Celsius, Fahrenheit. Convert the temperature between these three types. This means that you could create abstract class CTemprerature, and use it as a base class for: CKelvin, CCelsius and CFarenhite. In order to convert those objects, you could use stand alone functions as friends.
- Create class CCalendarDate. That could be done if you have three classes: CDay, CMonth, CYear. After, you have created class CCalendarDate, you could create non member function that will calculate how many days is difference among two calendar dates.
- For time measurement, your task is to create class CTime. You need to consider both 12 and 24 hour format.
Create template class CMatrix with adequate friends.
- If you like math and studied it, or if you just like games of luck, this might be your favorite. You are required to model two classes: CCup and CBall. In one cup you would place small balls that are colored. Colors could be different. You could have more cups with small balls and you should calculate the probability to pick one of the small balls from each of the cups you have. You should have ability to create solution that will allow you to pick small ball from one cup and place it into other cups.
{ 5 comments… add one }
It looks like <> are changed with < and >.
Okay, could you fix that or we need to wait for another>
This article is kinda needed in this line of articles in order to bring bigger picture to C++ line of articles.
Any way, it should been presented before C++11 article, this way it would be good to add some features from C++11 and soon it would be time for C++17.
I guess that is reason why bloogs exist any way, and there is some place for ther people too.That way we don’t have one man show all time.
Have fun, but…
If for real this is very interesting thing
worth noticing! | http://www.thegeekstuff.com/2016/06/friend-class-cpp/ | CC-MAIN-2017-04 | refinedweb | 2,156 | 61.36 |
HttpRequest Class
.NET Framework 1.1
Enables ASP.NET to read the HTTP values sent by a client during a Web request.
For a list of all members of this type, see HttpRequest Members.
System.Object
System.Web.HttpRequest
[Visual Basic] NotInheritable Public Class HttpRequest [C#] public sealed class HttpRequest [C++] public __gc __sealed class HttpRequest [JScript] public class HttpRequest
Thread Safety
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
Remarks
The methods and properties of the HttpRequest class are exposed through the Request property of the HttpApplication, HttpContext, Page, and UserControl classes.
Requirements
Namespace: System.Web
Platforms: Windows 2000, Windows XP Professional, Windows Server 2003 family
Assembly: System.Web (in System.Web.dll)
See Also
HttpRequest Members | System.Web Namespace
Show: | https://msdn.microsoft.com/en-us/library/system.web.httprequest(d=printer,v=vs.71).aspx | CC-MAIN-2015-22 | refinedweb | 138 | 60.61 |
Nov 07, 2014 01:22 PM|NoBullMan|LINK
Hi,
I was wondering if someone can point me in the right direction for creating a web crawler, using C#, to be used in an intranet environment.
I have a set of IP addresses that I will use to create a URL to go to a specific page. On that page there are only three lines of text that display some stats and I need to be able to grab them (preferably a multi-threaded application since there are about 4000 IP addresses to check). If you can show how to get started or know of any documentation/samples on how to achieve this, it'd be greatly appreciated.
Star
14297 Points
Nov 07, 2014 01:35 PM|gerrylowry|LINK
see my answer regarding chat here: which should help you find examples that meet your need at "Developer code samples" and other sites like CodeProject, et cetera.
Nov 10, 2014 02:18 AM|Summer - MSFT|LINK
Hi web crawler,
Welcome to the ASP.NET forum.
If you can show how to get started or know of any documentation/samples on how to achieve this,
About this issue, please refer to the link below that tell you how to build a basic Web Crawler to Pull Information from a Website.
Or in this article about How to Write a Web Crawler in C#.
More another information about the Web Crawler, please refer to the links below and hope it could helpful for you.
Best Regards,
Summer
Nov 11, 2014 08:18 PM|NoBullMan|LINK
Thank you Summer. the first link is in PHP; the second one was of some use but I already figured request/response sequence.
What i am struggling with right now is to make this multi-threaded since I have to deal with 4000 URLs and single thread would take some time. Assuming I have the list of URLs in a list or array of strings; do you know how I can set up the threads?
Assuming I have a function that processes the response, say, "ProcessResponse(string s)", and want to start with 10 threads, can I start with something like:
Thread[] tr = new Thread[10]; for (int i = 0; i < 10; i++) { tr[i] = new Thread(new ThreadStart(ProcessResponse)); //tr[i].Name = String.Format("Working Thread: {0}", i); } //Start each thread foreach (Thread x in tr) { x.Start(); }
I have not used multi threading but looked around and got some ideas just not sure how best to set up my scenarion.
Thanks.
Nov 12, 2014 04:33 AM|Summer - MSFT|LINK
I think you could achieve it by using Multi Threading.
There is an article about the Multi Threading, you could refer to it and learn something from it
Best Regards,
Summer
Nov 12, 2014 11:18 AM|NoBullMan|LINK
This is what I tried so far. For testing I use three IP addresses and three threads. It seems it runs all the threads but displays/processes data from first thread only:
public class PASSServer { private string _ip; public string IPAddress { get; set; } public PASSServer() { } } static void Main(string[] args) { int iNumThreads = 3; Thread[] threads = new Thread[iNumThreads]; string[] sIPs = { "192.168.10.20", "192.168.10.21", "192.168.10.22" }; for (int i = 0; i < threads.Length; i++) { ParameterizedThreadStart start = new ParameterizedThreadStart(Start); threads[i] = new Thread(start); PASSServer pserver = new PASSServer(); pserver.IPAddress = sIPs[i]; threads[i].Start(pserver); } Console.WriteLine("DONE"); Console.ReadKey(); } static void Start(object info) { PASSServer pserver = (PASSServer)info; crawl(pserver.IPAddress); } private static void crawl(string sUrl) { PASSData cData = new PASSData(); string sRequestUrl = "http://" + sUrl.Trim() + "/cgi-bin/sysstat?"; string sEncodingType = "utf-8"; HttpWebRequest request = (HttpWebRequest)WebRequest.Create(sRequestUrl); request.KeepAlive = true; request.Timeout = 15 * 1000; System.Net.HttpWebResponse response = (HttpWebResponse)request.GetResponse(); string sStatus = ((HttpWebResponse)response).StatusDescription; sEncodingType = GetEncodingType(response); System.IO.StreamReader reader = new System.IO.StreamReader(response.GetResponseStream(), Encoding.GetEncoding(sEncodingType)); // Read the content. string responseFromServer = reader.ReadToEnd(); Console.WriteLine(sRequestUrl + "\r\n" + responseFromServer); }
Nov 12, 2014 08:44 PM|Summer - MSFT|LINK
About the Thread problems, you could access the forum below and it will give you more professional solutions.
Best Regards,
Summer
7 replies
Last post Nov 12, 2014 08:44 PM by Summer - MSFT | https://forums.asp.net/t/2017559.aspx?web+crawler+question | CC-MAIN-2017-34 | refinedweb | 710 | 54.12 |
Here is my code:
import java.util.*; public class ReverseNames { public static void main(String[] args) { int count = 0; int index = 0; Scanner scan = new Scanner (System.in); String prompt = ("Enter next name: "); System.out.print("This program will ask you to enter some names. How many do you" + " have?"); int amount = scan.nextInt(); String[] name = new String[amount]; String[] reverse = new String[amount]; System.out.print("You entered " +amount+ " as the size of your name list."); System.out.println(" "); for (index = 0; index < amount; index++) { System.out.print("Enter next name: "); name[index] = scan.next(); } System.out.println(" "); System.out.println("The names in reverse and original order: "); System.out.println(" "); for (index = name.length - 1; index >= 0; index--) { } System.out.println(" "); for (String names1 : name) { System.out.println(names1+" "); } } } /** This program will ask you to enter some names. How many do you have? 3 You entered 3 as the size of your name list. Enter next name: Taylor Enter next name: Zack Enter next name: Sally The names in reverse and original order: Sally Zack Taylor Taylor Zack Sally */
What do I need to do to have it like this:
Zack Taylor
Sally Sally
Taylor Zack | http://www.dreamincode.net/forums/topic/70000-java-program-printing-names/ | CC-MAIN-2016-30 | refinedweb | 198 | 62.14 |
In the past few months, we have covered Class Designer features in the upcoming Visual Studio 2005 release. After reading these articles, you have probably noticed that the serialization format for the diagram file (.cd file extension) is in plain XML format. In this article, I will describe the Class Designer file format in detail.
As you know, the diagram file uses the .cd file extension. If you use the Notepad to open this file, you will notice it is in simple plain XML format. As opposed to a proprietary binary format, we decide to serialize the diagram in the XML format. The reason to use the XML format is for its brevity and simplicity. You can easily understand the element/attribute describes the shape rendering behavior in the diagram. Another benefit is for you to easily diff the delta between iterations. Imagine you are working in a collaborative environment and the .cd file is also checked into the source code control system. You can easily diff the diagram file versions and understand the changes between them. If we’re to store the diagram in the binary proprietary format, you will not be able to make sense out of the deltas.
Consider your have the following code snipped in your class library project.
namespace CDFileFormat
{
public delegate void OrderShipped( Order order);
public class Customer
{
public string FirstName;
public string LastName;
public Address Address;
public void OrderGoods(int productID) { }
public IList<Order> currentOrders;
public event OrderShipped OrderShipped;
}
public class Address
public string Street;
public int ZipCode;
public class Order
}
And you create the following diagram
If you open the actual .cd file from the solution explorer using Notepad, you will see the following XML content.
<?xml version="1.0" encoding="utf-8"?>
<ClassDiagram MajorVersion="1" MinorVersion="1" MembersFormat="FullSignature">
<Font Name="Tahoma" Size="8.51" />
<Class Name="CDFileFormat.Customer">
<Position X="0.5" Y="1.5" Width="2.5" />
<TypeIdentifier>
<FileName>Class1.cs</FileName>
<HashCode>AAAAAGAAAAAAAAAAAAAEAAAAAAAAAAASAAAAAAAAAAg=</HashCode>
</TypeIdentifier>
<ShowAsAssociation>
<Field Name="Address" />
</ShowAsAssociation>
<ShowAsCollectionAssociation>
<Field Name="currentOrders" />
</ShowAsCollectionAssociation>
<Compartments>
<Compartment Name="Events" Collapsed="true" />
</Compartments>
</Class>
<Class Name="CDFileFormat.Order" Collapsed="true" HideInheritanceLine="true">
<Position X="5" Y="2.5" Width="1.5" />
<HashCode>AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=</HashCode>
<Class Name="CDFileFormat.Address" Collapsed="true" HideInheritanceLine="true">
<Position X="5.25" Y="1" Width="1.5" />
<HashCode>AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAIAAAAAAAA=</HashCode>
<Class Name="System.Object" Collapsed="true">
<Position X="1" Y="0.5" Width="1.5" />
<TypeIdentifier />
</ClassDiagram>
Here’re the element descriptions:
ClassDiagram: This is the top level element which describes the setting applicable to the whole diagram. Attribute MembersFormat indicates whether we’re displaying class member in name, name_and_type, or full_signature format. In our case, we have value FullSignature which indicates showing member in its full signature form.
Font: Indicates the font used throughout the whole diagram. Attribute Name and size describe the font and size used in the diagram.
Class: For each class (box) shape in the diagram, it has a corresponding element Class in the file. It describes the rendering information for your class. The Name attribute described the fully qualified name for this class. During the file open phase, we’re using this full name to find the type in your project. Attribute Collapsed indicates whether the member compartments are hidden or not. In the above example, member compartments in Address, Order and System.Object classes are hidden from the user. Attribue HideInheritanceLine indicates whether the inheritance line is hidden or not. In our example above, both Address’ and Order’s parent class (System.Object) are present in the diagram. However, notice that the inheritance line is not being drawn because I chose to hide it for both the Address and Order classes.
Position: The shape position in the diagram. The (X, Y) coordinates are relative to the upper left corner of the diagram. The unit is in inches.
TypeIdentifier: This is used to identify the class in your actual project. E.g. Let’s say you have Address class shown in the diagram and you close the diagram. The next day, your co-worker changes the class name to LocalAddress in the project. When you open the diagram again at later time, we cannot find class Address in the project anymore. In this case, we will be using the FileName and HashCode with the heuristic algorithm to determine shape Address is now class LocalAddress is the project. If we cannot determine where the class is, the Address shape will be shown in red color. This is what we referred to as Orphan shape in the diagram. For details on orphan shape, please refer to the previous blog article.
ShowAsAssociation and ShowAsCollectionAssociation: These two elements represent which field/property is being shown as association/collection_association to other class in the diagram. If you look at the example above, you will notice that field Address and currentOrders are rendered as association lines in the diagram.
Members: This element holds a collection of hidden members in the shape. In our example above, field Customer.FirstName is hidden in the diagram.
Compartments: This element contains a collection of member compartments which are hidden in the diagram. In our example above, we see the Events compartment is hidden in class Customer.
As you can see, the XML file format we use is pretty straightforward. By comparing the diagram with the XML text, you can see how the diagramming information is maintained in the XML file. If you have a complex diagram, you might want to open the .cd file using Notepad. You will find interesting information in it.
Have you installed the latest CTP and tried out the Class Designer? As we are wrapping up the current release, this is also a good time for you to provide feedback to us. Do you have any suggestions on what topic we should be covering in the coming months? Any missing features you would like us to address? We would love to hear your feedback...
Regards,
Patrick Tseng
Visual Studio Class Designer Team | http://blogs.msdn.com/classdesigner/archive/2005/07/29/444501.aspx | crawl-002 | refinedweb | 996 | 50.33 |
Intel To Make A Greener Microprocessor 229 229... (Score:5, Funny)
Haha, just kidding, I own an AMD.
In other news (Score:2)
Reduced lead? (Score:5, Insightful)
Next step: reduce power consumption.
Re:Reduced lead? (Score:5, Insightful)
This is definitely a necessity as the major ecological impact of modern consumer and IT products occurs during the utilization phase and not during the production or disposal phase.
Re:Reduced lead? (Score:2, Interesting)
Semicondoctor fabs have a truly collosal ecological footprint, good thing what they make is worth more than gold. They consume tremendous amounts of water, and energy, to say nothing of the photoresists, acid baths and slag from the parts of the ores that aren't used. There are no doubt a log of computers, but you want to make an impact invent a lightbulb that costs the same or less, can be adapted to all the same fixtures, lasts longer, uses half as mu
obfuscation nazi (Score:2)
Banias and Dothan (Score:5, Insightful) (Score:5, Insightful):Banias and Dothan (Score:2, Insightful)
"Right now, speeds are fast enough..."
"640Kb ought to be enough for everyone..."
Re:Banias and Dothan (Score:2)
"Right now, speeds are fast enough..."
Note the use of the qualifying term. He's not indicating that nobody will ever need a faster processor, but that for most everyday uses computers are fast enough, and he has a point. Sure, there are some folks out there for whom instantaneous won't be fast enough, but as it is until the next must-have push the envelope app is unleashed on the masses, current computer speeds are good enough f
Re:Reduced lead? (Score:2)
Re:Reduced lead? (Score:2, Insightful)
Indeed, just read the "unintended consequences" article:
A typical computer processor and monitor contain five to eight pounds of lead, and other heavy metals such as cadmium, mercury and arsenic.
Five to eight pounds; that's quite a lot of CPU's! And they aren't even made entirely out of lead.
Re:Reduced lead? (Score:2, Funny)
Re:Reduced lead? (Score:3)
from m [intel.com]
Good job (Score:2, Funny)
question (Score:5, Interesting)
Re:question (Score:5, Informative) (Score:5, Informative)
Actually, intel is moving away from measuring chip speed by GHZ. Wired just had this article [wired.com] about it.
Basically, Intel is a couple years behind AMD who is now using numbers like 2300+ to describe chip speed.
Re:question (Score:5, Interesting)
Re:question (Score:4, Informative) (Score:3, Interesting)
Apple got it right by using Benchmarks to sell their product, even if the benchmarks are strange and deceptive. Hey, lying, cheating, and stealing are what got Microsoft to the top, everyone's gotta play a little dirty.
And yes, buying a PC should be an
Re:question (Score:3, Informative)
Some smart advertiser found if they take all the channels of a 2 or 4 channel amplifier, ignore low distortion (square wave clipped output is ok) list the power del
Re:question (Score:2)
Re:question (Score:4, Insightful)
And therein lies the major problem with GHz-based speed comparisons. As long as you're dealing with the same core (which is not the same as processor name i.e. "Pentium 4",) the speed will scale rather linearily with core speed (ignoring bus speeds etc.)
But you simply can't compare an N-GHz processor with core X to an N-GHz processor with core Y. The problem is, there really is no objective measurement system, as of yet, anyway.
Re:question (Score:2)
Re:question (Score:5, Insightful) (Score:3, Insightful)
Anyways, if Intel can get away from clock-speed ratings, I hope it can get away from 100 watt processors. Where are the quiet and efficient Pentium M desktop systems? Some companies [radisys.com] are designing motherboards for them, but ther
Re:question (Score:3, Interesting)
Look at a typical HP or Dell (or even e-Machines) people buy these days. My cousin's HP Pavilion has a DVD+/-R, CD-RW, 80 GB disk, fast P4 etc -- yet is a very quiet and small machine. There's a shroud over the CPU leading to a case fan (there is also a separate CPU and PSU fan; some Gateways from a couple
Re:question (Score:2)
It is similar to the muscle car days of the 1960's and 70's - everyone was wanting more power, more speed. They got what they wanted, but there was a sacrifice of handling, fuel consumption, etc. Then we saw a shift in the 80's and 90's to the econoboxes. Now for many consumers, the look at
Re:question (Score:2)
Re:question (Score:2) seconda
Re:question (Score:4, Interesting)
Re:question (Score:3, Informative)
So... (Score:4, Funny)
Re:So... (True Story) (Score:2)
There not doing it out of the kindness of... (Score:3, Insightful)
Re:There not doing it out of the kindness of... (Score:2) cos
Re:There not doing it out of the kindness of... (Score:2)
Green friendly? (Score:3, Insightful)
Re:Green friendly? (Score:5, Insightful)? (Score:2, Funny)
I am confused: I thought StrongARM was an Intel processor [intel.com]
Re:Green friendly? (Score:4, Insightful)? (Score:2))
Re:Green friendly? (Score:5, Interesting)? (Score:2)
Re:Green friendly? (Score:2)
In addition, I am surprised at the lack of implementation of more speed-step like features. I leave my PC on all the time. even when im using it, im usually surfing the we
Green friendly? Yeah, right... (Score:3, Insightful)
AMD has the faster high-end processors, too. I just ordered a high-end workstation for modeling and simulation at work. I chose a 64-bit AMD CPU both for the speed it gives now as well as for the future grow
RTFA (Score:2)
Can't dispose of computer parts? (Score:2, Funny)
What are we supposed to do with our old computers, a beowulf cluster?
Reducing waste (Score:4, Insightful)
Re:Reducing waste (Score:2)
I see three problems with this. The more obvious is that the market doesn't want this; otherwise people would buy higher-quality products (at an appropriate and higher price). But many people (possibly most) buy cheaper equipment,
Re:Reducing waste (Score:2)
Markets, markets.... (Score:2)
Re:Reducing waste (Score:2)
First 6 months they have to proove the failure wasn't in the device at the transaction time, next 18 months you can proove it.
Anybody know how this is done? (Score:4, Insightful)? (Score:2, Funny)
Maybe some alloy with cadmium could replace it
Re:Anybody know how this is done? (Score:3, Informative)
All major solder manufacturers allready have lead free products in place, check out thier websites for exact formulations.
BTW, a lot of chip manufacturers have allready done thier lead free packaging. Intels move is late in the day, which is ironic because they are making hi end high cost chips were gold is often used for bonding and plating rather than the solder used to tin the pins of lowwer c
Re:Anybody know how this is done? (Score:2)
But the article mixes two separate issues, thus the answer is a bit longer:
If you look at a BGA package on an PCB, then there are two interconnects: first the silicon die is connected to an intermediate substrate, the interposer. The result is the BGA.
Then the BGA is connected to the board.
For the second level interconnect (interposer to board) eutectic or near-eutectic lead-tin solder is used right now- around 37% lead, melts at 187 deg C.
SnAgCu (~95% tin, ~3.5% silver, ~0.5% copper) is
hype (Score:5, Insightful)
Re:hype (Score:2)
I wouldn't say it is even
CPU's in desktops often get pulled and used in other systems. Pulling a CPU out of socket requires no burning or chemical reaction, hence nothing is released in the enviroment.
As most if not all Desktop
Re:hype (Score:2)
I'm not so sure that this is true these days. I have no sources here, but I believe the majority of solder used in consumer electronics (including PCs) is of the lead-free variety (mostly silver and nickel, I believe).
I do know that some cheaper consumer electronic devices have warnings in the manual about proper disposal because "this product contains lead...", but most things
Re:hype (Score:2)
This might take the place [fujitsu.com] of lead solder, rather than silver, as it can use similar temperatures as lead solder.
Lead free soldering represents a minority in manufacturing, with companies now only starting to switch over with pressure from Japan and eventually the EU.
Re:hype (Score:2)
I'm not sure why I was under the impression that companies had started doing this a while ago, but I guess it's good that something is being done now anyway. I don't really know what kind of dangers lead poses, though even if minor, and if it's not *that* difficult to start using something else, it probably should be phased out...
Alternative heating. (Score:2, Funny)
How much lead is present in a microprocessor? (Score:5, Insightful)? (Score:4, Informative)
A flip-chip package currently contains 0.4 grams of lead. A tiny amount compared to that in the solder in a motherboard, let alone a monitor.
Re:How much lead is present in a microprocessor? (Score:3, Interesting)
where's the 8 lbs of lead?? (Score:5, Insightful)?? (Score:5, Informative)
Seriously, look at the bigger monitor tubes (especially in the EU); they have a radio-dosage sticker certifying the level of beta radiation emitted, usually at the preset acceleration voltage.
Jon.
Re:where's the 8 lbs of lead?? (Score:5, Informative)
The amount of lead in a base unit is limited to solder and tiny amounts within the ICs.
What a Load of Twaddle (Score:5, Insightful).
Watch out for unwanted side-effects (Score:3, Insightful).
Lead is the least of our worries (Score:5, Interesting).
Re:Lead is the least of our worries (Score:2)
The secret life of your computer [calpoly.edu] illustrates what went in to make a computer.
Don't forget, this can be said for a lot of other things as well, like consumer electronics.
Trash and waste abounds at both ends of the equation.
It's just PR (Score:5, Interesting) (Score:2, Informative)
The main problem relates to the higher temperatures needed to melt lead free solder. These higher temperatures can stress components and are particuarly worrying in products that have to last 20 years.
Re:It's just PR (Score:2)
But this isn't anything unique to Intel, and it isn't done out of the goodness of their green little hearts.
I agree with you for the most part. However, lead-free solder isn't much more difficult to work with (at least as an elecronics hobbiest). I think the concern is more the cost of the solder, given that (I believe) it usually contains a lot of silver. Maybe it's harder to manufacture (or manufacture with), or perhaps there are other mechani
Pb Free - Not just Intel (Score:4, Informative) (Score:2, Funny)!
No deposit No return (Score:2)
If manufacturers actually took into account the cost of disposa
Re:No deposit No return (Score:2)
FWIW, I'd suggest you consider keeping your old gear. You may surprise yourself and discover a need you didn't think you had. Even in a home environment, extra gear could easily be used for a test sytem (new program installations, alternative distributions, major upgrades, etc.), or alternatively be put to use as a file server, backup storage, multi-boot replacement, a
Re:No deposit No return (Score:2)
I have a file server, I have a nat firewall, I have web server, I have my pc, that other pc, and some other PC ov
Eutectic alloys vs pure tin (Score:5, Interesting).
If it gets any greener... (Score:2, Funny)
Maybe someone will finally answer my question... (Score:2, Funny)
--------
WAP hosting [chiralsoftware.net]
Re:Maybe someone will finally answer my question.. (Score:3, Funny)
Problems with gold (Score:2, Interesting)
The problem with the pure gold was it was contaminated with about 0.9% of mix of platinum and iridium so it was much harder then normal soft pure gold. It
'burn' (Score:2) (Score:5, Interesting)
-David
Is lead worse than other heavy metals? (Score:2, Interesting)
The other popular alternative to silicon is Gallium Arsenide. Gosh, arsenic, another heavy metal with a place in the history of poisonings.
Lead, mercury, and arsenic are famous just because they're common on the earth and have been known since ancient times. All heavy metals accumulate int he body and cause problems, and I'm not sure that exoti
Sounds good (Score:2)
Intel following AMD again? (Score:5, Informative)
This is a good thing. (Score:2)
But now it won't get into OUR drinking water, and the lead in the water of the enemy means their babies will talk and walk slower, making them easier military targets when they grow up. This could be a nice long term strategy in our war on terrorism, and helps keep our streams and lakes lead free, too.
I fail to see the down side for us.
Soft error rate - alpha radiation (Score:2)
It already IS an industry trend, Intel following (Score:3, Insightful)
It's about time a company started this - good job - and let's hope other tech companies take the hint.
Hello, wake up call. This is a major industry trend. Intel is following along. They're definately not the ones starting this, in hopes the rest of the industry will catch on. It is a European Union Directive that deserves the "good job" credit... and it is Intel and every other major manufacturer in the electronics industry that is "taking the hint".
Most new electronic components are being made with little or no lead. Major companies and contract manufacturers (who solder boards for most smaller companies) are switching to lead-free soldering processes.
Already this forum is filled with +5 comments about power consumption and how the solder contains much more lead than the chips. Well, here's the news... the whole industry is migrating to lead-free solder.
Much of the conversion is driven by an EU directive that all electronic products sold in Europe be lead-free by 2008.
Here's an EE Times Article [eetimes.com] about the trend, and a possibility that the deadline may be moved up to 2006.
I am an electrical engineer, and even at the US-based company where I used to work, they're having to go through the painful process of switching the wave solder and reflow ovens (surface mount soldering) to lead-free fluxes and solder alloys.
So give credit where credit is due. It's the European Union, not Intel, that deserves "good job". The whole industry is taking the hint, as selling or being able to sell in the EU is important to almost everybody.
Re:Greener Chips? (Score:2, Interesting)
Re:Greener Chips? (Score:2)
Re:Greener Chips? (Score:2, Informative)
Re:Greener Chips? (Score:5, Insightful)
intel is meeting its upcoming legal requirements. the real win here (for intel), is turning something they are legally obligated to do into an "environmentally friendly" pr victory. the news media seems to be eating it up.
Re:Greener Chips? (Score:2) | http://hardware.slashdot.org/story/04/04/08/0619220/intel-to-make-a-greener-microprocessor?sdsrc=nextbtmnext | CC-MAIN-2015-32 | refinedweb | 2,609 | 63.7 |
What is auto_ptr?
EXAMPLE: Demonstrate the auto_ptr releasing dynamically allocated object.
- auto_ptr is a smart pointer.
- Owns a dynamically allocated object and performs cleanup when not needed.
- Prevents memory leaks
- The object destructor is called during destruction.
- "release()" method could be used to take manual ownership of the object.
#include <iostream> using namespace std; class MyClass { int data1; public: MyClass() { data1 = 100; } void print() { cout << data1 << endl; } }; void func() { auto_ptr<MyClass> ptr(new MyClass()); ptr->print(); // Delete not done // When ptr goes out of scope MyClass object is automatically destroyed // No memory leak is introduced } void main() { func(); } OUTPUT: 100
Why is this not a norm recommended by C++ .. even better, why does the internal C++ compiler not do this by default ?
Very good question. Does anyone know the answer?
main should return int. :-)
auto_ptr doesnt work for arrays.
Also we cannot use auto_ptr twice in the same scope. it produces different results.
So, we have shared_ptr to overcome disadvantages of auto_ptr | http://www.sourcetricks.com/2008/05/c-autoptr.html | CC-MAIN-2017-04 | refinedweb | 162 | 60.72 |
87422/specify-resource-separate-argument-passing-arguments-resource
Hi,
You don't need to pass any argument in your command. Just run the below command.
$ kubectl get svc
It will show all the services available on that namespace.
I see you have a section in ...READ MORE
You're getting this error because you do ...READ MORE
The application must return a 200 status ...READ MORE
You get this error when your pod ..,
In your secret you can't give data ...READ MORE
Hi@akhtar,
In above code, you have used secret ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/87422/specify-resource-separate-argument-passing-arguments-resource?show=87434 | CC-MAIN-2021-21 | refinedweb | 101 | 79.26 |
PIL OWL Ontology Meeting 2012-05-07
From Provenance WG Wiki
Contents
Meeting Information
prov-wg - Modeling Task Force - OWL group telecon
- previous meeting
- date: 2012-05-07
- time: 12pm ET, 5pm GMT
- via Zakim Bridge +1.617.761.6200, conference 695 ("OWL")
- wiki page:
- titan page:
- next meeting
Attendees
- Tim
- Daniel
- Stephan
- Satya
- Paul
- Khalid (regrets)
- Jun (regrets)
- Stian (regrets)
Agenda
For the issues that you are assigned:
- describe the original concern
- describe any perspectives already expressed
- recommend next step, or propose a solution
ISSUES
Khalid
- annotate subproperties
- The sub-properties were annotated to justify the fact that they are sub-properties. (I sent an email to Tim and Paolo to ask if the related issues 267 can be closed, I haven't received an answer for that).
- TODO: did he commit them
- turtle examples in cross ref
- Khalid made sure that people added all the TTL examples required in the cross references section.
- Tim: are all comments cleared? Still has yellow and red in spreadsheet.
- TODO: Tim to review Satya's
- Tim: we can show Involvement -- how wasConductedBy is understood generically in PROV.
- Daniel: possible attributes of OriginalSource
- Tim: I've noticed this same "finding attributes" problem with most of the Involvements.
- Paul: confidence values on the relation.
Daniel
- deref namespace
- dereferincing namespace will get to a page.
- Tim: conneg? curl -H "Accept: application/rdf+xml" -L vs. curl -H "Accept: text/turtle" -L
-
-
- remove qualified from 3.2 expanded
Jun
- finding attributes CLOSED
- definitions CLOSED
Stephan
- two level ontology
- sent email to group about decision, no response as of yet
- possibly ready to be CLOSED
- TODO send email asking if ready to close (SENT)
- prov:value collision
- Stephan: if we break collections to a separate document, then do we use a different namespace?
- Paul: group all collection info (dm, prov-o) into one document. We've already resolved to use one namespace.
- Paul: regardless of what document, it's all part of the recommendation.
- provc:value is clearly not prov:value
- Tim: use prov-collections as an example of extending prov
- Stephan: alternate option is to rename object proprty prov:value to prov:keyPairValue or similar
- Stephan: PROPOSAL: rename datatype property prov:value to prov:content?
- Paul: too many namespaces causes confusion for developers
- prov:agent vs hadPlan naming CLOSED
Stian
- annotate prov:inverse local names
- approving agent.
- qual pattern definition
Satya
- For ProvRDF issues, move RAISED issues to either POSTPONED, OPEN, or CLOSED
-
- Tim: be ruthless on POSTPONE
- For Involvement - use non PROV properties (not use specific sub-type of involvement)
Tim
- coverage
- automation still down.
- latest round of feedback
- RAISED 44 -> 36
- PENDING-REVIEW 16 -> 19
- CLOSED 2 -> 3
- OPEN 48 -> 50
- property naming
- ongoing, describing and documenting is the focus now.
- w3c style
- Will close unless somebody objects to the current use of aquarius. Is it holding anybody back?
- union domains in html cross ref
- Still not done. Will do this week :-)
- timestamped owl
- Still waiting on Jun to clarify what the problem is and what she wants
- option 1: Tim is going to hg tag the owl file.
- option 2: Tim is adding to the automation so that the prov-o.html will point to the OWL version that it used.
- option 3: <> owl:versionIRI <> ?
- Satya: adding versionIRI into the file itself instead of naming the OWL File directly (owl:version "2012-04-12") owl:versionInfo <ontology IRI>
- Paul: IRI is better. Use case is to download it, and use locally. using namespace supposed to use.
- Tim: support this at every hg commit? Paul: no.
- Tim: only for WD releases? Paul: yes.
- Stephan: I prefer versionIRI as well (but do owl:versionInfo as well)
seed issues
How would you encode in prov-o?
Stephan: Does the removal of responsibilities from dervication deprecate prov:wasApprovedBy? Tim: yes.
New issues
- qualified prop chains
- (prov:qualifiedUsage prov:entity) rdfs:subPropertyOf prov:used .
- Tim tried to add "prov:qualifiedUsage prov:entity -> prov:used", but Protege cut off "prov:entity"
- prov:qualifiedUsage o prov:entity
- done: Stephan will send link.
- Property chains Allow transitivity across multiple properties. For the currently selected property prop3, the editor syntax is prop1 o prop2 [o ...] -> prop3 which means if a prop1 b and b prop2 c then a prop3 c.
- Looks like it works :-) -Tim | http://www.w3.org/2011/prov/wiki/index.php?title=PIL_OWL_Ontology_Meeting_2012-05-07&oldid=7385 | CC-MAIN-2014-52 | refinedweb | 713 | 54.73 |
for connected embedded systems
spawnl()
Spawn a child process, given a list of arguments
Synopsis:
#include <process.h> int spawnl( int mode, const char * path, const char * arg0, const char * arg1..., const char * argn, NULL ); terminate the list with an argument of NULL.
Library:
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
Description:.
Returns:
The spawnl() function's return value depends on the mode argument:
If an error occurs, -1 is returned (errno is set).
Errors:
- E2BIG
- The number of bytes used by the argument() function isn't implemented for the filesystem specified in path.
- ENOTDIR
- A component of the path prefix of the child process isn't a directory.
Examples:
Run myprog as if the user had typed:
myprog ARG1 ARG2
at the command-line:
#include <stddef.h> #include <process.h> int exit_val; ... exit_val = spawnl( P_WAIT, "myprog", "myprog", "ARG1", "ARG2", NULL ); ...
The program is found if myprog is in the current working directory.
Classification:
Caveats:
If mode is P_WAIT, this function is a cancellation point.
See also:() | http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/lib_ref/s/spawnl.html | crawl-003 | refinedweb | 179 | 58.99 |
Graphics.DrawingCombinators
Description
Drawing combinators as a functional interface to 2D graphics using OpenGL.
This module is intended to be imported
qualified, as in:
import qualified Graphics.DrawingCombinators as Draw
Whenever possible, a denotational semantics for operations in this library
is given. Read
[[x]] as "the meaning of
x".
Intuitively, an
Image
a is an infinite plane of pairs of colors and
a's. The colors are what are drawn on the screen when you
render, and
the
a's are what you can recover from coordinates using
sample. The
latter allows you to tell, for example, what a user clicked on.
The following discussion is about the associated data. If you are only
interested in drawing, rather than mapping from coordinates to values, you
can ignore the following and just use
mappend and
mconcat to overlay images.
Wrangling the
a's -- the associated data with each "pixel" -- is done
using the
Functor,
Applicative, and
Monoid instances.
The primitive
Images such as
circle and
text all return
Image Any
objects.
Any is just a wrapper around
Bool, with
(||) as its monoid
operator. So e.g. the points inside the circle will have the value
Any
True, and those outside will have the value
Any False. Returning
Any
instead of plain
Bool allows you to use
Images as a monoid, e.g.
mappend to overlay two images. But if you are doing anything with
sampling, you probably want to map this to something. Here is a drawing
with two circles that reports which one was hit:
twoCircles :: Image String twoCircles = liftA2 test (translate (-1,0) %% circle) (translate (1,0) %% circle) where test (Any False) (Any False) = "Miss!" test (Any False) (Any True) = "Hit Right!" test (Any True) (Any False) = "Hit Left!" test (Any True) (Any True) = "Hit Both??!"
The last case would only be possible if the circles were overlapping.
Note, the area-less shapes such as
point,
line, and
bezierCurve
always return
Any False when sampled, even if the exact same
coordinates are given. This is because miniscule floating-point error
can make these shapes very brittle under transformations. If you need
a point to be clickable, make it, for example, a very small box.
Synopsis
- module Graphics.DrawingCombinators.Affine
- data Image a
- render :: Image a -> IO ()
- clearRender :: Image a -> IO ()
- sample :: Image a -> R2 -> a
- point :: R2 -> Image Any
- line :: R2 -> R2 -> Image Any
- regularPoly :: Int -> Image Any
- circle :: Image Any
- convexPoly :: [R2] -> Image Any
- (%%) :: Affine -> Image a -> Image a
- bezierCurve :: [R2] -> Image Any
- data Color = Color !R !R !R !R
- modulate :: Color -> Color -> Color
- tint :: Color -> Image a -> Image a
- data Sprite
- openSprite :: FilePath -> IO Sprite
- sprite :: Sprite -> Image Any
- data Font
- openFont :: String -> IO Font
- text :: Font -> String -> Image Any
- textWidth :: Font -> String -> R
- unsafeOpenGLImage :: (Color -> IO ()) -> (R2 -> a) -> Image a
- class Monoid a where
- newtype Any = Any {
Documentation
Basic types
The type of images.
[[Image a]] = R2 -> (Color, a)
The semantics of the instances are all consistent with type class morphism.
I.e. Functor, Applicative, and Monoid act point-wise, using the
Color monoid
described below.
Instances
render :: Image a -> IO ()Source
Draw an Image on the screen in the current OpenGL coordinate system (which, in absense of information, is (-1,-1) in the lower left and (1,1) in the upper right).
clearRender :: Image a -> IO ()Source
Selection
sample :: Image a -> R2 -> aSource
Sample the value of the image at a point.
[[sample i p]] = snd ([[i]] p)
Geometry
point :: R2 -> Image AnySource
A single "pixel" at the specified point.
[[point p]] r | [[r]] == [[p]] = (one, Any True) | otherwise = (zero, Any False)
regularPoly :: Int -> Image AnySource
A regular polygon centered at the origin with n sides.
circle :: Image AnySource
An (imperfect) unit circle centered at the origin. Implemented as:
circle = regularPoly 24
convexPoly :: [R2] -> Image AnySource
A convex polygon given by the list of points.
(%%) :: Affine -> Image a -> Image aSource
bezierCurve :: [R2] -> Image AnySource
Colors
Color is defined in the usual computer graphics sense: a 4 vector containing red, green, blue, and alpha.
The Monoid instance is given by alpha composition, described
at
In the semantcs the values
zero and
one are used, which are defined as:
zero = Color 0 0 0 0 one = Color 1 1 1 1
Constructors
Instances
modulate :: Color -> Color -> ColorSource
Modulate two colors by each other.
modulate (Color r g b a) (Color r' g' b' a') = Color (r*r') (g*g') (b*b') (a*a')
tint :: Color -> Image a -> Image aSource
Tint an image by a color; i.e. modulate the colors of an image by a color.
[[tint c im]] = first (modulate c) . [[im]] where first f (x,y) = (f x, y)
Sprites (images from files)
openSprite :: FilePath -> IO SpriteSource
Load an image from a file and create a sprite out of it.
sprite :: Sprite -> Image AnySource
The image of a sprite at the origin.
[[sprite s]] p | p `elem` [-1,1]^2 = ([[s]] p, Any True) | otherwise = (zero, Any False)
Text
text :: Font -> String -> Image AnySource
The image representing some text rendered with a font. The baseline is at y=0, the text starts at x=0, and the height of a lowercase x is 1 unit.
Extensions
unsafeOpenGLImage :: (Color -> IO ()) -> (R2 -> a) -> Image aSource
Import an OpenGL action and pure sampler function into an Image. This ought to be a well-behaved, compositional action (make sure it responds to different initial ModelViews, don't change matrix modes or render or anything like that). The color given to the action is the current tint color; modulate all your colors by this before setting them. | http://hackage.haskell.org/packages/archive/graphics-drawingcombinators/1.4.4/doc/html/Graphics-DrawingCombinators.html | CC-MAIN-2013-20 | refinedweb | 927 | 60.55 |
Free JSP download Books
Free JSP download Books
Java Servlets and JSP
free download books... optimization.
The
Professional JSP Books
The JDC
Free JSP Books
Free JSP Books
Download the following JSP books... simple Java code into the servlet that results from the JSP page
Free Java Projects - Servlet Interview Questions
Free Java Projects Hi All,
Can any one send the List of WebSites which will provide free JAVA projects with source code on "servlets" and "Jsp" relating to Banking Sector? don't know
java/jsp code to download a video
java/jsp code to download a video how can i download a video using jsp/servlet
Struts Books
;
Free
Struts Books
The Apache...
Struts Books
Professional
Struts Books
book
j2me ebook download for free - Java Beginners
j2me ebook download for free could you please send me a link get the j2me ebook for free of cost Hi Friend,
Please visit the following link:
Thanks
Download Search Engine Code its free and Search engine is developed in Servlets
Installation Instruction
Download... engine should work.
Download Search Engine Code
JSF Books
JSF Books
Introduction
of JSF Books
When we... Books
Judging from the job advertisements in employment web sites
Send Email From JSP & Servlet
J2EE Tutorial - Send Email From JSP &
Servlet... webserver, using
JavaMail API, the following code shows how the required... for executing servlets and JSP .
It is a joint effort
Java get Free Memory
the amount of free memory.
Here is the code... will be displayed as:
Download Source Code...
Java get Free Memory
Free Java Training
Free Java training is provided online by Roseindia for all the
non.... This free training is divided into courses that start with
introduction explaining... the language must opt for the online java training
program. It is not only free
Free Java Is it correct?
Free Java Is it correct? Hi,
Is Java free? If yes then where I can download?
Thanks
Hi,
You can download it from
Thanks
searching books
searching books how to write a code for searching books in a library through jsp
Free PHP Books
Free PHP Books
...-or migrating PHP 4 code-here are high-powered solutions you won't find anywhere else..., you will see your first PHP code, as you start writing your first scripts
Free GPS Software
Free GPS Software
GPS Aprs Information
APRS is free GPS software for use with packet... to watch over the internet.
Introduction
of Free GPS Software
Java Get Free Space
will be displayed as:
Download Source Code
... Java Get Free Space
In this section, you will study how to obtain available free
File Download in jsp
File Download in jsp file upload code is working can u plz provide me file download
Free J2EE Online Training
Free J2EE Online Training
The Enterprise Edition of Java popularly known... professionals free online J2EE training
there are various websites but the quality... training. The students are also given training on
JSP (Java Server Pages Java stuffs
Free Java
The Java or JDK is free for development and deployment..., web and mobile devices. Developers are developing both commercial and
free java applications. There are many free Java applications available these
days
Application server to download
Application server to download Which Application server can be downloaded for free to use personally at home to practice JSP,EJB etc
Free Java Shopping Cart,Shopping cart Application
download contains library files and source code.
Getting....
Download the code
...
Free Java Shopping Cart
... code you have written is just creating a file with the same name for download....
this code downloads the file but content is not there(content is zero)please
VoIP Free Software
into a Phone with our Best2 Call VoIP Software. Free to download, easy to use.
*Allows...
VoIP Free Software
What is free VoIP software?
There are many different categories of free VoIP software packages, including:
1. Free VoIP
Top 10 PC Games for Free Download
Top 10 PC Games for Free Download
.games {
clear: both;
width: 100... PC games for free download is one of the most popularly searched items... for free download.
Battlefield 1942
Servlets Books
;
Books : Java Servlet & JSP Cookbook... of lines of code, the Java Servlet and JSP Cookbook yields tips and techniques... leading free servlet/JSP engines- Apache Tomcat, the JSWDK, and the Java Web Server
My Favorite Java Books
Java NotesMy Favorite Java Books
My standard book questions
When I think about textbooks and other books, I usually ask myself some questions:
Would... in a course, should I keep or throw it out?
Language
The following
Free Programmers Magazine
Free Programmers Magazine
Free magazine on Java Technology. The Java Jazz Up is free
monthly magazine... the functionality
of your existing code over the network.
|
JSP PDF books
| Free JSP Books
| Free
JSP Download |
Authentication... through JSP |
Use Break
Statement in jsp code |
Use
Compound Statement in JSP Code |
Connect JSP
with mysql |
Create a Table in
Mysql database
Download Search Engine Code its free and Search engine is developed in Servlets
Display free disk space
in java.io.File. This
method get the free disk space in the bytes.
Code...Description:
This example will demonstrate how to get the free disk space... upon number of drive you have and
the free space available.
Note
PHP find free space, php disk_free_space, disk_free_space
The example Shows of disk_free_space() function in php Program. In this section you will find the example code of disk_free_space() function.
The disk... in php4 and php5.
Here is the code example.
Code for disk_free_space
JAVA JAZZ UP - Free online Java magazine
JAVA JAZZ UP - Free online Java magazine
Our this issue contains:
Java Jazz Up Issue 1 Index... and JavaServer Pages (JSP) for creating enterprise-grade web applications. Earlier
Tomcat Books
and Code Download
Tomcat is an open source web server that processes JavaServer..., there are currently few books and limited online resources to explain the nuances of JSP...
Tomcat Books
Java file get free space
Java file get free space
In this section, you will learn how to find the free space of any file or
directory.
Description of code:
JDK 1.6 provides few new... returns the number of unallocated
bytes in the disc.
Here is the code:
import
java code - JSP-Servlet
java code How to write a java code for sending sms from internet. Hi friend,
public class SMSClientDemo implements Runnable...://
JSP
JSP FILE UPLOAD-DOWNLOAD code USING JSP
How to
J2ME Books
J2ME Books
Free
J2ME Books
J2ME programming camp... editions are available for free via Bruce Eckel's site including all code
Is it true that Swtor2credits.com offer free 10000K Swtor Credits ?
Is it true that Swtor2credits.com offer free 10000K Swtor Credits ? Is it true that Swtor2credits.com offer free 10000K Swtor Credits ?
... are waiting for you to earn on January 24th, 2014. You join, you win! The free
Where has 100% Free Swtor Credits Giveaway ?
Where has 100% Free Swtor Credits Giveaway ? Where has 100% Free..., 2014. You join, you win! The free swtor credits are our heartfelt wishes... and support.While You can buy cheap Swtor Credits with 8% discount code FM8OFF
How to to join Free Swtor Credits Giveaway on Swtor2credits.com?
How to to join Free Swtor Credits Giveaway on Swtor2credits.com? How to to join Free Swtor Credits Giveaway on Swtor2credits.com?
Hello... are waiting for you to earn on January 24th, 2014. You join, you win! The free swtor
We are providing Linux CD's for free.
Trovalds to provide free, open source Unix-like OS. The
code for Linux is freely...Linux! Linux! Linux!
The Best Place to get your Free Linux CD's
Get Your Linux CD's Today
Result of
14-Jan-2003 Red Hat 8.0 Free CD Contest
What
Java Virtual Machine Free Download
Java Virtual Machine Free Download
Java Virtual Machine Free Download
The Java Virtual... version of Java Virtual Machine Free Download is
available without any cost from main
JSP code - JSP-Servlet
JSP code Hi!
Can somebody provide a line by line explanation of the following code. The code is used to upload and download an image.
<... have successfully upload the file by the name of:
Download
/
We are providing Linux CD's for free.
?
Linux is a free Operating System, Which
was developed by Linus Trovalds to provide free, open source Unix-like OS. The
code for Linux is freely available... Installation and Source code
CDs(3+3)
6
240/-
45
Java Virtual Machine Free Download
Java Virtual Machine Free Download
... version of
the software. The latest version of Java Virtual Machine Free Download... Machine Free Download
Visit http:/java.sun.com/javase/downloads/index.jsp
VoIP Free Software
. Free to download, easy to use.
* Allows calls from your computer via... VoIP Free Software
Free
VoIP Software Telephone solution
If you have Java Apps for cell phones
Free Java Apps for cell phones
Here you will find free java apps for cell phones and other mobile...;
There many apps available these days, commercial applications as well as free
java apps
We are providing Linux CD's for free.
Mandrake Linux
Linux! Linux! Linux!
The Best Place to get your Free Linux Mandrake
9.1 CD's
Get Your Linux CD's Today
What is Linux?
Linux is a free Operating System, Which
was developed by Linus Trovalds to provide free, open
JAVA JAZZ UP - Free online Java magazine
JAVA JAZZ UP - Free online Java magazine
... without recompiling.
Maven2 with JPA Example
Download...
Lomboz is an open source and free JEE development environment used
Till Slip Program Error Free - No Main Method
Till Slip Program Error Free - No Main Method Hi there i am a java... the recquirements for the variable names and the necessary code that is needed... that the program recquires a main() method in order to be runned - here is the following code
Download file - JSP-Servlet
Servlet download file from server I am looking for a Servlet download file example
Free PC to PC VoIP Providers
.
7 Day Free Trial
Try it for FREE
Download 3WTel Softphone now for a Free 7.../Desktop Dialer ? Download our free dialer and make calls from your own PC. ...
Free PC to PC VoIP Providers
Babble
Babble is a SIP-based internet
JSP Tutorials
Collection is jsp books in the pdf format. You can download these books and
study it offline.
Free JSP Books
Download the following JSP books.
Free
JSP Download
visualize free disk space
visualize free disk space hi i want to visualize free disk spaces of machines as a bar chart.thanks.
java code to upload and download a file - Java Beginners
java code to upload and download a file Is their any java code to upload a file and download a file from databse,
My requirement is how can i... and Download visit to :
http
Free VoIP Proxies
Free VoIP Proxies
Here are the list of Free VoIP Proxy
that you can use with your... between the STUN client and STUN server.
The current version of the code supports
JSF Training
. Just go through the training and learn
JSF online free of cost.
b) How to download...
through the link Download
code for all examples. Extract the zip file and deploy...
This is free online training course from RoseIndia. The
training course includes all
Online Free MySQL Training
Online Free MySQL Training Where to get the Online Free MySQL Training?
Thanks
Hi,
Thanks
Java Training Free
Java training free available with RoseIndia helps beginners in Java learn... is popular than other language because it is very easy to code in this
language, also...). Java compiler converts the program written in Java into byte
code which
download image using url in jsp
download image using url in jsp how to download image using url in jsp
upload and download mp3
upload and download mp3 code of upload and download Mp3 file in mysql database using jsp
and plz mention which data type used to store mp3 file in mysql database
How to learn programming free?
How to learn programming free? Is there any tutorial for learning Java absolutely free?
Thanks
Hi,
There are many tutorials on Java programming on RoseIndia.net which you can learn free of cost.
These are:
Hi :
diskfreespace php, Find free disk space from PHP program
)
gives the free space present on the disk in bytes
works same as the function disk_free_space()
works in php4 and php5
Code for freediskspace() Function PHP... will shows how you can find the free disk space from your php program | http://roseindia.net/tutorialhelp/comment/94017 | CC-MAIN-2014-35 | refinedweb | 2,090 | 73.68 |
.Net is a technology from the Microsoft Corporation for developing applications for Desktop, web and mobile. All future technologies of Microsoft will depend on .Net..NET is a general-purpose software development platform, similar to Java..Net Technology includes the following features:
Multiple Language Support
.NET supports around 44 languages like C# (Standard ECMA-334 C#), VB.NET, Visual J# (J Sharp), C++/CLI (Standard ECMA-372 C++/CLI) and so on.
Tools used to develop .NET applications
Microsoft Visual Studio.Net IDE and the .Net FrameworkThe .NET Framework SDK must be installed in your machine in order to develop any .NET application as well as in the client machine to run the .NET application.
Components Within .Net FrameworkCLR (Common Language Runtime): It the runtime engine for any .NET applications.The following are the functions of the CLR:
Framework Class Library ( FCL )It contains many classes, interfaces, structures and enumerated types within it, that are arranged in hierarchical manner under various namespaces.A namespace is logical container of classes and other namespaces."System" is the root namespace. (It is included in all types of application development.)The FCL is common for all .NET languages, in other words no separate set of libraries for separate languages.CLS (Common Language Specification)It is an agreement among language designers and class library designers to use a common set of basic language features that all languages need to follow.CTS (Common Type System)It is a data type system that is used in all .NET languages.The following are some data types in the CTS:
Compilation and Execution Procedure of a .Net ApplicationAs shown in the diagram, source code can be in any .NET language, such as C# or VB.Net.If C# then the file name extension is .cs if VB then it is .vb.As per language selection in coding, the language compiler of that language will be used. In other words for C# it is Csc.exe , for Vb.NET it is Vbc.exe.
View All
Build smarter apps with Machine Learning, Bots, Cognitive Services - Start free. | https://www.c-sharpcorner.com/UploadFile/govind77/introduction-to-net/ | CC-MAIN-2018-17 | refinedweb | 346 | 60.61 |
I want to be able to get the data sent to my Flask app. I’ve tried accessing
request.data but it is an empty string. How do you access request data?
from flask import request @app.route("", methods=['GET', 'POST']) def parse_request(): data = request.data # data is empty # need posted data here
The answer to this question led me to ask Get raw POST body in Python Flask regardless of Content-Type header next, which is about getting the raw data rather than the parsed data.
The docs describe the attributes available on the
request object (
from flask import request) during, from a HTML post form, or JavaScript request that isn’t JSON encoded
request.files: the files in the body, which Flask keeps separate from
form. HTML forms must use
enctype=multipart/form-dataor files will not be uploaded.
request.values: combined
argsand
form, preferring
argsif keys overlap
request.json: parsed JSON data. The request must have the
application/jsoncontent type, or use
request.get_json(force=True)to ignore the content type.
All of these are
MultiDict instances (except for
json). You can access values using:
request.form['name']: use indexing if you know the key exists
request.form.get('name'): use
getif the key might not exist
request.form.getlist('name'): use
getlistif the key is sent multiple times and you want a list of values.
getonly returns the first value.
2
To get the raw data, use
request.data. This only works if it couldn’t be parsed as form data, otherwise it will be empty and
request.form will have the parsed data.
from flask import request request.data
0
For URL query parameters, use
request.args.
search = request.args.get("search") page = request.args.get("page")
For posted form input, use
request.form.
For JSON posted with content type
application/json, use
request.get_json().
data = request.get_json()
0
| | https://coded3.com/get-the-data-received-in-a-flask-request/ | CC-MAIN-2022-40 | refinedweb | 314 | 61.53 |
If you create a workbook from an existing xls file, the setLandscape method has
no effect.
If you create a new sheet instead of using the loaded one, the landscape option
works fine. (Commented line)
What i did exactly:
public static void main(String[] args)
{
try{
// Read in a template xls document
InputStream stream = new FileInputStream("c:/temp/template.xls");
HSSFWorkbook wb = new HSSFWorkbook(stream);
HSSFSheet sheet = wb.getSheetAt(0);
// HSSFSheet sheet = wb.createSheet("New Sheet");
HSSFPrintSetup setup = sheet.getPrintSetup();
setup.setLandscape(true);
// Create a row and put some cells in it. Rows are 0 based.
HSSFRow row = sheet.createRow((short) 1);
HSSFCell cell = row.createCell((short) 2);
cell.setCellValue("XXXX");
// Write the output to a file
FileOutputStream fileOut = new FileOutputStream("c:/temp/workbook.xls");
wb.write(fileOut);
fileOut.close();
}catch (Exception e) {
e.printStackTrace();
}
}
The template.xls is simple a new created and saved xls document. (I would have
attached it, but did not know how.)
Used versions:
POI: poi-3.0.1-FINAL-20070705.jar
Java: jdk150_04
Created attachment 21037 [details]
the template file
Created attachment 21038 [details]
The workbook i created with the given reproduce code
I've added two new tests to
src/testcases/org/apache/poi/hssf/usermodel/TestHSSFSheet.java
One of these does setLandscape on an existing sheet, the other on a newly
created sheet. (Methods are testPrintSetupLandscapeNew() and
testPrintSetupLandscapeExisting())
After doing a save and a re-open, getLandscape is correctly working for both of
them.
Could you please create a failing testcase for your problem, and attach that to
the bug? As it is, I'm unable to replicate your problem.
I am able to reproduce this behaviour. The methods work fine in xlsx file while they dont work in xls file. The client in MS Excel 2007.
PrintSetup setup = s.getPrintSetup();
setup.setPaperSize(PrintSetup.LEGAL_PAPERSIZE);
setup.setLandscape(true);
Print properties for an xlsx file show the paper size as LEGAL and it is also set to landscape mode, whereas in xls file the paper size is A4 (default) and its set to portrait.
Note: The get methods show the correct values
I am using POI 3.5 final
The following code can be used to reproduce this problem.
Create a standard blank workbook and save it as test.xls.
Compile and run the following TestPOI.java source.
Open the out.xls file in excel and choose 'File' -> 'Page Setup' from the application menu. The page orientation has not changed to Landscape.
import java.io.*;
import org.apache.poi.hssf.usermodel.*;
import java.util.*;
public class TestPOI {
public static void main(String[] args) throws Exception{
HSSFWorkbook workbook = new HSSFWorkbook(new FileInputStream("test.xls"));
int sheetCount = workbook.getNumberOfSheets();
for (int i = 0; i < sheetCount; i++){
System.out.println(i);
HSSFSheet sheet = workbook.getSheetAt(i);
HSSFPrintSetup print = sheet.getPrintSetup();
print.setLandscape(true);
}
workbook.write(new FileOutputStream("out.xls"));
}
}
(In reply to comment #6)
This problem still occurs in poi-3.6-20091214
It seems Excel has some strange checking on the values set in the PrintSetup, so if you do not set all of them to useful values, Excel may ignore the settings and use defaults, e.g. what did work for me was the following:
setup.setLandscape(true);
setup.setPaperSize(PrintSetup.A4_PAPERSIZE);
setup.setScale((short)100);
setup.setValidSettings(false);
Also the setValidSettings() could be interferring here. Generally it should be "false" to not make Excel use default values!
However I don't think we can re-implement the checking that Excel does internally as it is not part of the Spec and thus a complete mystery to us, so I think this is WONTFIX for us unless someone comes up with a better way to handle the print settings. | https://bz.apache.org/bugzilla/show_bug.cgi?format=multiple&id=43693 | CC-MAIN-2021-31 | refinedweb | 615 | 51.24 |
continuing theme of fast serialization library, i should say that this
library, Streams, also implements much faster general (text) I/O.
below is it's results comparing to well-known Handles:
===8<==============Original message text===============
> Do you have a URL or darcs repository? - docs
AltBinary is not yet documented, but al least you can do
import Data.AltBinary
import Data.Binary.ByteAligned
and use the well-known NewBinary (GHC Binary) interface with
openBinMem, get, put_ and all other functions.
One more Streams-advertizing is the following table of text I/O
speeds. It constructed, again, using 100 mb in each test. Strings I/O
was tested using 15-char and 80-char lines (see conclusion at the end):
Handle:
vPutChar: 150.336 secs (user: 566.974 secs)
vGetChar: 100.905 secs (user: 92.864 secs)
vPutStrLn15: 34.310 secs (user: 31.165 secs)
vGetLine15 : 18.397 secs (user: 16.914 secs)
vPutStrLn80: 9.874 secs (user: 8.843 secs)
vGetLine80 : 10.955 secs (user: 9.974 secs)
vGetContents: 16.255 secs (user: -381.408 secs)
Streams: File with locking:
vPutChar: 48.900 secs (user: -384.632 secs)
vGetChar: 48.029 secs (user: 44.304 secs)
vPutStrLn15: 10.986 secs (user: 9.854 secs)
vGetLine15 : 11.316 secs (user: 10.355 secs)
vPutStrLn80: 5.618 secs (user: 4.987 secs)
vGetLine80 : 8.672 secs (user: 8.032 secs)
vGetContents: 14.571 secs (user: 13.379 secs)
Streams: File:
vPutChar: 3.065 secs (user: 1.222 secs)
vGetChar: 1.232 secs (user: 1.082 secs)
vPutStrLn15: 5.388 secs (user: 4.737 secs)
vGetLine15 : 8.523 secs (user: 7.791 secs)
vPutStrLn80: 4.687 secs (user: 4.076 secs)
vGetLine80 : 7.822 secs (user: 7.200 secs)
vGetContents: 14.831 secs (user: 13.640 secs)
Streams: Memory-mapped file:
vPutChar: 1.022 secs (user: 1.021 secs)
vGetChar: 0.882 secs (user: 0.861 secs)
vPutStrLn15: 4.667 secs (user: 4.246 secs)
vGetLine15 : 7.451 secs (user: 6.920 secs)
vPutStrLn80: 3.926 secs (user: 3.705 secs)
vGetLine80 : 8.443 secs (user: 6.800 secs)
Streams: File with UTF-8 encoding:
vPutChar: 8.853 secs (user: 8.042 secs)
vGetChar: 9.113 secs (user: 8.312 secs)
vPutStrLn15: 10.005 secs (user: 8.933 secs)
vGetLine15 : 13.770 secs (user: 12.518 secs)
vPutStrLn80: 9.683 secs (user: 8.442 secs)
vGetLine80 : 13.219 secs (user: 12.198 secs)
vGetContents: 24.616 secs (user: 22.613 secs)
Streams: MemBuf:
vPutChar: 0.951 secs (user: 0.851 secs)
vGetChar: 0.711 secs (user: 0.591 secs)
vPutStrLn15: 4.446 secs (user: 4.166 secs)
vGetLine15 : 7.130 secs (user: 6.599 secs)
vPutStrLn80: 3.746 secs (user: 3.575 secs)
vGetLine80 : 7.161 secs (user: 6.479 secs)
vGetContents: 14.361 secs (user: 13.329 secs)
as you can see, now, after all my optimizations, text I/O speeds are
seriously limited by speed of lazy strings itself
(getline/putline/getcontents are several times slower than
getchar/putchar although it should be vice versa - much faster!). i
foresee that Streams + Fast Packed Strings together will yield a
breakthrough in GHC I/O speed, and this can be implemented even
without waiting for GHC 6.6
===8<===========End of original message text===========
--
Best regards,
Bulat mailto:Bulat.Ziganshin <at> gmail.com | http://article.gmane.org/gmane.comp.lang.haskell.general/13625 | crawl-002 | refinedweb | 549 | 75.06 |
resize_callback
The callback that is invoked when an orientation change occurs that the app must respond to.
Synopsis:
#include <glview/glview.h>
typedef void(* resize_callback)(unsigned int width, unsigned int height, void *callback_data);
Since:
BlackBerry 10.0.0
Library:libglview (For the qcc command, use the -l glview option to link against this library)
Description:
The application descriptor file (bar-descriptor.xml) specifies the orientation behavior for an app. If the behavior is set to default or auto-orient, then any registered resize_callback will be invoked whenever the device is turned from landscape to portrait or vice-versa. Turning the device 180 degrees does not result in executing the resize_callback.
Last modified: 2014-09-30
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.glview.lib_ref/topic/resize_callback.html | CC-MAIN-2018-09 | refinedweb | 131 | 51.04 |
Deploying and managing scalable web services with Flume
Flume architecture
Flume is a distributed, reliable, and available service used to collect, aggregate, and move large amounts of streaming event data from many sources to a centralized data store.
Figure 1. Flume architecture).
InfoSphere® BigInsights™ enables the continuous analysis and storage of streaming data with low latency. InfoSphere Streams can be used to configure the agent and collector processes described above (see Related topics). Alternatively, Flume can be used to collect data on a remote location, and a collector can be configured on an InfoSphere BigInsights server to store data on the distributed file system (DFS). In this article, however, we will be using Flume as both agent and collector processes, together with a Hadoop Distributed File System (HDFS) cluster as storage.
Data flow model. The channel is a passive store that keeps an event until it is consumed by a Flume sink. For example, a file channel uses the local file system; the sink extracts the event from the channel and puts it in an external repository like the HDFS, or forwards it to the Flume source of the next Flume agent (next hop) in the flow; the source and sink within the given agent run asynchronously with the events staged in the channel.
There can be different formats used by the source for different purposes. For example, an Avro Flume source can be used to receive Avro events from Avro clients. An Avro source forms half of Flume's tiered collection support. Internally, this source uses Avro's NettyTransceiver to listen for and handle events. It can be paired with the built-in AvroSink to create tiered collection topologies. Other popular network streams that Flume uses are Thrift, Syslog, and Netcat.
Avro
Apache's Avro is a data serialization format. It is an RPC-based framework, used widely by Apache projects — such as Flume and Hadoop — for data storage and communication (see Related topics). The purpose of the Avro framework is to provide rich data structures, a compact and fast binary data format, and simple integration with dynamic languages, such as C++, Java™, Perl, and Python. Avro uses JSON for its Interface Description Language (IDL) to specify data types and protocols.
Avro relies on a schema stored with data. This enables fast and easy serialization since there are no per-value overheads. During the remote-procedure call (RPC), the schema is exchanged during client-server handshake. Using Avro, correspondence between the fields can be easily resolved, since it uses JSON.
Reliability, recoverability, and multi-hop flows
Flume uses a transactional design to guarantee reliability of event delivery. Transactional design corresponds to each event being treated as a transaction, and the events are staged in a channel on each agent. Each event is delivered to the next agent (like source bar) or terminal repository (like HDFS) in the flow. The events are removed from a channel only after they are stored in the channel of the next agent or in the terminal repository, thus maintaining a queue of current events until the storage confirmation is received. This happens through the source and the sink, which encapsulate the storage or retrieval information in a transaction provided by the channel. This ensures end-to-end reliability of the flow for single-hop message delivery semantics in Flume.
Recoverability is maintained through staging events in the channel, which manages recovery from failure. Flume supports a durable file channel that is backed by the local file system (essentially maintaining state on permanent storage). If a durable file channel is used, any events lost — in case of a crash or system failure — can be recovered. There is also a memory channel that stores the events in an in-memory queue, which is faster, but any events still left in the memory channel when an agent process dies cannot be recovered.
Flume also.
Figure 2. Multi-hop flows
System architecture
In this section, we will discuss how to set up a scalable web service using Flume. For this purpose, we will need code to read RSS feeds. We also need to configure Flume agents and collectors to receive RSS data and store it in the HDFS.
Flume agent configuration is stored in a local configuration file. This is similar to a Java properties file and is stored as a text file. Configurations for one or more agents can be specified in the same configuration file. The configuration file includes properties of each source, sink and channel in an agent and how they are wired together to form data flows.
An Avro source needs a hostname (IP address) and a port number to receive
data. A memory channel can have maximum queue size (capacity), and an HDFS
sink needs to know the file system URI and path to create files. An Avro
sink can be a forward sink (
avro-forward-sink), which can
forward to the next Flume agent.
The idea is to create a miniature Flume distributed feed (log events) collection system. We will use agents as nodes, which get data (RSS feeds in this case) from an RSS feed reader. These agents will pass on these feeds to a collector node that will be responsible for storing these feeds into an HDFS cluster. In this example, we will use two Flume agent nodes, one Flume collector node, and a three-node HDFS cluster. Table 1 describes sources and sinks for the agent and collector nodes.
Table 1. Sources and sinks for agent and collector nodes
Figure 3 shows the architectural overview of our multi-hop system with two agent nodes, one collector node, and an HDFS cluster. The RSS web feed (see code below) is an Avro source for both the agents and stores feeds in a memory channel. As the feeds pile up in the memory channel of the two agents, the Avro sinks start sending these events to the collector node's Avro source. The collector also uses a memory channel and an HDFS sink to dump feeds into the HDFS cluster. See below for agent and collector configurations.
Figure 3. Architectural overview of multi-hop system
Let's look at how we can spin up a simple news reader service using Flume. The following Java code describes an RSS reader that reads RSS web sources from the BBC. As you may already know, RSS is a family of web feed formats used to publish frequently updated works, such as blog entries, news headlines, audio, and video, in a standardized format. RSS uses a publish-subscribe model to check the subscribed feeds regularly for updates.
The Java code uses Java's Net and Javax XML APIs to read the contents of a URL source in a W3C Document, and processes that information, before writing the information to the Flume channel.
Listing 1. Java code (RSSReader.java)
import java.net.URL;; public class RSSReader { private static RSSReader instance = null; private RSSReader() { } public static RSSReader getInstance() { if(instance == null) { instance = new RSSReader(); } return instance; } public void writeNews() { try { DocumentBuilder builder = DocumentBuilderFactory.newInstance(). newDocumentBuilder(); URL u = new URL(" ?edition=uk#"); Document doc = builder.parse(u.openStream()); NodeList nodes = doc.getElementsByTagName("item"); for(int i=0;i<nodes.getLength();i++) { Element element = (Element)nodes.item(i); System.out.println("Title: " + getElementValue(element,"title")); System.out.println("Link: " + getElementValue(element,"link")); System.out.println("Publish Date: " + getElementValue(element,"pubDate")); System.out.println("author: " + getElementValue(element,"dc:creator")); System.out.println("comments: " + getElementValue(element,"wfw:comment")); System.out.println("description: " + getElementValue(element,"description")); System.out.println(); } } catch(Exception ex) { ex.printStackTrace(); } } private String getCharacterDataFromElement(Element e) { try { Node child = e.getFirstChild(); if(child instanceof CharacterData) { CharacterData cd = (CharacterData) child; return cd.getData(); } } catch(Exception ex) { } return ""; } protected float getFloat(String value) { if(value != null && !value.equals("")) { return Float.parseFloat(value); } return 0; } protected String getElementValue(Element parent,String label) { return getCharacterDataFromElement((Element)parent.getElements ByTagName(label).item(0)); } public static void main(String[] args) { RSSReader reader = RSSReader.getInstance(); reader.writeNews(); } }
The following code listings show sample configuration files for agents (10.0.0.1 and 10.0.0.2) and a collector (10.0.0.3). The configuration files define semantics for source, channel, and sink. For each source type, we also need to define type, command, standard error behavior and failure options. For each channel, we need to define the channel type. The channel type, capacity (maximum number of events stored in the channel) and transaction capacity (maximum number of events the channel will take from a source or give to a sink per transaction) have to be defined as well. Similarly, for each sink type, we need to define type, hostname (IP of the recipient of the event), and port. In case of an HDFS sink, the directory path to the HDFS head name node is provided.
Listing 2 shows sample configuration file 10.0.0.1.
Listing 2. Agent 1 configuration (flume-conf.properties on 10.0.0.1)
# The configuration file needs to define the sources, # the channels and the sinks. # Sources, channels and sinks are defined per agent, # in this case called 'agent' 3 shows sample configuration file 10.0.0.2.
Listing 3. Agent 2 configuration (flume-conf.properties on 10.0.0.2) 4 shows the collector configuration file 10.0.0.3.
Listing 4. Collector configuration (flume-conf.properties on 10.0.0.3)
Collector configuration (flume-conf.properties on 10.0.0.3): # The configuration file needs to define the sources, # the channels and the sinks. # Sources, channels and sinks are defined per agent, # in this case called 'agent' agent.sources = avro-collection-source agent.channels = memoryChannel agent.sinks = hdfs-sink # For each one of the sources, the type is defined agent.sources.avro-collection-source.type = avro agent.sources.avro-collection-source.bind = 10.0.0.3 agent.sources.avro-collection-source.port = 60000 # The channel can be defined as follows. agent.sources.avro-collection-source.channels = memoryChannel # Each sink's type must be defined agent.sinks.hdfs-sink.type = hdfs agent.sinks.hdfs-sink.hdfs.path = hdfs://10.0.10.1:8020/flume #Specify the channel the sink should use agent.sinks.hdfs
Next steps
Now that we have the code to read RSS feeds and we know how to configure Flume agents and a collector, we can set up the whole system in three steps.
Step 1
The compiled Java code should be executed as a background process to keep it running.
Listing 5. Compiled Java code
$ javac RSSReader.java $ java -cp /root/RSSReader RSSReader > /var/log/flume-ng/source.txt &
Step 2
Before starting the agents, you need to modify the configuration file using the template provided under $FLUME_HOME/conf/ directory. Once the configuration files are modified, the agents can be started using the following commands.
Listing 6 shows starting the agent on node 1.
Listing 6. Starting the agent on node 1
Agent node 1 (on 10.0.0.1): $ $FLUME_HOME/bin/flume-ng agent -n agent1 -c conf -f $FLUME_HOME/conf/flume-conf.properties
Listing 7 shows starting the agent on node 2.
Listing 7. Starting the agent on node 2
Agent node 2 (on 10.0.0.2): $ $FLUME_HOME/bin/flume-ng agent -n agent2 -c conf -f $FLUME_HOME/conf/flume-conf.properties
Here,
$FLUME_HOME is defined as an environmental variable
(bash or .bashrc), which points to the home directory of Flume
(/home/user/flume-1.4/, for example).
Step 3
Listing 8 starts the collector. It is worth noting that the configuration files are responsible for how a node behaves, such as whether it is an agent or a collector.
Listing 8. Collector node (on 10.0.0.3)
$ $FLUME_HOME/bin/flume-ng agent -n collector -c conf -f $FLUME_HOME/conf/flume-conf.properties
Conclusion
In this article, we introduced Flume, a distributed and reliable service for efficiently collecting large amounts of log data. We described how it can be used to deploy single-hop and multi-hop flows, depending on need. We also described a detailed example in which we deployed a multi-hop news aggregator web service. In the example, we read RSS feeds using Avro agents and used an HDFS collector to store the newsfeeds. Flume can be used to build scalable distributed systems to collect large streams of data.
Downloadable resources
Related topics
- Read a Flume tutorial.
- Learn more about Avro.
- Read "Visualizations and analytics for supply chains" to learn about visualizing data for supply chains.
- Read "Essentials, Part 1, Lesson 1: Compiling and Running a Simple Program."
- Check out the Big Data Glossary, by Pete Warden, O'Reilly Media, ISBN: 1449314597, 2011.
- Download InfoSphere BigInsights Quick Start Edition, available as a native software installation or as a VMware image.
- Download InfoSphere Streams, available as a native software installation or as a VMware image.
- Use InfoSphere Streams on IBM SmartCloud Enterprise. | https://www.ibm.com/developerworks/library/bd-flumews/index.html | CC-MAIN-2020-05 | refinedweb | 2,143 | 57.67 |
The.
The next step in the cookbook is
creating a connection to a WMI namespace.
We create a
WbemLocator and connect it to the desired
namespace.
Step three in the cookbook is
setting the security context on the interface,
which is done with the amusingly-named function
CoSetProxyBlanket.
Once we have a connection to the server,
we can ask it for all (
*) the
information from
Win32_.
We know that there is only one computer in the query, but I'm going to write a loop anyway, because somebody who copies this code may issue a query that contains multiple results.
For each object, we print its Name, Manufacturer, and Model.
And that's it.
Cast is to convert from the bstr_t class to WCHAR, but shouldn't the format string then be L"…"?
@SI, the L"…" would be necessary for wprintf. Anyway, you nailed the exercise: msdn.microsoft.com/…/btdzb8eb.aspx
Especially since most mobo manufacturers put “System manufacturer” in that field (and “System Product Name” in the model field). The motherboard manufacturer field is more reliable.
Ok, today I found out that PWSTR is basically TCHAR*, so it depends on UNICODE being defined.
"we can ask it for all (*) the information"
Nitpicker's corner: Today there is no nitpicker's corner.
> %ls always means an ANSI value
copy/paste error: should read "%ls always means a Unicode value"
We had exactly the opposite problem, our codebase is littered with (const char*)bstr_t(…) wrappers around unicode status messages to convert them to ansi for vsprintf logging calls, instead of using %ls directly.
30-some lines of code to…retrieve a string???
> I found out that PWSTR is basically TCHAR*, so it depends on UNICODE being defined
Hmm… there seems to be some confusion about how L"" and "" etc. work.
printf always uses "…"; this is always ANSI. wprintf always uses L"…"; this is always Unicode.
_tprintf is either printf or wprintf, depending on whether UNICODE/_UNICODE is defined. This always uses TEXT("…").
Consider the following:
printf("foo"); // OK
wprintf(L"foo"); // OK
_tprintf(TEXT("foo")); // OK
The other six possibilities (e.g. wprintf(TEXT("foo"))) are all either compiler or stylistic errors.
%ls corresponds to a value which is a Unicode string; %hs corresponds to a value which is an ANSI string. %s by itself corresponds to a string **which is of the same type as the format string itself** (regardless of whether UNICODE/_UNICODE is defined.)
Let us suppose that the string we are trying to print contains some non-ANSI characters, e.g.: Contosó. Consider the following:
printf("%s", "foo"); // OK; %s in an ANSI format string means an ANSI value
printf("%hs", "foo"); // OK; %hs always means an ANSI value
printf("%ls", L"foo"); // OK; %ls always means an ANSI value
printf("%ls", L"Contosó"); // Iffy; prints "Contoso", Unicode value is downconverted to ANSI format string
wprintf(L"%s", L"foo"); // OK
wprintf(L"%s", L"foo"); // OK
wprintf(L"%hs", "foo"); // OK
wprintf(L"%ls", L"Contosó"); // OK (prints Contosó)
So I think Raymond's printf("%ls", (LPCWSTR)GetPropertyValue(…)) is iffy, because any Unicode data in the property value would be downconverted to ANSI. I would prefer wprintf(L"%ls", (LPCWSTR)GetPropertyValue(…)).
I overlooked the l in the %ls, that makes much more sense than the PWSTR forcing the compiler to use the char* operator due to current mode. But if we are converting it down to ansi anyhow, why not use the const char* operator present in the bstr_t class, which caches the copy?
@Maurits [MSFT]: wprintf(L"%s", L"foo"); // OK
This will not work on a POSIX system, %s is _always_ char* there. The MS implementation is much less painful to work with and allows both TCHAR types to build from the same source.
@skSdnW: If you want portable code, or escape bloated COM, you have to use DMI. Then POSIX may be relevant.
ho, the static cast is necessary not because of wchar/tchar/char issues (BSTR is an OLESTR is a WCHAR always ) but because printf is a variable parameter list and the conversion for bstr_t to use to push on the stack is ambiguous. would not have been a problem if it were a BSTR directly
@skSdnW: Well, there's little reason to fiddle with UTF16 on POSIX systems. UTF8 is the variable byte encoding of choice, and UTF32 handles full codepoints, so no need to take any compromise solution which is neither ascii-compatible nor fixed width. But if you really have to, you can use the proper defines…
@J. Peterson: If you only need to retrieve WMI data, use the right tool for the job (PowerShell)! In other news, you can write a Windows program in assembly code, but it will take many more lines of code than C.
This is mildly off-topic.
WMI CIM Studio (which lets you browse and modify WMI objects) is implemented as a web page containing ActiveX controls. The only application that can host this (as far as I know) is Internet Explorer. Unfortunately, it has stopped working in IE 11.
Does anyone know either a) how to get it to work or b) an alternative WMI browser?
@laonianren: Host the ActiveX browser control in a vb6 app and use that to load the page.
This is pretty neat, though, is it not optimistic to expect a PC to have a manufacturer? So many PCs are built by their owners anyway! It returns this on my system:
Name = T-PC
Manufacturer = To Be Filled By O.E.M.
Model = To Be Filled By O.E.M.
@Raymond, that makes sense! Next part: how to modify this information :).
@Deduplicator: Who said anything about UTF16? wchar_t is usually 32bit on other systems.
The point is, working with printf functions where %s does not match the type of the format string is annoying if the code is going to be used on Windows and POSIX…
While we're on the subject of Unicode, how about also using wmain instead of main? =)
extern "C" int __cdecl wmain(int argc, wchar_t **argv, wchar_t **envp)
Well, we have seven "To Be Filled By O.E.M." for both Manufacturer and Model plus 5 for only Model (there Manufacturer = Mainboard Manufacturer). All bought at a local vendor (not everyone buys dell-only ;-))
But it sure would be nice if at least the big ones could get their names consistent: "HP", "Hewlett-Packard", "Hewlett-Packard Company", the same for Dell, Siemens etc. So you need an additional step consolidating anyway, plus regular maintenance when they come up with something different.
To the program: just exec() wmiq and parse the result string. No need to handle wmi yourself! ;-)
@skSdnW: The worst part under POSIX (or really, under the ISO/IEC C standard) is not that %s is "always ANSI, all the time" (equivalent to %hs). The worst is that there is NO WAY AT ALL to specify a string argument "The same width as the function"!
Quite frankly, I think that part of the C standard is dumb and I was flabbergasted when I discovered it. The Windows way is much easier to use (especially with the late addition to the C++ standard that says L"blah" "blah" is no longer a string width mismatch, invaluable when working with macros).
[It will probably take you even more lines of code to get the number of unread messages in the user's Yahoo inbox, and that's just a 32-bit integer! -Raymond]
I'm pretty sure that's 6 lines.
int r;
char buf[4096];
snprintf(buf, 4096, "wget -O – https://… ", username, stored_password);
FILE *f = popen(buf, "r");
fscanf(f, "%d", &r);
return r;
Joshua: I'm pretty sure you must first log in with OAuth, then parse the response, and send the appropriate cookie in your inbox query. The most complicated part is probably parsing the response to the authentication request.
I didn't think I was going to answer, but Gabe did so I will. The last time I looked up the call for any such thing OAuth didn't even exist.
[If not, then I can do it in one line of C++: system("wmic computersystem get name, manufacturer, model"); -Raymond]
The only reason I think you cheated is you didn't parse the result.
[The program takes no command line arguments. Who cares! (Don't forget people: Little Program.) -Raymond]
Butbutbut it saves 512 bytes and starts up unnoticeably faster!…… =)
I just make it a habit to use wmain / wWinMain so I don't mess up for real programs. =^-^=
C:Projectstemptests>echo extern "C" int __cdecl main() { return 0; } > unicode.cpp
C:Projectstemptests>cl /MT /Ox /nologo unicode.cpp > nul
C:Projectstemptests>ls -l unicode.exe
-rwxrwxrwx 1 user group 36864 Jan 8 14:33 unicode.exe
C:Projectstemptests>echo extern "C" int __cdecl wmain() { return 0; } > unicode.cpp
C:Projectstemptests>cl /MT /Ox /nologo unicode.cpp > nul
C:Projectstemptests>ls -l unicode.exe
-rwxrwxrwx 1 user group 36352 Jan 8 14:34 unicode.exe
int main() {
std::cout << "Plz run yr favourite Yahoo! mail reader and enter the number of Inbox messages, followed by the Enter key: ";
char buf[4]; gets(buf); std::cout << "Result: " << buf << std::endl;
}
Note that the first line can be omitted by taking advantage of long file name support: name the program the same as the intro line (remove/replace unsupported characters) and its usage is self-documenting. Additionally the program becomes even more flexible and re-usable since the name of the program (or shortcuts to it) can be changed to reflect the desired task. Finally, this snippet does not include external code executed and human motor/brain activity from computer start to user entry.
@Total spirit violator:
I like (1) the use of gets, one of the worst functions ever; (2) the assumption that the number of inbox messages is at most 3 digits (definitely not true for many people); (3) the juxtaposition of gets with c++ iostreams.
Evan: Sure, 3 chars are allocated to the buffer, but what's really going to happen if there are more? Crash on exit? At that point the program has already done its job. | https://blogs.msdn.microsoft.com/oldnewthing/20140106-00/?p=2163/ | CC-MAIN-2016-50 | refinedweb | 1,701 | 62.48 |
Code Like a Fighter Pilot, Design Like an Engineer — and Measure What?
The Agile Zone is brought to you in partnership with JetBrains. Learn how Agile Boards in YouTrack are designed to help teams plan, visualize and manage their work in an efficient manner, with support for both Scrum and Kanban processes.
- How do you measure 'agility'?
- Who should be worrying about 'agility' in a given project -- developers or managers?
Both questions have easy, unsatisfactory answers (1: cycle time; 2: both). But I'd like to probe a little deeper than bean-counter level and think about these questions from the point of view of design.
Now maybe too much up-front design is a bad thing -- at least for certain kinds of software (e.g. consumer-facing & small-scale). But engineering involves design at every stage of development. The question is just how much and where.
And in fact some of the most extreme cases of up-front design – large-scale military, manufacturing, aerospace – jumped on 'agility' almost before anyone else. If this seems counter-intuitive ('how can factories fail early any often??'), read on.
Agile in 1991: responding to uncertainty
Well, of course the 'agile' buzzword is a lot older than the Agile Manifesto. Government and industry have been bloviating about 'agile' since at least 1991, when the Iacocca Institute compiled and discussed vast swaths of research under a US Navy contract, publishing the results as 21st Century Manufacturing Enterprise Strategy: An Industry-Led View.
This 1992 paper by Rick Dove summarizes the state of the buzzword one year later. Even the briefest glance at the bottom of page 1 will show you that swirling buzz-talk about 'The Agile Enterprise' pre-dates the Agile Manifesto by a decade. Dove observes that 'The Agile Enterprise' must contain systems that:
"..must be structured to allow decisions at the point of knowledge, to encourage the flow of information, to foster concurrent cooperative activity, and to localize the side-effects of sub-system change."
Sound familiar? Sure, these principles flow naturally from good software practices (separation of concerns! modularity! robust namespaces!). But the basic motivator for this early, manufacturing-and-military-centric concept of 'agility' was the astonishing success of 'lean' manufacturing in 1980s Japan. The basic need for enterprise agility comes from the same features of the modern world that modern software itself is designed to respond to: unpredictability, uncertainty, risk, and variation.
The history of the importation of the enterprise-centric picture 'agile' into the world of software development -- and the deep early conceptual rifts between (1980s Japanese) 'lean' and (1990s American-as-planned) 'agile' manufacturing -- is fascinating in itself. (Basically: lean is mean and agile is forgiving. Which you probably knew already.)
But agility -- in the sense of 'responding to uncertainty', not 'shortening cycle times' -- didn't deeply affect the enterprise until software developers led the way.
Government and manufacturing developed the concept, and yet developers often feel -- and in so, so many cases, this feeling is obviously spot-on -- that their non-technical managers don't really get it (however much they buzz the word). Why?
This is (partly) what I was trying to figure out in Seattle.
I have a few initial thoughts. For one thing, it seems to me that the early manufacturing-level literature on 'agile' doesn't articulate iteration nearly as explicitly as developers do. (This is understandable -- military and manufacturing can't pivot very quickly, because ships and factories are gigantic and incredibly expensive.)
But even Royce's classic 'waterfall' paper strongly emphasized the need for iteration between successive steps (and was written from experience building software for massive aeronautical projects at Lockheed). [Royce was saying that iterative-free, purely linear development doesn't work.] That's over two decades before Scrum. But Royce introduced iteration to avoid from-scratch redesign in the face of failure to meet external requirements -- not specifically to respond to uncertainty.
Contrarwise, consider the famously iterative OODA loop, also developed by the military, with the aid of software, specifically for fighter pilots, in order to deal with the radical environmental uncertainty generated by enemies trying to shoot you out of the sky.
Agile in the 1970s: closing the OODA loop
You've probably heard about the OODA loop -- Observe, Orient, Decide, Act -- and how it was developed (partly by computer simulation), who it was developed for (fighter pilots and designers), why it was developed (because Soviet-built fighters were surprisingly effective against 'more advanced' American fighters in Vietnam), and maybe some of its immediate effects (revivification of the F15 program; development of the F16).
You may also have read some of Boyd's musings on the philosophical underpinnings of the concept (note the considerable conceptual overlap with Ken Schwaber's initial paper on Scrum -- and the centrality of uncertainty). A bit further afield, you might find a bunch of exciting thoughts in Boyd's massively influential Patterns of Conflict. (Briefly: rapid pivoting confuses enemies; you can win by getting 'inside' the enemy's OODA loop.)
Like the early pictures of the Agile Enterprise, Boyd and Schwaber both focus on uncertainty. But uncertainty is a feature of the environment. Given an uncertain environment, the most important factor on the part of the agile enterprise/developer/pilot is time. If you can't predict how the environment will change, then you better be able to respond quickly when it does.
As a result, Boyd's preferred fighter was the F16 (for its maneuverability), and metrics like cycle time have received a great deal of well-justified attention in the world of ALM.
Agility vs. speed - the line strikes back?
But speed isn't quite what the fighter pilot needs. More important than straight-line speed is the rate of change of direction. Agile jets and agile codebases change direction quickly; they don't just move fast. The distinction is straightforward in concept, of course, but all too easy to miss in practice.
The reason is partly a function of technical psychology: flow is good, and immensely satisfying. And the whole point of a flow is that Boyd's O and O are completely invisible. There's just D and A, and of course you're Ding and Aing correctly every time.
But every part of a laminar flow moves in one direction. A speed-maxed laminar flow needs to slow down in order to avoid turbulence during unexpected shifts in direction. To me this feels like a certain kind of coding. Anyone who has buzzed their way through a cowboy-dev session knows the feeling. The code is flowing thrillingly, but the momentum vector is getting too big to change.
Even assuming no technical debt (because you're totally in the zone, you're not taking shortcuts, everything is coming out just perfect), the straight-line productivity might be advancing too fast. At least, too fast to respond to uncertainty -- maybe fast enough to skip the first to O's. Maybe 'hurtling' is a good word.
(Paul Virilio thinks that the failure to distinguish straight-line speed from rate of change of direction is endemic to the entire modern world. I don't know how true that is, but I do understand the feeling, especially while 'in the zone'.)
What do you think?
So I wonder whether too much emphasis on cycle time, or time to market, or other purely straight-line-speed-focused metrics are perhaps offering 'straight line' thinking a little revenge over our lovely Scrum iterations. Cycles let you reorient; retrospectives close your OODA loop and prepare to open the next. And if you don't ever close the loop, then you're just in one huge loop and your enemy will shoot you down...
We'll talk more about methods and solutions later. DevOps is a start, but ops considerations aren't the same as good design-level engineering (which needs all four stages of the loop). Smarter user acceptance testing is probably another; and static analysis needs to fit in somewhere, because the real driver of software agility is the code itself. (I also have a feeling that trees are going to prove annoyingly helpful..but I'm not sure.)
But right now I'm trying to figure out what metrics can really be used to measure your agility rather than speed -- that is, the rate at which software developers (and projects) change direction to respond to the unexpected and uncertain.
And I'd love to hear what you think.
Are pure-speed metrics harmful, neutral, helpful, meaningless, and if so how and why? Let's assume (unrealistically) that there is no technical debt -- because we're trying to measure degree of agility, not degree of perfection.
How would you measure software agility in the sense described here -- as the rate at which software developers (and projects) change direction to respond to the unexpected and uncertain? }} | https://dzone.com/articles/agile-fighter-jets | CC-MAIN-2015-48 | refinedweb | 1,482 | 52.6 |
I am new to video analysis and python and is working on a project on video composition. I have three videos that overlap.
for example below are the start and end time of the videos
video1 video2 video3
start 19-13-30 19-13-25 19-13-45
end 19-13-55 19-13-35 19-13-59
Suppose you have a list of starttime and endtime of the videos [[video1_starttime,video1_endtime],[video2_starttime,video2_endtime],[video3_starttime,video3_endtime]], you can first sort the list based on startimes and then iterate over it to check if it is overlapping.
You can use the below code to check for it:
overlapping = [ [x,y] for x in intervals for y in intervals if x is not y and x[1]>y[0] and x[0]<y[0] ] for x in overlapping: print '{0} overlaps with {1}'.format(x[0],x[1])
where intervals is the list of
[starttime,endtime]
To compare the timestamps, convert them to datetime objects using:
from datetime import datetime dateTimeObject = datetime.strptime(timestampString) | https://codedump.io/share/Ul79wW7RatVX/1/how-to-check-if-videos-overlap | CC-MAIN-2017-26 | refinedweb | 172 | 51.52 |
import "github.com/hanwen/go-fuse/fuse/nodefs"
This package is deprecated. New projects should use the package "github.com/hanwen/go-fuse/v2/fs" instead.
The nodefs package offers a high level API that resembles the kernel's idea of what an FS looks like. File systems can have multiple hard-links to one file, for example. It is also suited if the data to represent fits in memory: you can construct the complete file system tree at mount time
api.go defaultfile.go defaultnode.go dir.go files.go files_linux.go fsconnector.go fsmount.go fsops.go fuse.go handle.go inode.go lockingfile.go memnode.go syscall_linux.go
type File interface { // Called upon registering the filehandle in the inode. This // is useful in that PathFS API, where Create/Open have no // access to the Inode at hand. SetInode(*Inode) // The String method is for debug printing. String() string // Wrappers around other File implementations, should return // the inner file here. InnerFile() File Read(dest []byte, off int64) (fuse.ReadResult, fuse.Status) Write(data []byte, off int64) (written uint32, code fuse.Status) // File locking GetLk(owner uint64, lk *fuse.FileLock, flags uint32, out *fuse.FileLock) (code fuse.Status) SetLk(owner uint64, lk *fuse.FileLock, flags uint32) (code fuse.Status) SetLkw(owner uint64, lk *fuse.FileLock, flags uint32) (code fuse.Status) // Flush is called for close() call on a file descriptor. In // case of duplicated descriptor, it may be called more than // once for a file. Flush() fuse.Status // This is called to before the file handle is forgotten. This // method has no return value, so nothing can synchronizes on // the call. Any cleanup that requires specific synchronization or // could fail with I/O errors should happen in Flush instead. Release() Fsync(flags int) (code fuse.Status) // The methods below may be called on closed files, due to // concurrency. In that case, you should return EBADF. Truncate(size uint64) fuse.Status GetAttr(out *fuse.Attr) fuse.Status Chown(uid uint32, gid uint32) fuse.Status Chmod(perms uint32) fuse.Status Utimens(atime *time.Time, mtime *time.Time) fuse.Status Allocate(off uint64, size uint64, mode uint32) (code fuse.Status) }
A File object is returned from FileSystem.Open and FileSystem.Create. Include the NewDefaultFile return value into the struct to inherit a null implementation.
NewDefaultFile returns a File instance that returns ENOSYS for every operation.
NewDevNullFile returns a file that accepts any write, and always returns EOF for reads.
NewLockingFile serializes operations an existing File.
LoopbackFile delegates all operations back to an underlying os.File.
NewReadOnlyFile wraps a File so all write operations are denied.
FileSystemConnector translates the raw FUSE protocol (serialized structs of uint32/uint64) to operations on Go objects representing files and directories.
func Mount(mountpoint string, root Node, mountOptions *fuse.MountOptions, nodefsOptions *Options) (*fuse.Server, *FileSystemConnector, error)
Mount mounts a filesystem with the given root node on the given directory. Convenience wrapper around fuse.NewServer
func MountRoot(mountpoint string, root Node, opts *Options) (*fuse.Server, *FileSystemConnector, error)
MountRoot is like Mount but uses default fuse mount options.
func NewFileSystemConnector(root Node, opts *Options) (c *FileSystemConnector)
NewFileSystemConnector creates a FileSystemConnector with the given options.
DeleteNotify signals to the kernel that the named entry in dir for the child disappeared. No filesystem related locks should be held when calling this.
EntryNotify makes the kernel forget the entry data from the given name from a directory. After this call, the kernel will issue a new lookup request for the given name when necessary. No filesystem related locks should be held when calling this.
FileNotify notifies the kernel that data and metadata of this inode has changed. After this call completes, the kernel will issue a new GetAttr requests for metadata and new Read calls for content. Use negative offset for metadata-only invalidation, and zero-length for invalidating all content.
func (c *FileSystemConnector) FileNotifyStoreCache(node *Inode, off int64, data []byte) fuse.Status
FileNotifyStoreCache notifies the kernel about changed data of the inode.
This call is similar to FileNotify, but instead of only invalidating a data region, it puts updated data directly to the kernel cache:
After this call completes, the kernel has put updated data into the inode's cache, and will use data from that cache for non direct-IO reads from the inode in corresponding data region. After kernel's cache data is evicted, the kernel will have to issue new Read calls on user request to get data content.
ENOENT is returned if the kernel does not currently have entry for this inode in its dentry cache.
func (c *FileSystemConnector) FileRetrieveCache(node *Inode, off int64, dest []byte) (n int, st fuse.Status)
FileRetrieveCache retrieves data from kernel's inode cache.
This call retrieves data from kernel's inode cache @ offset and up to len(dest) bytes. If kernel cache has fewer consecutive data starting at offset, that fewer amount is returned. In particular if inode data at offset is not cached (0, OK) is returned.
If the kernel does not currently have entry for this inode in its dentry cache (0, OK) is still returned, pretending that the inode could be known to the kernel, but kernel's inode cache is empty.
func (c *FileSystemConnector) InodeHandleCount() int
InodeCount returns the number of inodes registered with the kernel.
func (c *FileSystemConnector) LookupNode(parent *Inode, path string) *Inode
Follows the path from the given parent, doing lookups as necessary. The path should be '/' separated without leading slash.
func (c *FileSystemConnector) Mount(parent *Inode, name string, root Node, opts *Options) fuse.Status
Mount() generates a synthetic directory node, and mounts the file system there. If opts is nil, the mount options of the root file system are inherited. The encompassing filesystem should pretend the mount point does not exist.
It returns ENOENT if the directory containing the mount point does not exist, and EBUSY if the intended mount point already exists.
Finds a node within the currently known inodes, returns the last known node and the remaining unknown path components. If parent is nil, start from FUSE mountpoint.
func (c *FileSystemConnector) RawFS() fuse.RawFileSystem
Returns the RawFileSystem so it can be mounted.
func (c *FileSystemConnector) Server() *fuse.Server
Server returns the fuse.Server that talking to the kernel.
func (c *FileSystemConnector) SetDebug(debug bool)
SetDebug toggles printing of debug information. This function is deprecated. Set the Debug option in the Options struct instead.
func (c *FileSystemConnector) Unmount(node *Inode) fuse.Status
Unmount() tries to unmount the given inode. It returns EINVAL if the path does not exist, or is not a mount point, and EBUSY if there are open files or submounts below this node.
An Inode reflects the kernel's idea of the inode. Inodes have IDs that are communicated to the kernel, and they have a tree structure: a directory Inode may contain named children. Each Inode object is paired with a Node object, which file system implementers should supply.
AddChild adds a child inode. The parent inode must be a directory node.
Returns any open file, preferably a r/w one.
Children returns all children of this inode.
Files() returns an opens file that have bits in common with the give mask. Use mask==0 to return all files.
FsChildren returns all the children from the same filesystem. It will skip mountpoints.
GetChild returns a child inode with the given name, or nil if it does not exist.
IsDir returns true if this is a directory.
NewChild adds a new child inode to this inode.
Node returns the file-system specific node.
Parent returns a random parent and the name this inode has under this parent. This function can be used to walk up the directory tree. It will not cross sub-mounts.
RmChild removes an inode by name, and returns it. It returns nil if child does not exist.
Print the inode. The default print method may not be used for debugging, as dumping the map requires synchronization.
type Node interface { // Inode and SetInode are basic getter/setters. They are // called by the FileSystemConnector. You get them for free by // embedding the result of NewDefaultNode() in your node // struct. Inode() *Inode SetInode(node *Inode) // OnMount is called on the root node just after a mount is // executed, either when the actual root is mounted, or when a // filesystem is mounted in-process. The passed-in // FileSystemConnector gives access to Notify methods and // Debug settings. OnMount(conn *FileSystemConnector) // OnUnmount is executed just before a submount is removed, // and when the process receives a forget for the FUSE root // node. OnUnmount() // Lookup finds a child node to this node; it is only called // for directory Nodes. Lookup may be called on nodes that are // already known. Lookup(out *fuse.Attr, name string, context *fuse.Context) (*Inode, fuse.Status) // Deletable() should return true if this node may be discarded once // the kernel forgets its reference. // If it returns false, OnForget will never get called for this node. This // is appropriate if the filesystem has no persistent backing store // (in-memory filesystems) where discarding the node loses the stored data. // Deletable will be called from within the treeLock critical section, so you // cannot look at other nodes. Deletable() bool // OnForget is called when the kernel forgets its reference to this node and // sends a FORGET request. It should perform cleanup and free memory as // appropriate for the filesystem. // OnForget is not called if the node is a directory and has children. // This is called from within a treeLock critical section. OnForget() // Misc. Access(mode uint32, context *fuse.Context) (code fuse.Status) Readlink(c *fuse.Context) ([]byte, fuse.Status) // Mknod should create the node, add it to the receiver's // inode, and return it Mknod(name string, mode uint32, dev uint32, context *fuse.Context) (newNode *Inode, code fuse.Status) // Mkdir should create the directory Inode, add it to the // receiver's Inode, and return it Mkdir(name string, mode uint32, context *fuse.Context) (newNode *Inode, code fuse.Status) Unlink(name string, context *fuse.Context) (code fuse.Status) Rmdir(name string, context *fuse.Context) (code fuse.Status) // Symlink should create a child inode to the receiver, and // return it. Symlink(name string, content string, context *fuse.Context) (*Inode, fuse.Status) Rename(oldName string, newParent Node, newName string, context *fuse.Context) (code fuse.Status) // Link should return the Inode of the resulting link. In // a POSIX conformant file system, this should add 'existing' // to the receiver, and return the Inode corresponding to // 'existing'. Link(name string, existing Node, context *fuse.Context) (newNode *Inode, code fuse.Status) // Create should return an open file, and the Inode for that file. Create(name string, flags uint32, mode uint32, context *fuse.Context) (file File, child *Inode, code fuse.Status) // Open opens a file, and returns a File which is associated // with a file handle. It is OK to return (nil, OK) here. In // that case, the Node should implement Read or Write // directly. Open(flags uint32, context *fuse.Context) (file File, code fuse.Status) OpenDir(context *fuse.Context) ([]fuse.DirEntry, fuse.Status) Read(file File, dest []byte, off int64, context *fuse.Context) (fuse.ReadResult, fuse.Status) Write(file File, data []byte, off int64, context *fuse.Context) (written uint32, code fuse.Status) // XAttrs GetXAttr(attribute string, context *fuse.Context) (data []byte, code fuse.Status) RemoveXAttr(attr string, context *fuse.Context) fuse.Status SetXAttr(attr string, data []byte, flags int, context *fuse.Context) fuse.Status ListXAttr(context *fuse.Context) (attrs []string, code fuse.Status) // File locking // // GetLk returns existing lock information for file. GetLk(file File, owner uint64, lk *fuse.FileLock, flags uint32, out *fuse.FileLock, context *fuse.Context) (code fuse.Status) // Sets or clears the lock described by lk on file. SetLk(file File, owner uint64, lk *fuse.FileLock, flags uint32, context *fuse.Context) (code fuse.Status) // Sets or clears the lock described by lk. This call blocks until the operation can be completed. SetLkw(file File, owner uint64, lk *fuse.FileLock, flags uint32, context *fuse.Context) (code fuse.Status) // Attributes GetAttr(out *fuse.Attr, file File, context *fuse.Context) (code fuse.Status) Chmod(file File, perms uint32, context *fuse.Context) (code fuse.Status) Chown(file File, uid uint32, gid uint32, context *fuse.Context) (code fuse.Status) Truncate(file File, size uint64, context *fuse.Context) (code fuse.Status) Utimens(file File, atime *time.Time, mtime *time.Time, context *fuse.Context) (code fuse.Status) Fallocate(file File, off uint64, size uint64, mode uint32, context *fuse.Context) (code fuse.Status) StatFs() *fuse.StatfsOut }
The Node interface implements the user-defined file system functionality
NewDefaultNode returns an implementation of Node that returns ENOSYS for all operations.
NewMemNodeFSRoot creates an in-memory node-based filesystem. Files are written into a backing store under the given prefix.
type Options struct { EntryTimeout time.Duration AttrTimeout time.Duration NegativeTimeout time.Duration // If set, replace all uids with given UID. // NewOptions() will set this to the daemon's // uid/gid. *fuse.Owner // This option exists for compatibility and is ignored. PortableInodes bool // If set, print debug information. Debug bool // If set, issue Lookup rather than GetAttr calls for known // children. This allows the filesystem to update its inode // hierarchy in response to kernel calls. LookupKnownChildren bool }
Options contains time out options for a node FileSystem. The default copied from libfuse and set in NewMountOptions() is (1s,1s,0s).
NewOptions generates FUSE options that correspond to libfuse's defaults.
type TreeWatcher interface { OnAdd(parent *Inode, name string) OnRemove(parent *Inode, name string) }
TreeWatcher is an additional interface that Nodes can implement. If they do, the OnAdd and OnRemove are called for operations on the file system tree. These functions run under a lock, so they should not do blocking operations.
type WithFlags struct { File // For debugging. Description string // Put FOPEN_* flags here. FuseFlags uint32 // O_RDWR, O_TRUNCATE, etc. OpenFlags uint32 }
Wrap a File return in this to set FUSE flags. Also used internally to store open file data.
Package nodefs imports 10 packages (graph) and is imported by 290 packages. Updated 2020-03-11. Refresh now. Tools for package owners. | https://godoc.org/github.com/hanwen/go-fuse/fuse/nodefs | CC-MAIN-2020-16 | refinedweb | 2,330 | 53.37 |
Hi @abhilash did you get the USB stack to work fine from DDR? Would you be kind enough to attach it here ? We are facing the very same issue. It runs fine in Flash, but the DDR build just won't work. We've tried everything from changes in linker file to tcl script to watchdog. Your project will really help us. Thanks.
The released demo just support flash and RAM target, but I think you may use the RAM target as a starting point.
The main differences with a standard “RAM target” are :
− Appropriate linker file
− Macro set-up file that initializes the DRAM chip before the J-link starts download.
they can be both referred to 128MB_DDR2.icf from KINETIS_120MHZ_SC, and K70_MT47H64M16HR-3.mac as well.
Please kindly refer to the following for details.
Hope that helps,
B.R
Kan
Thanks for the information
Kan
I followed your instruction and used linker files from KINETIS_120MHZ_SC for DDR(128MB_ddr.lcf) and ram(MK70FN1M0_ram.lcf). Also we are using initialization file for the DDR (init_twr_k70_ddr2.tcl). But when i loaded and started running the program it exits when TestApp_Init(); function is called .The program runs fine when run from flash memory.
{ I am running USB stack 4.1.1 CDC sample program in High speed config using code warrior IDE10.2 and OSJTAG downloader }
As I know, K70 has two PLLs , PLL1 is dedicated for DDR , and PLL0's output can be used for USB module, I am not sure how you configure them, but in my opinion , PLL0 should output 48MHz for USB application, PLL1 may output as high as possible to meet the DDR2 spec. so please check the PLL initialization first.
Hope that helps,
B.R
Kan
hello kan,
inside the init file pll1 is configured and it is used for ddr configuration. when using DDR memory Sysinit function is not working.For now i have it disabled. Now my doubt is on the linker file . Is there any specific linker file that can be used with USB stack
thanks
Abhi
Hi Abhi,
Sorry for the late response!! I had some issue with my IAR license until I re-installed it. I have implemented the CDC demo running in DDR based on TWR-K70, but It is not only the link file, but also the PLL init function called in the usb stack should be modified to make the usb stack run in DDR.
First , I recommend you start with a SRAM target configuration,
second, just as mentioned before, use mac file to initialize the PLL0 and PLL1 as well as the DDR controller before the IDE load the image.
and choose verify download if you like
then, you may change the sram link file as the following:
you may find the RAM section are in DDR now, but please note don't set 0x08000000 to 0x10000000 as the range, this configuration are used by KINETIS120MHZ_SC, but as the USB stack uses DMA to transmit or receive data, you have to put the section in a location that allow access from both core and eDMA.
and set the vector table in SRAM.
As PLL0 and PLL1 have been initialized in the mac file, there is no need to do it in SYS_Init(), add some code from line 884 in main_kinetis,c
#ifdef SDRAM
/* Initialize clocks to match values from init script */
mcg_clk_hz = 120000000;
pll_1_clk_khz = 150000;
pll_0_clk_khz = mcg_clk_hz / 1000;
#else
#define ASYNCH_MODE /* PLL1 is source for MCGCLKOUT and DDR controller */
#define SDRAM
and #endif at line 924.
Define SDRAM in "derivative,h "
#define SDRAM
After that, run the application , you will see the following view:
and found the cdc device in the device manager:
Hope that helps,
Have a nice weekend!!
B.R
Kan | https://community.nxp.com/t5/Kinetis-Microcontrollers/How-to-load-usb-stack-to-DDR-memory/m-p/231950 | CC-MAIN-2021-43 | refinedweb | 626 | 79.4 |
GitHub: Making Commits
mariel
Updated on
・3 min read
If you’re going to be a developer, you can’t be afraid of commitment.
It's okay. You can do this.
We previously had a brief overview, and talked about making and cloning repositories. Below, we're going to finish our fly-by of GitHub by going over commits.
What is a “commit”?
The “commit” command is used to save your changes to your repository. Each change you make - adding or removing a space, changing a character from lowercase to capital, adding or removing variables - is noted as an update by your computer and a commit will save them to your repo, along with when they were made.
Why commit?
Not only does git help keep straight which version of a program is current or being worked on, git commits preserve the history of its development. Unlike a regular “ctrl+s”, every commit is a snapshot of where the code was at that moment in time, and every message shines a little light on why things were changed. Not only is this helpful for others, it can be helpful for you if you pick up a project you haven’t worked on in a while.
When do I commit?
Commits should be made often and cover small, digestible chunks of code and/or code that is related by a single idea. That is, you don’t need to necessarily commit after defining a variable, but it would be a good idea to commit after writing the function you’re using the variable in.
If you grew up in a time before autosaves, you’ll remember Save Early, Safe Often - it’s a good rule of thumb here. It’s possible to return (or revert) to a prior commit, which is helpful if you’ve just gotten so far round the bend you’re not sure who you are or where you came from anymore. If you’ve been diligent about your commits, you may not have to back up very much. If not, you could end up too close to the beginning and losing a lot of your good progress.
How do I commit?
From the command line, you need to add your changes and then commit them with a message re: what the changes are about. For simplicity in this description, we’re going to use “git add .” which will add all of your changes. (If you’re curious, you can absolutely be more specific: see here and here.) This is called staging a commit.
Once your changes have been added/staged, you need to commit with a helpful message. The command line will look as below. The “-m” denotes “message” and the comment in quotes is what your message will be.
git add .
git commit -m “helpful message here”
What’s a commit message?
Commit messages describe the changes that are made in the commit.
Here, I specified that the change was to “remove unnecessary p tag.” When opening the code to look at the changes, the change that was made matches up with what I said in the comment.
Messages should be short and written in the present tense, not past tense - fix bug, versus fixed bug. Most importantly, the messages should describe the changes that were made. This is easy when you’re making small changes and need to make a brief statement. If you need more room to add context, that can be done too. The most important thing to take away is that the messages will provide explanation and context for others who look at your code later.
Think about reviewing work you did a long time ago - maybe pulling out an old essay from middle school - and seeing scratched out text and changes all over it, making it hard to read. It would be helpful to have some notes explaining why past-you made those changes, right? That’s what you’re doing with commit messages.
Now what
Now you go forth and create things.
What are your favourite resources for beginners?
What resources would you give a new team member who is also starting their first tech job?
Nice article! We often hear people telling us to commit more but not how to commit, it's a great idea to spread good practices around it !
I would also suggest some other points such as why should it be written in present tense, knowing is good, understanding is better !
Thank you!
github.com/pBouillon/git_tutorials... gives 404
Oh sorry, this should be the right one ! github.com/pBouillon/git_tutorials...
Sorry I didn't clarify what was giving 404, your initial link was working, the one that doesn't is the link to emoji_commit_list.md which is now github.com/pBouillon/git_tutorials...
You changed the path from git_tutorials/blob/master/methods/emoji_commit_list.md to
it_tutorials/blob/master/methodology/emoji_commit_list.md
Probably I should have opened an issue :P or submitted a PR
Oh damn, you're right ! Thanks for clarifying, I missed that point ! | https://dev.to/mariel/github-making-commits-3716 | CC-MAIN-2020-10 | refinedweb | 840 | 72.36 |
See also: IRC log
<egombocz> cannot get on the conference line. Anything changed?
<Joanne_Luciano> Agenda Continue discussions from last Thursday (dbSNP etc.) Deciding on extensions of the Translational Medicine Ontology (TMO) - what should be covered in the TMO and what should be left to ad-hoc schemas based on source data? Revisiting the plans for a journal submission
Ram, Metaome -
<Joanne_Luciano> Clinical Decision Support for Personalized Medicine
<Joanne_Luciano> Last week Michel Made progress on converting dbSNP
<matthias_samwald> joanne, could you keep scribing?
<Joanne_Luciano> If Michel talks a little slower :-)
<Joanne_Luciano> I wasn't there last week or for a few works.
<matthias_samwald> (Thanks, Joanne)
<Joanne_Luciano> using the list annotated SNPs from PharmGKB
<matthias_samwald> eUtils web service
<Joanne_Luciano> queries eUtils gets back XML record, from them parses (RDF)
<Joanne_Luciano> fields of intrest were discussed last week
<Joanne_Luciano> looked at SNP record last week (See notes from last week)
<Joanne_Luciano> then generated from 1300 from dbSNP and loaded them into endpoints from email
<michel> one example:
<michel> ncbi entry:
<Joanne_Luciano> I can't tell what he is referring to.
<michel> the XML record:
i can also help with the scribing
maps to can reference several assemblies such as celera genome
<Joanne_Luciano> for example: <Assembly dbSnpBuild="137" genomeBuild="37.3" groupLabel="GRCh37.p5" current="true" reference="true">
<Joanne_Luciano> Assemblies vary by project
MapLoc element refers to the interesting part i.e the variations
has variations based on the mRNA/protein form that it references
in the each FxnSet
<Joanne_Luciano> Function Set: FxnSet
<Joanne_Luciano> Genomic, Contig, mRNA, protein /// 4 different identifers
<matthias_samwald>
M samwald created a whole scale rdf version using the batch export with a limited set of properties
from dbDNP
has an identifier for the exact combination for alleles which is useful, also has xref to bio2rdf (albeit a slightly older version)
this is a new derivative dataset (the bio2rdf conversion could have enough information to derive this)
Next topic: TMO
- Translational Medicine Ontology
Should we continue with TMO or move to something like CIO, or have a higher level ontology (M Samwald's thought) or the option of Schema.org
The challenge with Schema.org may not be complete, e.g SNP may be present, will fine grained information be there?
Use as much of schema.org as possible and create own ontology
the approach in bio2rdf has been to create in its own namespace and then refer to them and then define equivalences
M Dumontier: 2 reasons why it is a problem (went too fast, did'nt catch that)
<ericP> matthias_samwald, were you hoping to have the subset of data which is expressible in schema.org be indexed by google
<ericP> ?
<michel> i. BFO and RO are insufficient to accurately capture the semantics of data in HCLS
<michel> matthias: substantial overlap between TMO and schema.org
<matthias_samwald>
<michel> extensions:
would be good to ask schema.org (google) to ask them to include subclass/sub property
Consensus: Extend Schema.org, switch over to SIO from TMO or some middle ground (correct if this wrong!)
<michel> dataset -
<michel> statistics -
gid is different
<matthias_samwald>
<Joanne_Luciano> Matthias re: paper -- bioinformats journal or medical informatics? (leaning towards medical informatics now)
<Joanne_Luciano> Michel: missing from paper - what are the results we will present as an interesting finding?
<egombocz> apologies - need to logoff, another meeting
<Joanne_Luciano> Mathias - presenting RDF utility is not enough (agreeing)
<Joanne_Luciano> Michel - GWAS linked to Pharmacogenomic -- how big of a problem is it to not be genogyped?
<Joanne_Luciano> idea is from the frequency of a specific allele and outcome - can get an idea of the impact.
thanks
interesting call
This is scribe.perl Revision: 1.136 of Date: 2011/05/12 12:01:43 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) No ScribeNick specified. Guessing ScribeNick: ram Inferring Scribes: ram WARNING: No "Topic:" lines found. Default Present: michel, +1.518.276.aaaa, +1.510.705.aabb, joanne_luciano, Erich, ram, ericP, SimonLin_Marshfie Present: michel +1.518.276.aaaa +1.510.705.aabb joanne_luciano Erich ram ericP SimonLin_Marshfie WARNING: No meeting title found! You should specify the meeting title like this: <dbooth> Meeting: Weekly Baking Club Meeting WARNING: No meeting chair found! You should specify the meeting chair like this: <dbooth> Chair: dbooth Got date from IRC log name: 28 Jun] | http://www.w3.org/2012/06/28-hcls-minutes.html | CC-MAIN-2017-17 | refinedweb | 711 | 50.87 |
On Sat, 1 Feb 1997, Marc Slemko wrote:
> * Satisfy Any can be changed if .htaccess exists
> If you give Satisfy Any in access.conf for a particular directory,
> and have a .htaccess in that directory, Satisfy mode reverts
> to Satisfy All even if the .htaccess has _no_ authentication
> directives.
This should fix this.
Dean
Index: http_core.c
===================================================================
RCS file: /export/home/cvs/apache/src/http_core.c,v
retrieving revision 1.62
diff -c -3 -r1.62 http_core.c
*** http_core.c 1997/02/01 22:03:36 1.62
--- http_core.c 1997/02/02 07:06:35
***************
*** 153,159 ****
conf->sec = append_arrays (a, base->sec, new->sec);
! conf->satisfy = new->satisfy;
return (void*)conf;
}
--- 156,162 ----
conf->sec = append_arrays (a, base->sec, new->sec);
! if( new->satisfy ) conf->satisfy = new->satisfy;
return (void*)conf;
} | http://mail-archives.apache.org/mod_mbox/httpd-dev/199702.mbox/%3CPine.LNX.3.95dg2.970201230553.9750X-100000@twinlark.arctic.org%3E | CC-MAIN-2018-39 | refinedweb | 133 | 56.11 |
Hi Monks
I am trying to use the CHI module but am coming up against erratic behaviour. There may well be good reason for it but I am stumbling along in the dark at the moment and so hope someone can point me in the right direction as to what is going on.
In my main code I create an accessor to the cache. I am using Moose so it looks like:
(I could probably do better than 'Any' and will get to that when I work out what it should be!) ... and to store items:
Note I have experimented with the duration setting with no substantial difference in results.
Now I wrote a short inspector script to tell me what keys were in the cache at the moment when the script is run:
#!/usr/bin/perl
use strict;
use CHI;
my $namespace = 'vbsite';
my $cache = CHI->new( driver => 'Memory', global => 1,serializer => '
+Data::Dumper', namespace => $namespace );
my @keys = $cache->get_keys(namespace => $namespace);
print "Content-type: text/plain\n\n";
print "there are ".scalar(@keys)." keys in namespace $namespace:\n";
print $_."\n" foreach(@keys);
[download]
So I run the main program until it seems to be caching, then I run the inspector script. Sure enough I get something like:
there are 3 keys in namespace vbsite:
{"dir":"/var/www/html","filename":"intro.xml":"panel_id":"1"}
{"dir":"/var/www/html","filename":"menu.xml":"panel_id":"2"}
{"dir":"/var/www/html","filename":"home.xml":"panel_id":"3"}
[download]
which looks good. But then if I keep running the inspector script (without changing anything at all, without running the main program again or any other programs) I get a different result:
there are 2 keys in namespace vbsite:
{"dir":"/var/www/html","filename":"intro.xml":"panel_id":"1"}
{"dir":"/var/www/html","filename":"menu.xml":"panel_id":"2"}
[download]
... and in fact if I run it over and over, I seem to get a different result each time. Now I dont know if it is designed so ->get_keys only returns a key if the cache would have returned a cached value if it was requested at that moment, and does not return a key if the cache decides this particular request should miss the cache.
However it is causing me problems because, when my main code performs a 'save' operation I would like to inspect all keys in the cache and remove those items that would be affected by the saved file. Because ->get_keys does not seem to reliably return all the keys in the cache this doesnt seem to work
Alternatively perhaps it *should* consistently return all keys in the cache, and there is something wrong with my implementation? Though my implementation really is not complex and it is difficult to see what I could be doing to cause the error. Can anyone shed any light? Your help greatly appreciated...
In reply to some caching issues using CHI
by tomgracey. | https://www.perlmonks.org/?node_id=3333;parent=1058170 | CC-MAIN-2018-39 | refinedweb | 485 | 66.57 |
Notes on generators¶
Numba recently gained support for compiling generator functions. This document explains some of the implementation choices.
Terminology¶
For clarity, we distinguish between generator functions and
generators. A generator function is a function containing one or
several
yield statements. A generator (sometimes also called “generator
iterator”) is the return value of a generator function; it resumes
execution inside its frame each time
next() is called.
A yield point is the place where a
yield statement is called.
A resumption point is the place just after a yield point where execution
is resumed when
next() is called again.
Function analysis¶
Suppose we have the following simple generator function:
def gen(x, y): yield x + y yield x - y
Here is its CPython bytecode, as printed out using
dis.dis():
7 0 LOAD_FAST 0 (x) 3 LOAD_FAST 1 (y) 6 BINARY_ADD 7 YIELD_VALUE 8 POP_TOP 8 9 LOAD_FAST 0 (x) 12 LOAD_FAST 1 (y) 15 BINARY_SUBTRACT 16 YIELD_VALUE 17 POP_TOP 18 LOAD_CONST 0 (None) 21 RETURN_VALUE
When compiling this function with
NUMBA_DUMP_IR set to 1, the
following information is printed out:
----------------------------------IR DUMP: gen---------------------------------- label 0: x = arg(0, name=x) ['x'] y = arg(1, name=y) ['y'] $0.3 = x + y ['$0.3', 'x', 'y'] $0.4 = yield $0.3 ['$0.3', '$0.4'] del $0.4 [] del $0.3 [] $0.7 = x - y ['$0.7', 'x', 'y'] del y [] del x [] $0.8 = yield $0.7 ['$0.7', '$0.8'] del $0.8 [] del $0.7 [] $const0.9 = const(NoneType, None) ['$const0.9'] $0.10 = cast(value=$const0.9) ['$0.10', '$const0.9'] del $const0.9 [] return $0.10 ['$0.10'] ------------------------------GENERATOR INFO: gen------------------------------- generator state variables: ['$0.3', '$0.7', 'x', 'y'] yield point #1: live variables = ['x', 'y'], weak live variables = ['$0.3'] yield point #2: live variables = [], weak live variables = ['$0.7']
What does it mean? The first part is the Numba IR, as already seen in
Stage 2: Generate the Numba IR. We can see the two yield points (
yield $0.3
and
yield $0.7).
The second part shows generator-specific information. To understand it we have to understand what suspending and resuming a generator means.
When suspending a generator, we are not merely returning a value to the
caller (the operand of the
yield statement). We also have to save the
generator’s current state in order to resume execution. In trivial use
cases, perhaps the CPU’s register values or stack slots would be preserved
until the next call to next(). However, any non-trivial case will hopelessly
clobber those values, so we have to save them in a well-defined place.
What are the values we need to save? Well, in the context of the Numba Intermediate Representation, we must save all live variables at each yield point. These live variables are computed thanks to the control flow graph.
Once live variables are saved and the generator is suspended, resuming the generator simply involves the inverse operation: the live variables are restored from the saved generator state.
Note
It is the same analysis which helps insert Numba
del instructions
where appropriate.
Let’s go over the generator info again:
generator state variables: ['$0.3', '$0.7', 'x', 'y'] yield point #1: live variables = ['x', 'y'], weak live variables = ['$0.3'] yield point #2: live variables = [], weak live variables = ['$0.7']
Numba has computed the union of all live variables (denoted as “state variables”). This will help define the layout of the generator structure. Also, for each yield point, we have computed two sets of variables:
- the live variables are the variables which are used by code following the resumption point (i.e. after the
yieldstatement)
- the weak live variables are variables which are del’ed immediately after the resumption point; they have to be saved in object mode, to ensure proper reference cleanup
The generator structure¶
Layout¶
Function analysis helps us gather enough information to define the layout of the generator structure, which will store the entire execution state of a generator. Here is a sketch of the generator structure’s layout, in pseudo-code:
struct gen_struct_t { int32_t resume_index; struct gen_args_t { arg_0_t arg0; arg_1_t arg1; ... arg_N_t argN; } struct gen_state_t { state_0_t state_var0; state_1_t state_var1; ... state_N_t state_varN; } }
Let’s describe those fields in order.
- The first member, the resume index, is an integer telling the generator at which resumption point execution must resume. By convention, it can have two special values: 0 means execution must start at the beginning of the generator (i.e. the first time
next()is called); -1 means the generator is exhausted and resumption must immediately raise StopIteration. Other values indicate the yield point’s index starting from 1 (corresponding to the indices shown in the generator info above).
- The second member, the arguments structure is read-only after it is first initialized. It stores the values of the arguments the generator function was called with. In our example, these are the values of
xand
y.
- The third member, the state structure, stores the live variables as computed above.
Concretely, our example’s generator structure (assuming the generator function is called with floating-point numbers) is then:
struct gen_struct_t { int32_t resume_index; struct gen_args_t { double arg0; double arg1; } struct gen_state_t { double $0.3; double $0.7; double x; double y; } }
Note that here, saving
x and
y is redundant: Numba isn’t able to
recognize that the state variables
x and
y have the same value
as
arg0 and
arg1.
Allocation¶
How does Numba ensure the generator structure is preserved long enough? There are two cases:
- When a Numba-compiled generator function is called from a Numba-compiled function, the structure is allocated on the stack by the callee. In this case, generator instantiation is practically costless.
- When a Numba-compiled generator function is called from regular Python code, a CPython-compatible wrapper is instantiated that has the right amount of allocated space to store the structure, and whose
tp_iternextslot is a wrapper around the generator’s native code.
Compiling to native code¶
When compiling a generator function, three native functions are actually generated by Numba:
- An initialization function. This is the function corresponding to the generator function itself: it receives the function arguments and stores them inside the generator structure (which is passed by pointer). It also initialized the resume index to 0, indicating that the generator hasn’t started yet.
- A next() function. This is the function called to resume execution inside the generator. Its single argument is a pointer to the generator structure and it returns the next yielded value (or a special exit code is used if the generator is exhausted, for quick checking when called from Numba-compiled functions).
- An optional finalizer. In object mode, this function ensures that all live variables stored in the generator state are decref’ed, even if the generator is destroyed without having been exhausted.
The next() function¶
The next() function is the least straight-forward of the three native functions. It starts with a trampoline which dispatches execution to the right resume point depending on the resume index stored in the generator structure. Here is how the function start may look like in our example:
define i32 @"__main__.gen.next"( double* nocapture %retptr, { i8*, i32 }** nocapture readnone %excinfo, i8* nocapture readnone %env, { i32, { double, double }, { double, double, double, double } }* nocapture %arg.gen) { entry: %gen.resume_index = getelementptr { i32, { double, double }, { double, double, double, double } }* %arg.gen, i64 0, i32 0 %.47 = load i32* %gen.resume_index, align 4 switch i32 %.47, label %stop_iteration [ i32 0, label %B0 i32 1, label %generator_resume1 i32 2, label %generator_resume2 ] ; rest of the function snipped
(uninteresting stuff trimmed from the LLVM IR to make it more readable)
We recognize the pointer to the generator structure in
%arg.gen.
The trampoline switch has three targets (one for each resume index 0, 1
and 2), and a fallback target label named
stop_iteration. Label
B0
represents the function’s start,
generator_resume1 (resp.
generator_resume2) is the resumption point after the first
(resp. second) yield point.
After generation by LLVM, the whole native assembly code for this function may look like this (on x86-64):
.globl __main__.gen.next .align 16, 0x90 __main__.gen.next: movl (%rcx), %eax cmpl $2, %eax je .LBB1_5 cmpl $1, %eax jne .LBB1_2 movsd 40(%rcx), %xmm0 subsd 48(%rcx), %xmm0 movl $2, (%rcx) movsd %xmm0, (%rdi) xorl %eax, %eax retq .LBB1_5: movl $-1, (%rcx) jmp .LBB1_6 .LBB1_2: testl %eax, %eax jne .LBB1_6 movsd 8(%rcx), %xmm0 movsd 16(%rcx), %xmm1 movaps %xmm0, %xmm2 addsd %xmm1, %xmm2 movsd %xmm1, 48(%rcx) movsd %xmm0, 40(%rcx) movl $1, (%rcx) movsd %xmm2, (%rdi) xorl %eax, %eax retq .LBB1_6: movl $-3, %eax retq
Note the function returns 0 to indicate a value is yielded, -3 to indicate
StopIteration.
%rcx points to the start of the generator structure,
where the resume index is stored. | https://numba.readthedocs.io/en/stable/developer/generators.html | CC-MAIN-2020-40 | refinedweb | 1,475 | 56.96 |
erik scheirer
June 6, 2007 at 1:55 am
Super job! Everything worked perfect! Thanks!
drain
June 15, 2007 at 10:02 pm.
evil drain
June 19, 2007 at 4:24 pm
Thanks drain. I’ve updated the code sample accordingly. SimpleMessageListenerContainer implements IDisposable, so I’ve wrapped it in a using block (I used Reflector to ensure the Dispose method calls Shutdown.)
remark
June 19, 2007 at 9:42 pm?
Tao
August 6, 2007 at 7:22 pm
This was very useful, thanks for the example!
derek
August 9, 2007 at 12:26 am
August 16, 2007 at 5:51 pm
Suma,
I’ve not seen this problem. Might be worth logging the issue on the ActiveMq User Forum.
remark
August 20, 2007 at 9:46 pm
Great article! I followed the mentioned steps and everything worked perfectly as expected.
Quick question: is there a way (or an article) to get similar notifications on a web (.aspx) page rather than console window?
Thanks in advance.
dash
November 14, 2007 at 12:53 am.
remark
November 22, 2007 at 8:54 am
Herman
February 27, 2008 at 3:29 pm
To compile the Program class, you need to have the Listener class in the same namespace. So, check to see that you have added a Listener class to the ListenerConsole project.
remark
February 27, 2008 at 10:51 pm
Thanks a lot. Working now as expected.
Great article.
Herman
Herman
February 27, 2008 at 11:17 pm
Great example!
The sender program does not exit properly after sending the message.
isaac
March 28, 2008 at 3:16 am
isaac,
Thanks for the feedback. Do you have any more information about how it’s not exitting properly?
remark
March 28, 2008 at 11:33 am
This is a fantastic example! Very useful.
I have the same problem as Isaac. The sender program doesn’t terminate after the message is sent. Cntl C is required to stop the Sender program.
Mark
April 2, 2008 at 4:15 am
April 11, 2008 at 7:55 am
Gerald,
ActiveMQ topics can be used via NMS. I posted an article about publish-subscribe using NMS and ActiveMQ that shows topics being used. Click here to read the full article.
remark
April 11, 2008 at 10:04 am
[…] ActiveMQ via .NET Messaging Overview […]
ActiveMQ and NMS (via Spring.NET) Resources « HSI Developer Blog
April 15, 2008 at 8:09 pm
Kpatel
June 3, 2008 at 12:05 am?
Marty
June 13, 2008 at 1:06 am.
Ben
July 9, 2008 at 2:03 am.
Alex
July 17, 2008 at 1:34 am
Danish Ahmed
July 24, 2008 at 2:01 pm
Hi, I am having a problem running this example, the listener is not receiving all the messages, am I missing some kind of configuration?
thanks for your help
David Kepes
August 5, 2008 at 1:20 am
Hello everybody,
this is a very nice introduction in the activeMQ.
i’ve tested it with the Apache 5.1.0 version and it works :-)
Thanks in Advance for this information :-)
best regards from germany
Jan Bludau
Jan Bludau
August 5, 2008 at 3:13 pm);
mark
August 25, 2008 at 8:02 pm ?
feno
September 16, 2008 at 11:13 pm;
feno
September 17, 2008 at 8:56 am
Excellent
Pradeep Nair
September 25, 2008 at 11:34 am
Thank for the code.
I have already test the code and its working. But my problem is I want to list all the queues in active MQ.
JuliusR
October 13, 2008 at 10:38 am
hi,
this code is pretty useful. But i want to have the ListenerConsole
as a windows service with auto startup. Can you please help me, how to do it?
Thanks in advance
Prasad
November 14, 2008 at 2:52 pm
NZorn
November 24, 2008 at 12:37 pm
Great stuff!
I’ve added a link to the Articles section of the ActiveMQ website…
James Strachan
December 16, 2008 at 11:45 am.
Gleb
January 5, 2009 at 3:11 pm
sorry, my fault. Moved occasionally Console.Read from using block.
Thank you very much one more time.
Gleb
January 5, 2009 at 4:08 pm
[…] are some examples using the Spring framework (same as […]
Getting started with ActiveMQ « Ben Biddington
February 4, 2009 at 4:24 pm)
Rod
February 21, 2009 at 7:16 pm
Rod
February 21, 2009 at 7:18 pm.
Karthikeyan Manickam.
February 24, 2009 at 9:47 am
To Rod: Got same problem after getting newer 1.1.0.0 snapshot from svn, but fortunately I still have working older version. NMS 1.1 seems to be bit unstable through the time. Like ActiveMQ…
Anthavio Lenz
March 3, 2009 at 2:14 pm
With .Net and ActiveMQ, is there any possible way to send/receive messages via port with security protocol? I am looking the way for a long time. Thank you.
David
March 27, 2009 at 8:19 am.
Sam
April 26, 2009 at 6:17 am
Should have mentioned, the error is: “unable to connect…machine actively refused connection….{IPv6 of localhost}:61616
Sam
April 26, 2009 at 6:19 am:49 pm:50 pm
Thank you, very useful article.
But I am Having trouble with listener, it doesn’t receive messages.
I checked send queue is ok..
Can anyone help for me?
Thanks in advance..
robin
June 17, 2009 at 3:42 am
Sorry.. I solved it.. I didn’t change any program source… Thanks
robin
June 18, 2009 at 2:58 am
works very well…except that you need to get the ActiveMQ and NMS from Spring.NET-1.3.0-RC1\lib\Net\2.0 and change the it to …
using Apache.NMS.ActiveMQ;
using Spring.Messaging.Nms;
Kash
August 25, 2009 at 8:48 pm
Hi, first of all, thank you very much for this example it is really usefull.
I have to implement this in a Mobile application, and I am having lots of problems with the references, I replaced the cf references with the references not cf, but it throws an error on the System reference (apparently System.Uri has some differences in cf) I’ve tried to add a reference to System.dll from not cf Framework and, of course, it didn’t work.
I’m stacked and I can’t seem to move on. This happens with the main(). I’ll go on with the rest, but it won’t work unless I solve (or you help me sove) this issue.
Greatings.
Jose.
Jose
September 16, 2009 at 3:22 pm
I had to recompile all the libraries that I use, in order to use then with compact framework, but it’s working.
Thanks again for the example.
Greatings
Jose
Jose
September 17, 2009 at 9:32 pm
Jose could you help me? I am trying to rebuild the libraries in orer to be able to use ActiveMq on CF but I am able to build it only for .net 2.0/3.5 (I don’t understand why), how ever I found the Apache.Nms.dll for .net cf-2.0 and now I am trying to solve the System.Uri problem, but with no sucess. Can you tell me how you did it (adyc(_at_)email.cz)?
Andrei
November 26, 2009 at 9:58 am
My project requires me to connect to a Websphere JMS queue. I have not included the spring libraries yet (will do that in the morning). I have been successful connecting to a local WAS-CE queue, but all kinds of errors conneting to a full Websphere installation on a SuSe 9 box.
First question: Is it even possible to use this article for connecting to Websphere ?
Second: Any hints or URLs with more information ?
Thanks in Advance…
Jerry
December 11, 2009 at 5:04 am
Do you have any samples of doing a Send and Receive using NMSTemplates.
I was looking for examples where in my client would make a request and then call receive to get the response [not using an update stream]
Amjad K
January 19, 2010 at 7:45 pm
Do you have Apache.NMS.ActiveMQ.dll and Apache.NMS.dll compiled for compact framework.net.
I’m trying to rebuild alla but i have still problem…..
stefano
January 21, 2010 at 7:18 pm
Hi,
I already had a working AMQ, but with a Java listener. That setup works fine. If I use the same AMQ, but with a c# listener, it doesn’t receive any messages at all.
I used the listener and sender examples from above. When I launch the listener, I can see in the AMQ admin that I have a new consumer. When I send a message with the sender (or thru jconsole), I can see that a message is received and sent again. But the listener doesn’t even come into the OnMessage() method.
I am using the same host/port for the AMQ in both the listener and the sender, as well as the same destination string.
Any ideas what can be wrong?
Thanks in advance,
Mark
Mark
January 22, 2010 at 5:27 pm
I have the exact same problem, did u get this resolved?
Thanks,
Ricky
Ricky
March 5, 2010 at 8:07 pm
just got this working so not sure , but i think i had to create the “test” queue ( or was it already there? ) on
seems to fail silently with a non existand queue atleast with activemq 5.3 ( new to this so dont know if that is correct or not )
Morten
March 9, 2010 at 6:51 pm
If you have this problem, please make sure that you add an assembly reference to Spring.Data. That solved it for me.
phw
June 21, 2010 at 7:33 pm
Hi,
This one works fine but are there examples showing how to implement mulithreading using activeMQ.
Thanks in Advance,
Solomon
solomon
March 2, 2010 at 7:17 am
Hi, I tried to build a example as yours by using VB.NET. and I have some questions for you. Please kind help me if you’ve experienced these issues in C#.
1. The listener got the excepetions as the followings when retrieved a message from a queue and the event didn’t be raised.
‘System.ArgumentException’ occurred in Spring.Core.dll
‘Spring.Messaging.Nms.Listener.Adapter.ListenerExecutionFailedException’ occurred in Spring.Messaging.Nms.dll
2. I can make it to send a message to ActiveMQ, however I don’t know how to specify the message header. Any idea about how to do this?
Thanks in Advance for your kind help.
Jim
Jim
April 16, 2010 at 7:54 am
Thanks for the very helpful posts about using ActiveMQ with NMS and Spring.NET. I find that there are so many wonderful products from Apache, and they are sometimes just out of reach for .NET users. In this case both NMS and Spring.NET NMS support have really scored a home run. I created a quick POC, and I used Camel routes to follow some basic EIPs, given that Camel runs inside the broker – Load balancing for free!
Any thoughts or recommendations on best practices for message type definition?
@solomon: if you are trying to have many threads consume from a queue and assuming you only want each message to be consumed by one consumer, you would simply start another instance of your listener – ActiveMQ automatically distributes messages to only one consumer. If you want many listeners of the same type to receive the message (this is a much rarer case), you can use the multicast EIP defined in Camel.
Once again, thanks for the brilliant tutorials!
Chet
Chet
April 19, 2010 at 6:29 pm
Hi,
I get exception error when i’m running Sender
Cannot access a disposed object.
Object name: ‘System.Net.Sockets.NetworkStream’.
wee
June 29, 2010 at 4:10 am
[…] Home NMS Download ActiveMQ n .NET Request Response with NMS Publish n Subscribe with NMS Transactional Messaging with […]
Getting started with ActiveMQ « The Extremist Programmer
July 12, 2010 at 3:38 pm
This example works fine and I’m having success sending a text message from C# to ActiveMQ. In turn I’m receiving the message on a Java client.
Great!
Now I want to send an object and ultimately an array of objects. For now I have a simple “stock” object (name, symbol, high, low, volume).
….
Stock stock = new Stock();
stock.Name = “International Business Machines”;
stock.Symbol = “IBM”;
stock.high = 167.55m;
stock.low = 144.55m;
stock.volume = 12345;
IObjectMessage omsg = prod.CreateObjectMessage(stock);
prod.Send(omsg, MsgDeliveryMode.NonPersistent, MsgPriority.High, TimeSpan.MinValue);
….
The message is sent and I do recieve it on the java client, however the body is null. I am wondering if you or anyone have any suggestions.
Regards …
Bob Tierney
July 15, 2010 at 5:12 pm
Where do I find a reference to ActiveMQ dll?
I tried to look in Spring .Net 1.3 framework and also in Apache.NMS-1.3.0-bin but could not find it. Please let me know. Thanks
Nikhil
August 29, 2010 at 12:45 am
never mind found it,
It can be downloaded from here:
Nikhil
August 29, 2010 at 12:52 am
no it cannot be downloaded from there. none of those downloads have an ActiveMQ.dll in them!
vince
December 21, 2010 at 7:36 pm
Is it possible to get in C# the list of all topics on the broker? Exactly what does DestinationSource in java
Roxy
September 3, 2010 at 9:57 am
I cannot find the ActiveMQ.dll nor can I find the NMS.dll. These DLL’s are not found in the bin\net\2.0\debug folder of Spring.NET. None of the Spring.NET versions have these. I really need some help here. I am missing something here. the “using ActiveMQ;” reference in the code will not resolve without the proper DLL. Please help.
vince
December 21, 2010 at 7:31 pm
Vince,
to overcome these problems add Apache.NMS.dll and Apache.NMS.ActiveMQ.dll which can be found at the folder lib\net\”your version”
Afterwards type in for the program class the using directives
using Apache.NMS;
using Apache.NMS.ActiveMQ;
using Spring.Messaging.Nms;
using Spring.Messaging.Nms.Listener;
and for the listener-class:
using Spring.Messaging.Nms.Core;
using Apache.NMS
and rename NMS.IMessage at the listener class to Apache.NMS.IMessage
This did it for me (used the most up-to-date ActiveMQ and Spring.NET)
Tyrex
February 10, 2011 at 1:07 pm
I cannot find activemq.dll and nms.dll in bin folder of the zip..
simmy
March 10, 2011 at 11:58 am
Hello, i try to run this code in my network, i run Listener in my win2008 ane sender in my win7, i try do this in the Sender:
private const string Uri = “tcp://192.168.4.29:61616″;
But its crash, how i do to communicate in my network?
Marcio Althmann
April 29, 2011 at 9:44 pm
Hello :) i solve the problem in my last comment :).
But i have one more doubt.
I need the Listener send one message to sender, a response message, and the Sender execute one event when recieve the response.
Its possible?
Marcio Althmann
May 2, 2011 at 3:09 pm
i have an application that sends smses using AT commands in vb.net via a 3g modem.the application works fine but when i send bulk smses it fails. someone told me to use queues so that the modem is not overwhelmed.how can i use ActiveMQ queues to send messages to the 3g modem on a com port.
Ignatius
June 2, 2011 at 3:43 pm
Hi, I am novice to Messaging Q…. i tried this with C# working great. But I need this to work in VB.Net and I tried to do the above exact code with VB.Net…and unfortunately the console cant display what he had listened but then the ActiveMQ host had showed that it had dequeued the message.
Kindly help me where could possibly wrong. Thanks
Highly appreciate it.
Malini
Malini
December 12, 2012 at 11:40 am
Malini, I have exactly the same problem as you.
In a console application works perfect. But I have to move to a WinForms application.
Someone who can help?
Thanks,
Alejandro.
Alejandro
February 1, 2013 at 12:30 pm
Hi tried that “The Listener” part in Winform. Can any one help me.
Rashad
Rashad
March 30, 2013 at 2:56 am
Thanx for so excelent example. It worked like a charm.
Carlos Daniel
January 7, 2014 at 6:50 pm | http://remark.wordpress.com/articles/messaging-with-net-and-activemq/ | CC-MAIN-2014-52 | refinedweb | 2,797 | 75.4 |
Detect and decode QR codes using a Vue.js component
vue-qrcode-reader
A Vue.js component, accessing the device camera and allowing users to read QR codes, within the browser.
Visit the page below to scan a QR* code through your camera and read on your display where it leads to. The user has to be asked for camera access permission first and the camera stream has to be loaded. Demo Page
* Click the link if you want to know more about building a QR scanner in a React Native Camera Tutorial*.
How it works
Once a stream from the user's camera is loaded, it is displayed and continuously scanned for QR codes. Results are indicated by the
decode event.
Events
decode
locate- is an array of coordinates (for example
{ x: 278, y: 346 }) of the QR codes corners, emitted whenever the coordinates change or no QR code is detected anymore, resulting in an empty array payload.
init- emitted as soon as the component is mounted
Camera access permission can't really be requested a second time. Once denied, it can only be re-granted in the browser settings.
It might take a while before the component is ready and the scanning process starts. The user has to be asked for camera access permission first and the camera stream has to be loaded.
Installation & Usage
yarn add vue-qrcode-reader # or npm install --save vue-qrcode-reader
Register component globally:
import Vue from 'vue' import VueQrcodeReader from 'vue-qrcode-reader' Vue.use(VueQrcodeReader)
A CSS file is included when importing the package. You may have to setup your bundler to embed the CSS in your page.
Note: In Chrome, this component only works with HTTPS (or localhost)
This project is open-source under an MIT License. | https://vuejsfeed.com/blog/detect-and-decode-qr-codes-using-a-vue-js-component | CC-MAIN-2019-43 | refinedweb | 298 | 63.09 |
Content uploaded by Christoph Lorke
Author content
All content in this area was uploaded by Christoph Lorke on Mar 18, 2019
Content may be subject to copyright.
AFTER THE END OF “LITTLE MOSCOW”:
MEMORIES, (RE)CONSTRUCTION, AND
APPROPRIATION OF SPACE IN WÜNSDORF
Christoph Lorke
Department of History
Westfälische Wilhelms University, Münster, German
lived in close proximity to the Russians. In the German Democratic Republic,
the (limited) real and imagined encounters, interactions, and perceptions of the
“other” were highly determined by traditional images, and were most likely inu-
enced by the tabooed ofcial discourse of “occupiers” vs. “friends”. This ambivalent
potpourri of different memorial dimensions has strongly shaped negotiations of
the past and remembrance of the transition period (1989/1990–1994), as well as
of the post-Soviet/Russian phase up to the present. By analyzing individual and
collective modes of handling a problematic and highly conictual military force, as
well as the German Democratic Republic’s past, different ways of (re)constructing
and appropriating the post-military space become apparent.
Keywords: Cold War, German Democratic Republic’s past, German reunication,
identity, (contested) memory, military heritage, otherness, space
On August 31, 1994, Matvei Prokopevich Burlakov, the last Commander-in-
Chief of the Group of Soviet Forces in Germany, reported to President Boris
Yeltsin: “The intergovernmental treaty regarding the conditions of the tempo-
rary residence of Russian troops and the withdrawal modalities are fullled.…
Today was the last day of the past” (König 2010). According to Article 4 of
the “Two Plus Four Treaty” (“Treaty on the Final Settlement with Respect to
Germany”, September 12, 1990), the Soviet Union was obliged to withdraw its
troops stationed in East Germany within four years, i.e. by the end of 1994. On
August 31, the largest relocation of troops during peacetime in history, which
brought about an unprecedented demilitarization of land and property, was
realized four months earlier than originally planned. The Western Group of
20
Christoph Lorke
Forces1 was considered an elite unit of the Soviet Army and included 550,000
people, of whom 380,000 were members of the army and 170,000 were civilians
(among whom there were 90,000 children). The troops were based in more than
one thousand locations all over East Germany. The country was considered an
immensely important geostrategic, military, and, not least, symbolic-political
forward post, located right on the Iron Curtain.2 There were many important
military bases,3 and many of them4 in the immediate vicinity of East Berlin.
One of the main reasons for this military cordon was to be ready to quell po-
tential riots, as happened when the Group of Soviet Forces in Germany helped
suppress the Uprising of 1953 in East Germany (Fig. 1).
Figure 1. Western Group of Forces in the German
Democratic Republic, October 3, 1990 (Naumann
1996 [1993]: 345).
Folklore 70 21
Memories, (Re)Construction, and Appropriation of Space in Wünsdorf
By far, the largest number of troops were based in Wünsdorf. Since 1954 the
headquarters of the high command of the Soviet Forces in Germany had been
situated in this small town, less than fty kilometers south of Berlin. Wüns-
dorf was a divided – military and civilian – location during the Cold War. The
gures vary, but it can be assumed that between 40,000 and 70,000 soldiers
and civilians were living and working there. Thus, the place was an immensely
important strategic outpost and, because of its location close to the Cold War’s
geographical border, the Western Group of Forces were regarded as the “chosen
ones”, “the proud and favorite children” of the entire Soviet Army.5
When the last soldiers left in 1994, a 600-hectare area with tens of thousands
of rounds of ammunition and explosive ordnance remained, including almost
680 buildings, 45,000 cubic meters of rubbish, waste oil, paint, chemicals, bat-
teries, used tires, and asbestos, as well as 404 cats, twenty-six dogs, one goat,
and one wild sheep (Kaiser & Herrmann 2010 [1993]: 199–200). In the common
parlance of the locals, the military area of Wünsdorf was generally known as
“Little Moskwa” or the “Forbidden City” (Verbotene Stadt). With few exceptions,
natives were not allowed to enter this zone and the whole settlement, includ-
ing the daily life of the Soviet families, was taboo. Nevertheless, living in close
proximity led to the fact that the Russians were omnipresent in the daily lives
of the German residents before the transition period (1989–1994). The result
was the emergence of conictual situations and memories, which – as has been
discussed regarding other examples of Soviet military bases in the German
Democratic Republic (GDR) – have often lasted until the present time (e.g. von
Wrochem 2003). As a consequence, noteworthy tensions between the collective
and communicative memory, on the one side, and the public commemorative
culture, on the other, could be observed (for denitions of the collective and
communicative memory, see Assmann 1997 [1992]; Welzer 2002; Erll 2005).
By far the largest base of Soviet/Russian soldiers prior to 1994, the military
Figure 2. General Matvei P. Burlakov
and Manfred Stolpe. Wünsdorf, June 11,
1994 (Gehrke 2008: 74).
22
Christoph Lorke
district of Wünsdorf appeared in many respects to be a “non-place”, with its
distorted, inconclusive relationship between history and identity (Augé 1992).
This article discusses the memorial dimension of the Soviet/Russian past
in Wünsdorf, as well as the symbolic (re-)construction and the collective and
individual appropriation of this particular space after the Soviet/Russian with-
drawal in 1994. By analyzing hegemonic forms of public (primarily involving
politics and the media) and individual remembrance of the “foreign” Soviet/
Russian past within the (post-)socialist GDR society (Obertreis & Stephan
2009), the social, discursive, and symbolic (re-)shaping of space and its symbolic
(pre-)determination can be illustrated (Assmann 2009; Keller 2016). Focusing
on these aspects, Wünsdorf exemplies double-layered, closely intertwined ne-
gotiations with a conictual “problematic” past with regard to 1) the GDR as
a whole and 2) the Soviet/Russian occupiers as “foreign” forces. This contribution
deals with the different modes of managing conictual and dissonant heritage
in the individual and broader political and public dimensions (Tunbridge &
Ashworth 1996; for the relation between cultural heritage and war, see Sö-
rensen & Viejo-Rose 2015) by focusing on the following questions: how did the
long-standing presence of the “foreign” shape the remembrance of Wünsdorf’s
recent past? How do certain layers of memory interact with each other? What
kind of “master narratives” of that time were (and are) dominant, and why?
How can German and Russian perspectives be integrated when dealing with
the still “smoking” past (Tuchman 1964)?
To answer these questions, I analyzed research, scholarly and popular publi-
cations on the matter, and media narratives since 1990. Furthermore, in spring
and summer 2016, I conducted twenty interviews with German contemporary
witnesses. I contacted the interview participants through a press call that
was distributed via local media.6 The call explicitly asked for witnesses who
remembered not only the process of withdrawal but also the time before. Thus,
most of the interviewees were – and, in most cases, still are – local residents.
The guided telephone interviews usually lasted one or two hours.7 The oldest
interviewee was born in 1929, and the youngest in 1954. This range allowed
for further insights regarding the relationship between generations and space,8
its different symbolic constructions, performances, and acquisitions, as well as
the generational temporalization of the space in question (Grothusen 2014).
Signicantly, nineteen of the twenty people who answered the call were male;
this obvious gender imbalance requires explanation (Leydesdorff 1996). It seems
that the topic of (military) history and its aftermath is much more interesting
for men. Due to traditional, dualistic gender stereotypes and corresponding
attributions regarding “male” and “female” spheres of interest and awareness,
it is also possible that men consider themselves “more important” and “more
Folklore 70 23
Memories, (Re)Construction, and Appropriation of Space in Wünsdorf
competent” witnesses of this time period. The tabooed topic of rape also may
have inuenced the willingness of people to answer the call (von Wrochem
2003: 67–68).9 Thus, the “voluntary” aspect of the call signicantly distorted
the sample. However, this article does not claim to be a representative survey,
but rather a glimpse into the widely encountered patterns of memory and their
presence today. Therefore, a gendered perspective on the story is built into the
study. After a quick glance at the military history of Wünsdorf in the twentieth
century, the paper discusses the circumstances and forms of remembrance of
the process of withdrawal from today’s perspective. In the last chapter, I will
outline the most common ways of dealing with the Soviet/Russian past in the
context of the “conversion” after 1994.
FROM WÜNSDORF TO ВЮНСДОРФ AND BACK: A GARRISON
TOWN AND ITS MILITARY HERITAGE
The history of Wünsdorf as a military site is suspenseful, as well as full of
fractures and new beginnings (for an overview, see Kaiser 1998). Wünsdorf
was a small village with less than 900 inhabitants when an Infantry School
was opened in 1910. During World War I the rst mosque on German territory
was built there at the request of the Ofce for Foreign Affairs, when a camp for
prisoners of war was opened in Wünsdorf. The “Half Moon Camp” housed up
to at least 15,000 Muslim prisoners of war until 1918, mainly Tatars, Indians,
Moroccans, Algerians, and Senegalese. After the end of the war, the camp served
as a shelter for Russian emigrants, mostly Muslim Tatars, many of whom had
decided not to go back to their home country. The camp was nally closed in
1922 and the mosque was torn down two years later because of dilapidation
(Abdullah 1984: 18–20; Höpp 1997). During the Third Reich, the area served
as a military gymnastics school, and was used as a training camp for athletes
to prepare for the Olympic Games in Berlin in 1936. There was an enormous
barracks area, a military training area, and a ring range. Beginning in 1938,
the headquarters of the Supreme Command of the Armed Forces (Oberkom-
mando der Wehrmacht) was situated in Wünsdorf. On April 20, 1945, the area
was occupied by Soviet troops; the command staff and Marshal Georgy Zhukov
stayed there during the nal battle of Berlin. Beginning in 1946, the area was
used by the 1st Belorussian Front.
In February 1954, the place became the headquarters of the High Com-
mand of the Soviet Forces in Germany, and the Soviet military housing rapidly
expanded: 175 local families, 800 people in total, had to leave their houses,
apartments, and property, and were resettled to make way for the Soviet Army
24
Christoph Lorke
and its personnel (Kaiser & Herrmann 2010 [1993]: 138). Elderly citizens still
remember this time as a deep disruption of their personal mobility and lives.10
At this point, the highway F 9611 – by then the longest highway within the GDR
and the most important direct connection to its capital, Berlin – was closed to
transit trafc until 1994, dividing Wünsdorf into two. Ordinary people who
did not have authorized transit permission (propusk) had to make a laborious
detour of more than ten kilometers (Fig. 3).
Henceforth, the military area was closed to GDR civilians, and even the
Socialist Unity Party of Germany’s (Sozialistische Einheitspartei Deutschlands,
SED) ruling elite was not allowed to enter until 1960, when Willi Stoph, the
then Minister of National Defense and subsequently Deputy Prime Minister
of the GDR (1964–1973), paid a visit to the troops. Most GDR citizens were not
aware of the existence, size, and importance of Wünsdorf as a military site and a
control center of the Soviet Army during the Cold War. From there not only was
armored protection organized during the construction of the Berlin Wall under
Marshal Ivan Konev, but also aviation security for the entire GDR airspace
was guaranteed. Both the suppression of the Prague Spring in 1968 and the
change in the GDR government in 1971, when Walter Ulbricht was replaced by
Erich Honecker as the General Secretary of the Central Committee of the ruling
Figure 3. Map of Wünsdorf. Garnisonsmuseum Wünsdorf, March 10, 1994.
Folklore 70 25
Memories, (Re)Construction, and Appropriation of Space in Wünsdorf
party, were coordinated and commanded from Wünsdorf. Doubtless, this place
could be regarded as the st of Soviet policy in the GDR (Kowalczuk & Wolle
2010: 126; for the circumstances of the occupation, see Satjukow 2008; for the
broader context, see Loth 1998). There was a daily military train to Moscow for
Soviet soldiers and their families at 8 pm every evening, which departed from
what was called Russen-Bahnhof (‘Russians’ Station’).
The closed doors of the “Forbidden City” – also a popular term to describe
other Soviet military places in the GDR, such as Hillersleben, Neuruppin,
Naumburg, and Weimar – stimulated speculation, and not only in regard to the
quantity of troops and civilians stationed in Wünsdorf, which was a proper city
with schools and kindergartens, medical care, a theater, sport facilities, and
its own hairdressers and shops. In this context, the ideologically justied and
politically imposed “friendship” between the occupants and the natives was full
of suspense and was decisively inuenced by 1) the former ideas of the highly
ideologically and racially connoted image of the “Bolsheviks” and 2) the perception
of the Russenkasernen (‘Russian barracks’) in daily life. As the historian Silke
Satjukow asserted (2004: 237–240; 2005; 2009: 57–58), many residents did not
perceive the barracks as places of safety, but rather of unpredictability and
hidden danger due to unpleasant noises and odors, incoming and outgoing tanks
and helicopters, damage along public roads or agricultural areas, explosions,
aviation noises and resulting impairments. Furthermore, because of trafc
accidents, “unnatural deaths”, brawls in restaurants, robberies, and sexual
attacks, the barracks became places of danger and foreignness (Behrends 2003;
Müller 2011: 163–189). This refers to specic modes of inclusion, exclusion, and
xation of the “foreign” within a certain space, in this case the “Forbidden City”
(with reference to Georg Simmel: Geenen 2002: 223–239).
On the other hand, the forbidden zone also had considerable appeal, which
the Wünsdorf locals experienced notably in the area of consumption. It is sig-
nicant that almost half of the interviewees mentioned several aspects which
referred to a well-functioning partnership of convenience, especially in later
decades. The special Russenmagazine (‘Russians’ stores’) sold many sought-after
products. Party functionaries and a few people who were working within the
restricted area were holders of propusks, entry tickets into the restricted area,
and they described how they beneted from certain privileges. Popular, but
usually very rare products, such as building materials, Czech beer, Hungarian
ham, tropical fruits, tinned sh, confections, and even smoked eels from the
Baltic sea were sold, and thus represented another dimension of encountering
the “foreign”: culinary delights and accouterments. In retrospect, such ex-post
constructed imagined behavior patterns could obviously also evoke the aftertaste
of unjustied, “conspicuous consumption” (Veblen 1899), which is very evident in
26
Christoph Lorke
the example of Gerhard Dombritz (born in 1942). He was a local political activ-
ist in the 1990s and described himself as “not a Russian whisperer”. Dombritz
stated, “more by hearsay than by personal experience”, that, in his memory,
the lifestyle of the ofcers was exorbitant. Furthermore, the high-ranking ofc-
ers’ food and supplies were even “more snobbish”12 than in the secure housing
zone for leading functionaries in Wandlitz, about thirty kilometers northeast
of Berlin. Senior party members of the Socialist Unity Party of Germany lived
there; the area remained off-limits to ordinary East Germans until 1990.
This statement illustrates that, in terms of more than boarding and lodging,
the interviewees remember a massive discrepancy between German and Soviet
higher ranks. In addition, the differences and prosperity gaps between the mili-
tary ranks – and thus, inevitably, between the locals and the lower ranks – were
also immense. Hence, there was self-ghettoization of the Soviet troops, which
was not surprising since it helped to limit the soldiers’ “Western experience”,
especially with regard to consumption. In the eyes of many ordinary Soviet sol-
diers and in comparison with their own situation after the end of World War II,
the Germans lived “off the fat of the land” (Satjukow 2004: 225–249). Thus,
rigorous spatial isolation, poor accommodations, low salaries, strict regulations
regarding contact with the locals, and prohibitions against fraternization were
implemented by the military administration, as those seemed to be the safest
means of avoiding disciplinary violations (Bassistow 1994: 46–48).
However, in the case of Wünsdorf, as everywhere else, German-Soviet contact
could never be prevented entirely, exceeding the usual scope of highly formal-
ized, prepared and stage-managed ofcial encounters, and not only because
of the approximately 1,000 Germans who worked in the garrison at the end
of the GDR; instead, “friendships” or “friendly relations” – terms frequently
used in the interviews – and even a few love affairs developed. Nonetheless,
the Waffenbrüderschaft (‘comrades-in-arms’) were, just like everywhere else
in the GDR, apparently limited to the ofcer corps (Müller 2005: 128–132).
While the lower ranks lived in comparatively meager accommodations – al-
though ush toilets, washbasins, and showers were not standard in the Soviet
Army – service in the GDR forces was particularly advantageous for ofcers
and generals: between 800 and 1,000 marks per month, a family allowance of
up to 250 marks, and a signicantly better range of products available. Four
or ve years in the GDR forces made it possible to procure goods and clothes,
and even to save some money. In short, service in Wünsdorf was regarded as
an honor for the “favored few” Soviet Army soldiers, in particular in terms of
living standard (Bassistow 1994: 49–50; Kaiser & Herrmann 2010: 144). For the
locals, the image of Wünsdorf was strongly marked by the presence of soldiers.
Hence, they resigned themselves to living in a city of “occupiers”; for many, living
Folklore 70 27
Memories, (Re)Construction, and Appropriation of Space in Wünsdorf
with the Russians became a part of the everyday routine, eventually not only
in Wünsdorf, but in other Soviet military bases, too. This routine was suddenly
and unexpectedly shaken by the fall of the Berlin Wall in the autumn of 1989.
TIMES OF CHANGES, TIMES OF UNCERTAINTY: THE INTERIM
PHASE, THE WITHDRAWAL, OLD AND NEW CONFLICTS
In many respects, the early 1990s in reunied Germany can be characterized
as a transition period, although the break was usually much more abrupt and
intense for East Germans than for West Germans (Danyel 2015). The presence
(and later, withdrawal) of the Russian troops is one of the many different,
overlying, and partially interwoven passages between the “old” and the “new”.
After the fall of the Berlin Wall and the reunication of Germany in October
1990, the Russian military command initially regarded the desire of many
Germans for unity, freedom, and sovereignty as ingratitude. Little by little,
understanding grew, while at the same time concerns increased with regard to
the period after the withdrawal. Uncertainty and psychological stress among
the soldiers increased (Arlt 1998: 619).
The majority of the East Germans, however, welcomed the withdrawal as
a “second” or even “real liberation”, since now there was a way to express long-
repressed sentiments. Sensationalist press articles and simple stigmatizations
supported a shift in liability, a deection of responsibility regarding the failures
and the end of the GDR, which served as mental exculpation. The Russians,
who were previously praised, were in this emotionally charged phase defamed
as “uncivilized occupiers” (Satjukow 2009: 62) and thus represented the “other”,
anti-civilization, now in contrast to the West. Emphasizing a narrative of wild
upheaval, the media landscape was full of lurid articles dealing with crime, cor-
ruption, and immorality, half-barbaric behavior, a shadow economy, maa-type
actions, bribes, the ourishing “black market”, drug trafcking, unexplained
murders, and contract killings. The “ogging” of all manner of things – including
food, cars, and guns – from which both the Russian and (West and East) Ger-
man traders had beneted, was one of the main topoi. Wünsdorf was especially
pointed out as an important trading center. Other sensationalist comments
involved the Russians’ lax handling of environmental problems.13 By appeal-
ing – both intentionally and unintentionally – to anti-Soviet prejudices and
feelings, these media narratives enjoyed great popularity among the reunied
German public.
These discourses seem to have strongly inuenced, shaped, and strengthened
individual perceptions and imaginations. The same applies to the debates about
28
Christoph Lorke
the GDR as a “Stasi state” or Unrechtsstaat (‘illegitimate state’), which for many
East Germans involved a symbolic general devaluation of their biographies and
overlapped with the discourses regarding the Russians (for an overview, see
Großbölting 2010; Kollmorgen 2010; Sabrow 2012). After 1990, opinions and
prejudices regarding the Russians, which had been taboo due to the propaganda-
imposed glorication of the Soviets as heroic liberators, were able to emerge
directly. It seems that very soon after 1990 many East Germans – and thus, of
course, Wünsdorf locals – regarded the Russians as a complementary element of
the new society, which helped to strengthen a new specic, occasionally ostenta-
tious, and condently performed East German sense of unity (Satjukow 2009:
65). In contrast, others saw the derogatory judgments regarding the Russians
as personal attacks on themselves. Provided this brief sketch of a conictual
and contested scenario, many Wünsdorf residents remember feeling joy and
relief, as well as compassion and uncertainty, when the Russian troops left.
Probably because they knew that the end of the transition period was near and,
at the same time, recognizing the importance of the armed forces to the local
economy, they felt a certain empathy with the soldiers. Local businessmen in
particular were even very sad, as Günther Heisig (born in 1933), at that time
the owner of a shoe store, remembered.14
From a source-critical point of view, personal statements about the “Soviet
occupiers” involved problems: whether the statements served as a subsequent
smoothing, or reected actually existing sentiments, varied from individual
to individual. Quite a few respondents’ descriptions of their experiences with
Soviets/Russians were most probably affected by contemporary stereotypes
or their opinions on present-day Russia. However, in Wünsdorf – as in many
other military bases in East Germany – concerns about the remaining soldiers
did arise, and with alarming openness. There were occasional demands, such
as “Civilian Russians Go Home”, “Leave, Russian Parasites” or, as residents
painted in Cyrillic on the road to the department store: “Get Out, You Bas-
tards”.15 The environmental damage – in the end, a cost borne by the Federal
Republic of Germany – in all likelihood strengthened such negative sentiments.
According to Arnold Klein (born in 1954), who felt melancholy after the
withdrawal, thefts and vandalism were the order of the day,16 and even physi-
cal assaults targeting soldiers and their families were observed. Even though
these were only scattered incidents, these years were characterized by wild-
ness, confusion, and a new form of uncertainty. Ilse Bollman, who had worked
for more than twenty years inside the “restricted zone”, said with regard to
crime and the attacks: “During this period, you could trust no one – neither
Russians nor Germans”.17 Both Winfried Bläse (born in 1950) and Bernhard
Michel stated that after the withdrawal, Wünsdorf was dead, an utter ghost
Folklore 70 29
Memories, (Re)Construction, and Appropriation of Space in Wünsdorf
town.18 When the rising unemployment and the closing of businesses became
more evident – after the initial phase of euphoria and relief – very quickly an
atmosphere of disillusionment and uncertainty developed among many locals.
They considered the period after 1994 a standstill or even a decline, and thus
mourned in many respects the passing of the good old days.19 It is obvious that
the assessments of those days were highly linked to the respective individual’s
perception and valuation of the Soviet/Russian troops.
A closer look at the “other” side reveals further insights: for the Russian
soldiers, the shift was apparently even more radical. The psychological effects
of the ideological collapse and the instability in their home regions, and the
pronounced feeling of being unwanted and unwelcome guests undermined self-
condence: for many of the soldiers, withdrawal meant social decline. They felt
like “beaten winners”, as the last Minister-President of the GDR, Lothar de
Maizière, stated in Moscow in spring 1990. Due to the insecure future, a sig-
nicant proportion – according to estimates, up to one-third – of all returned
families split up (Locke 2014).
Another serious problem was the slow process of the housing program. De-
spite the eight-billion-mark support by the Federal Government, there were
signicant delays. Although 45,000 apartments were built in Russia, Ukraine,
and Belarus between 1992 and 1996, 50,000 families had no suitable housing
after their return (Foertsch 1994: 125–127). Preparing for their withdrawal,
many soldiers bought household appliances, technological items, or second-hand
cars in order to sell them in Russia. There were rumors of secret arms sales –
according to recent surveys, 81,000 tons of ammunition went unaccounted for
(Kaiser & Herrmann 2010 [1993]: 184) – and Kalashnikov for used car swaps
(e.g. Liebold 1991). “Taking everything that was not nailed down” was a phrase
often mentioned in the interviews. In contrast, Heinz Bremer (born in 1936),
who generally pleaded for an “objective analysis” of those developments, ex-
pressed an explicit warning against a derogatory attitude toward the situation,
especially by those who did not know the actual living conditions in their home
countries very well.20
The ofcial farewell celebration, which was initiated and orchestrated by
the Russian commanders, was intended to symbolize the departure of Russian
troops from all of Germany, and to make people forget any negative feelings.
Thus, the narrative Heimkehr / Abschied in Würde (‘Leave in Dignity’) was
established in bilateral contracts after 1990 in order to express caution, gentleness,
and tact (Burlakov 1994; Foertsch 1994; Nawrocki 1994; Abschied in Würde
1994). However, even though the withdrawal was performed in a calm, formal
atmosphere that could be considered a “logistical tour de force” (Gießmann 1992:
177–209; Kaiser & Herrmann 2010: 182; for a meticulous chronological
30
Christoph Lorke
summary of the withdrawal, see
Hoffmann & Stoof 2013), the aim
of a “worthy” nal stage of the
Russian troops in Germany was
only partially successful. The
farewell parade in Wünsdorf,
broadcast live by the regional broadcaster Ostdeutscher Rundfunk Brandenburg
(ORB),21 was an essential part of this project, and was meant to symbolically
prove the new openness of the Russian troops. On June 11, 1994, thousands of
people had the opportunity to observe the “inner life” of the former “Forbidden
City”. For an entrance fee of ten marks, most of the citizens of Wünsdorf could
visit the inside area for the rst time. In his farewell address, the Prime Minister
at the time, Manfred Stolpe, thanked the Russian troops for their prudence in
1989 and 1990. “It was a folk festival, and everybody celebrated. We ate cake
and solyanka, drank vodka, and
I had tears in my eyes”, Winfried
Bläse, one of the interviewees,
remembered. This observation
sheds light on the perception
of “foreign” food culture in the
town with respect to the Russian
“tradition” and its consequences of
inter-cultural learning dynamics
(for West Germany, see Möhring
2012). Born in 1950, Bläse had
grown up with the Russians, and
he and his family proted greatly
from them. The period between
1990 and 1994 was, he added,
“the best time of [his] life”,22 not
Figure 4. Open house in Wünsdorf,
June 11, 1994. Civilians were given the
opportunity to observe the ‘inner life’ of the
former “Forbidden City” (Gehrke 2008: 74).
Figure 5. The bilingual poster reads,
“Homeward, to the motherland. Fare-
well, Germany!” Wünsdorf, June 11,
1994 (Gehrke 2008: 75).
Folklore 70 31
Memories, (Re)Construction, and Appropriation of Space in Wünsdorf
despite but rather because of the presence of the Soviet/Russian forces. The
celebrations in summer 1994 were regarded as the symbolic culmination of
a felicitous relationship.
While these festivities were remembered positively by some, they also evoked
serious political inconsistencies, and this still plays a key role in many memories:
on that day, politicians from the Brandenburg state government came, but no
representatives from the federal government or the federal armed forces were
present (Kampe 2009: 49). In most of the interviews, people mentioned their
disappointment, describing how they interpreted this as a sign of arrogance,
and thus a downgrading of the Russian troops by the Bonn government, which
seemed to reect an ongoing lack of respect for the Eastern Germans’ lives,
as well as for the Russian Army. Moreover, the Russian withdrawal was ac-
companied by different, either intended or unintended, forms of “tactlessness”,
misconceptions, and friction. One prominent example is the appointment of
Hartmut Foertsch as the director of the liaison organization between the Ger-
man and Russian Armies. Foertsch’s father Friedrich had served as a general
during the 900-day siege of Leningrad in 1941.
Figure 6. Spectators at martial arts performances in Wünsdorf.
June 11, 1994 (Gehrke 2008: 61).
32
Christoph Lorke
In meetings with representatives of the German Federal Armed Forces (Bun-
deswehr), which were doubtless full of clear and mutual reservations, quite
a few of the Russian commanders were dismayed at the fact that their property
and goods had become (almost) valueless. Walter Meining, who took part in the
negotiations with the Soviet Army, described the meetings as full of arrogance
on the part of the Germans, “with only a few exceptions”: Meining, for example,
mentioned General Werner von Scheven, the Chief Ofcer of the Federal Armed
Forces in the newly-formed German states, as a very fair-minded person who
dealt with the Russians “eye to eye”.23 Siegfried Marquart (born in 1947), a for-
mer high-ranking ofcer of the National People’s Army (Nationale Volksarmee),
remembered a “fundamental arrogant stupidity”, intended to show the “other”
(Russian) side that “we were back again”.24 In the terms of the American so-
ciologist Harold Garnkel (1956), we may interpret these forms of (direct and
indirect) encounters as “rituals of degradation” (for the administrative sphere,
see Gravier 2003). These specic transitional rituals were typically associated
with a discrediting of the past and thus indicated a revaluation of the past.
As the sociologist Nina Leonhard recently stated, these rituals were a funda-
mental condition for the negotiation of new identities among former members
of the National People’s Army after their integration into the Federal Armed
Forces in October 1990. In this process, the label “army of unity” was invented
(Leonhard 2016: 133–144). At that time, only a small number of soldiers were
taken on permanently, which caused additional problems in accepting the new
(military and societal) order. The views expressed above came from someone who
spoke Russian uently, spent several years in the Soviet Union, studied at the
military academy in Moscow, and thus had countless encounters with Soviet/
Russian (civilian and military) citizens. These individual experiences shaped
his perceptual patterns and may explain his feeling of being downgraded. Vice
versa, this perceived devaluation most likely strengthened his already close
attachment and solidarity with the former “brothers’ army” further.
The circumstances of the parting ceremony evoked other notable moments
of irritation, which had repercussions for the Wünsdorf locals and their re-
membrances, too. First, there was a great deal of astonishment over the idea
of organizing the farewell ceremony for the Russian troops not as a common
event with the British, American, and French military forces, but instead as
a singular event held not even in Berlin, but in the National Theater in Weimar.
“This is not our place”, Matvei Burlakov said angrily, apparently referring to the
liberation of the Buchenwald concentration camp in April 1945 by the American
army and the following running of the camp by the People’s Commissariat for
Internal Affairs (Narodnyi Komissariat Vnutrennikh Del, NKVD). Until its
Folklore 70 33
Memories, (Re)Construction, and Appropriation of Space in Wünsdorf
dissolution in 1950, more than 7,000 people died of starvation, malnutrition,
and disease in Special Camp No. 2.
It was not until the Social Democratic Party’s (Sozialdemokratische Partei
Deutschlands, SPD) leading politicians, including Wolfgang Thierse, Friedrich
Schorlemmer, and Manfred Stolpe, sent a letter to Helmut Kohl asking him
to change the location so as not to humiliate the Russians, that the chancellor
settled on Berlin. Nonetheless, Bundeskanzler Kohl was still against a “joint
and equal leaving of all allied forces in Germany” (Kaiser & Herrmann 2010
[1993]: 185–186). Although according to a survey, 75% of Germans supported
a common celebratory ceremony, the German government opposed this idea,
as they too deeply felt the ideological divide (ibid.). “Our soldiers do not leave
as occupiers, but as partners and friends,” Yeltsin stressed in his speech on
August 31, 1994, during the ofcial farewell ceremony in Berlin. But even
the highly symbolic joint laying of a wreath at the Soviet memorial in Berlin-
Treptow and the emotional singing of the specially composed song titled “Lebe
wohl, Deutschland, wir reichen dir die Hand” (‘Goodbye Germany, We Reach
Out Our Hands’) could not hide the fact that the day was experienced and re-
membered as a “second class” leaving (Kaiser & Herrmann 2010: 185–186).25
This symbolic and real distinction is also reected in the interviews. The
majority of the interviewees remembered the ceremonial dimension as being
important and dignied because it symbolized gratitude, especially in the con-
text of the Peaceful Revolution in 1989, when the Russian Army remained calm.
In general, the interviewees would also have preferred a common ceremony
with all four allied forces to prevent the Russian Army from appearing in an
outsider role. However, four interviewees explicitly emphasized the importance
of holding separate ceremonies. A separate event expressed the “hierarchy”
among the occupying forces, with the Red Army being the least respected. Her-
bert Wüllenweber (born in 1951), who strongly supported separate ceremonies,
explained his opinion via a biographical and generational experience: his father
had been a front-line soldier on the Eastern Front, ghting against the Soviets.
“I am in no way a friend of the Russians,” he added, and he also mentioned the
overly “arrogant and dolled-up Russian women” (Russenweiber) and not least
the current political developments (“I am anything but a Putin whisperer”26).
He clearly demonstrated that the interpretation of the past is always affected
by knowledge of the present (Sabrow 2014: 36–37; for the context of the military
transition, see Ehlert 2013; Thoß 2007). The feeling of cultural superiority may
also have played a central role in retrospective descriptions and the reproduc-
tion of pejorative stereotypes like the ones discussed above (von Wrochem 2003:
62; for an overview, see Müller 2005).
34
Christoph Lorke
This mixture eventually also shaped the present-day perception and evalu-
ation of Wünsdorf (and its desired future). In general, it is striking how the
symbolic space of the former military base was inuenced and dominated by
a clear dichotomy regarding the images of the Soviets/Russians, which oscillated
between idealizing descriptions and demonizing horror stories. While some of
the interviewees tended to idealize the time with the Soviets and speak of it as
the “most wonderful period of their lives” referring directly to the post-Russian
time, which was in their eyes characterized by “disorder, decline, and dirt”,
and which transformed Wünsdorf into a dead ghost town, others did not even
try to conceal their Russophobia. In the interviews, which were by no means
free of polemics, a self-referential split was most clearly expressed via external
and self-attribution and the categorization of “Russian friend”, “whisperer”,
or “enemy”,27 which very likely was not only the case in Wünsdorf but also in
other former garrison towns, even outside Germany.
A noteworthy differentiation can be concluded regarding 1) the size and
importance of Wünsdorf in the military network in the GDR and the whole
Eastern bloc and, even more important, 2) the specic context of the reunited
German society, which lies transversely to these processes of appropriation
and negotiation and, subsequently, the (new/old, visible/invisible, open/sub-
tle) borders which affect memories, narratives, and emotions. In this society
different “arenas of transition” happened to occur: conicting elds that rep-
resent problematic, conictual, and often contradictory processes of merging,
identication, and self-understanding (for a rst draft of these “arenas”, see
Großbölting & Lorke 2017).
As one example of an “arena”, the case of Wünsdorf in its (Soviet/Russian)
past and present claries the overlapping of current and long-lasting conict
situations in different dimensions: the military, political, social, cultural, me-
morial, collective, and individual. The Wünsdorf case represents not only how
the different modes within the GDR past were negotiated repeatedly, but also
how encounters with Russians (and references to them) before and after the
period of 1989–1994 were highly determined by biographically acquired, avail-
able, and activated reservoirs of cultural and national clichés and stereotypes.
Yet, there was also a recursiveness in the handling of the individual’s past
(Gallinat & Kittel 2009; von Plato 2009) and in the negotiation of GDR and/or
East German identity (Pollack & Pickel 1998), which for many Wünsdorf locals
even today is closely interwoven with the Soviet/Russian presence until 1994.
Folklore 70 35
Memories, (Re)Construction, and Appropriation of Space in Wünsdorf
THE (LASTING) PROCESS OF CONVERSION:
WÜNSDORF BETWEEN “HOBBYHORSE” AND “HUMBUG”
When the last Russian soldier left Wünsdorf in September 1994, ownership
of the property was assigned by the state of Brandenburg. The restructuring,
renovation, and conversion of former military sites were great challenges nan-
cially, logistically, and symbolically. For Brandenburg, above all, the immense
size of former military areas was a huge burden: about 120,000 hectares were
transferred to the state by the federal government after the withdrawal in June
1994. In comparison to the other four New Länder, Brandenburg was the area
most affected by military utilization of land and conversion. Thus, the impor-
tance of this task was codied in the Constitution of the Land of Brandenburg
(Article 40; “Grund und Boden”).28 Quickly, the conversion of this intersection
of German, European, and Soviet military history came to be a prestige project,
the “hobbyhorse”29 of Prime Minister Manfred Stolpe (SPD), which took place
under the heading Von der Konfrontation zur Kooperation (‘From Confrontation
to Cooperation’). But what can be done with an area six kilometers long and
800 meters wide, with a mix of contaminated soils and sites, approximately
three million liters of kerosene, 300,000 tons of waste, ammunition, and a na-
ture reserve, and how can the different layers of the past be integrated within
a more or less “consistent” memorial narrative (Kaiser & Herrmann 2010 [1993]:
204–205; Gießmann 1992: 199–206)?
One of the rst major measures, aside from the return of property and
houses, and one of the most notable elements of commemoration among the
interviewees, was the reopening of federal highway B 96, which had been closed
to through trafc since the 1950s. There are reasons why almost all of the in-
terviewees mentioned the reopening. By 1991, several local initiatives had tried
to reopen the highway, leading to an ongoing battle between the locals and the
Russian troops. More than 1,000 applications arrived in the community’s ofce.
Eventually, the Russian commanders refused these requests on the grounds of
possible noise pollution and the running out of goods in the Russian shops (Für
die Wünsdorfer 1991). According to a journalist’s observation, at that time the
“German-Russian climate was extremely tense” (Liebold 1991). All the greater
was the joy when the highway was eventually opened to public trafc in 1994.
Many interviewees regarded this as a symbolic new beginning,30 and one of
them even saw it as the “only positive effect of the withdrawal”.31
The development company Landesentwicklungsgesellschaft (LEG)32 had am-
bitious plans, and in 1993 cited locational factors, such as its close proximity
to Berlin, the labor potential, favorable trafc links, and landscape (Wieschol-
lek 2005: 51–62).33 Eventually, nine development scenarios were proposed,
36
Christoph Lorke
ranging from a zero solution (i.e. renaturation), and an eco-city (“Architecture,
Ecology, and Art”) to Germany’s largest city for refugees (which, according to
a documentary, led to many objections from the locals (see Richter 1993)),34
a leisure, service, technology, and innovation center like Silicon Valley, and
a bureaucratic and satellite town with up to 20,000 inhabitants (“Good Night in
Fresh Air”) (Kaiser & Herrmann 2010 [1993]: 201–202; Brüske 1993; Hénard
1993). In April 1995, there was a cabinet decision to maintain the character of
the area and, using the name Waldstadt (‘Woody City’), which today is a part of
the community of Wünsdorf, create a place for living, trading, administration,
education, and working within an attractive environment. Furthermore, eighty
million marks in aid money was made immediately available (Wieschollek 2005:
Figure 7–8. Glimpses of Wünsdorf after the withdrawal of Soviet
forces in 1994. Garnisonsmuseum Wünsdorf.
Folklore 70 37
Memories, (Re)Construction, and Appropriation of Space in Wünsdorf
70). In the end, none of the plans were realized. Considering the unemployment
rate of up to 20% in Wünsdorf in the mid-1990s, the price of commercial spaces
was presumably too high. On the other hand, there was no complete break-
down either, not least due to an immense amount of aid money from private
initiatives and the European Union. Nowadays, there are approximately 6,500
inhabitants in Wünsdorf, half of whom live in Waldstadt.
By 2009, 80% of the former military sites had been sold (Kaiser & Herrmann
2010 [1993]: 204). However, as almost everywhere in East Germany, especially
in rural areas, there is still a comparatively high number of empty properties
in Wünsdorf, although that number has decreased slightly during the last ten
years (for an overview, see Kratz 2003). Additionally, most likely as a strategic
decision, the Brandenburg state agency for the road sector and the state ofce
for the preservation of order are based in Wünsdorf and have several hundred
employees.
The causes of this situation are complex and multilayered, as well as contro-
versial: unused potential, conicts over use, the lack of sufcient development,
and the premature development of common visions, and missing or overesti-
mated infrastructure are some of the general aspects which were mentioned
regularly (Lohnes & Kucera 1997; Wieschollek 2005: 131–160). Due to high ex-
pectations, the term “conversion” often has a negative connotation. In contrast,
the interviewees were less squeamish, and they often used such phrases as
utopian, unrealistic ideas, fantasies, “humbug”, sinister and clandestine machi-
nations and intrigues by third-class incompetent West German professionals,
and unfeasible and useless ideas full of lobbying, trickery, and wheeling and
dealing in the context of restructuring the former military property.35 For some
of the interviewees, with the withdrawal of the Russians a part of the imagined
GDR past left, too. Such statements may be interpreted as a delimitation of the
“new time” and/or of the West Germans and, thus, a reaction to the perceived
devaluation of the individual and collective life’s work (Müller 2011: 368).
Today, there is a special focus on the touristic potential and European-wide
important military history of Wünsdorf related to the Kaiser, Hitler, and the
Russians, along with ties to the arts, culture, and nature. In September 1998, the
rst and only German “book town” was founded here, following a British model,
in order to promote humanistic ideas, appreciation of books and the closed bun-
kers as symbols of peace, and to encourage a sensible approach to the past and
present.36 The private limited company Bücherstadt-Tourismus GmbH organizes
different thematic guided tours through the “Forbidden City”, accompanied by
campres, the serving of stew from a eld kitchen, military-historical seminars,
encounters with military vehicles, an “underground Sunday” in the “zeppelin”
signal bunker, and readings. Even though the book town project is regarded as
38
Christoph Lorke
Figure 9–10. Glimpses of Wünsdorf
after the withdrawal of Soviet forces
in 1994. Garnisonsmuseum Wünsdorf.
a success (e.g. Arlt 2010: 672), it is in a constant struggle for its existence: of the
twenty original antiquarian booksellers, only three have survived, and there
are 400,000 books waiting to be sold (Mallwitz 2015). There is also a garrison
museum, Roter Stern (‘Red Star’), which is supported by a local booster club
and gives an interesting but quite uncritical overview of the Soviet/Russian
stay in Germany, with both permanent and changing exhibitions showing the
didactic and educational efforts to preserve the memory of Wünsdorf’s military
past (Fischer 2000; 2010).
It is evident that these developments shaped memorial representations as
well as the practical aspects of managing the former military past. In Wünsdorf,
there are still initiatives to deal with the military heritage in general and the
withdrawal of the army in particular. In order to preserve the memory of the
Soviet presence, a ring road in Wünsdorf was named after Pjotr Koschewoj,
a former Soviet marshal who was based there for several years. The renaming
Folklore 70 39
Memories, (Re)Construction, and Appropriation of Space in Wünsdorf
was a response to the failed initiative of the Freunde der Bücherstadt Wüns-
dorf (‘Friends of the Wünsdorf Book Town’) to rename another street after the
controversial commander Burlakov (Degener 2014a). As a common initiative
of the Bücherstadt Wünsdorf and the Russian embassy, on the 20th anniver-
sary of the withdrawal, in 2014, Anton Terentjew, who was a colonel general
in Wünsdorf in 1993 and 1994 and thus played a signicant role in the process
of the withdrawal, returned and thanked the locals for their “maintenance of
tradition” (Die Rückkehr 2014; Degener 2014b). On that day, gratitude for Eu-
rope’s liberation from fascism was expressed in Wünsdorf, including greetings
from local and national politicians, although at that time the conict between
Russia and the Ukraine was underway.
Among Russians, there is signicant interest in and willingness to visit
Wünsdorf, and especially among the younger generation there is a vibrant online
culture of commemoration, for example, in the social medium VK.37 The lively
exchange of class photographs may not be merely a surrogate for remembering
their “homeland”, and many plan to visit the place of their childhood as potential
“homesick tourists” (provided they have the nancial ability to do so).38 This
specic double perspective was also registered by the locals and emphasized
in some of the interviews: for many Russians, Wünsdorf became their “home-
land”, as Dietrich Meyer (born in 1943) highlighted, and their withdrawal was
“tantamount to a catastrophe”.39
Figure 11. A glimpse of Wünsdorf
after the withdrawal of Soviet forces
in 1994. Garnisonsmuseum Wünsdorf.
40
Christoph Lorke
CONCLUSION: REINTERPRETING THE MILITARY PAST
IN WÜNSDORF
As discussed above, the permanent presence of Soviets/Russians has left deep
traces in Wünsdorf regarding the creation of (new) cultural and spatial, as well
as social and individual, identities. The variety of the collective and individual
handling of the legacies of the Cold War in Wünsdorf nowadays illustrates dif-
ferent forms of appropriating, updating, reinforcing, neglecting, and excluding
certain elements of the Soviet/Russian past. Opinions about the Russians before
1990 are cross-generational and still present today, and they now stretch the
full range from anti-Russian sentiments and the commemoration of a highly
negative concept of “foreign domination” to feelings of belittlement and con-
tinuing melancholy.
This nding corresponds with a survey of East Germans by the Institut für
Demoskopie Allensbach (‘Allensbach Institute for Public Opinion Research’)
in 1994, when 32% of the respondents assessed the Russian troops as “mostly
friends and allies”, while 42% regarded them as “mostly an occupying power”
(Müller 2011: 144). Even if one concedes that the sample of the present study
represents a multiple skewed perspective – those who responded to the press
call had “something to say” and a special “need for communication” – the conclu-
sions strengthen the argument presented by historian Evemarie Badstübner-
Peters, who claimed that the Soviet (Russian) inuence was a constant and
highly relevant factor in everyday life, to a far greater extent than assumed
previously. Its impact is noticeable even today. The “difcult handling of the
difcult foreignness” (Badstübner-Peters 1997a; 1997b) most likely not only
inuenced behavioral and orientation uncertainties after 1990, in regard to deal-
ing with foreign cultures and lifestyles, but also led to different ways of coming
to terms with the past, which reects a highly ambivalent memorial landscape
and current (geo)political and diplomatic developments. These ndings can be
classied as selected practices of “othering” in terms of a certain space, where
“foreignness” can be interpreted as a result of everyday interaction, construction,
identication, and irritation. This also reects on both existing and obsolete
ideas of social, economic, cultural and ethnic order within a certain space, and
the embedded role of the “foreign” that over many years signicantly inuenced
the local symbolic order (Geenen 2002: 245–247; Reuter 2002).
In terms of the future, many residents place plenty of hope in the comple-
tion of a major airport for Berlin. The Waldstadt page advertises a “space for
visions”, an “exceptional environment”, the “best infrastructure and transport
link”, a place with a historical location, vivid culture, and “enchanting lake
Folklore 70 41
Memories, (Re)Construction, and Appropriation of Space in Wünsdorf
scenery”, which is, however, still in a “deep sleep”. The “very low commercial
tax rate” and, above all, the proximity to the future Berlin airport would offer
“unlimited opportunities”.40 Many interviewees mentioned this scenario too,
and not only the relocation of “noise refugees” (i.e. people escaping the noise of
city life), but also the existence of a major Russian investor were mentioned.41
Taking a quick glance at its current status, in the past year approximately
1,500 refugees were admitted for the rst time to live at the former military
base in Wünsdorf (Fischer 2015). In May 2015, two local right-wing youths
attacked the complex with reworks. The local initiative Wünsdorf wehrt sich
(‘Defending Wünsdorf’) organized several demonstrations last autumn, warn-
ing against crime, disease, and sexual assault. At the end of the event, the
crowd loudly demanded the withdrawal of Chancellor Angela Merkel and sang
the national anthem (Brockhausen & Rohowski 2015). Their Facebook page
has more than 3,100 likes (as of September 2017), much more than the 643
likes for the local refugee aid campaign from the same month, and notable
statements by their followers include: “I really preferred the Russians much
more”, or “If only the Russians were still here”. Statements like these again
powerfully demonstrate how for many locals the unloved past can be updated
(and upgraded) when new symbolic hierarchies are required and new borders
have to be established. For the time being, the question must remain open, as
an interviewee suggested, as to whether some of the residents have difcul-
ties handling any type of foreignness: “Fear of Russians, fear of wolves, fear of
refugees – this is a constant feature of Wünsdorf’s history”.42 The last statement
indicates a divergent type of “foreignness”, which privileges the Soviet/Russian
past in Wünsdorf. It again proves that how to deal with former Soviet bases
in Germany is strongly inuenced by the different layers of the aftermath in
the context of the German reunication and the lasting effects of the “power of
unofcial memory” (Burke 1991: 300).
ACKNOWLEDGEMENTS
I would like to thank Sabine Kittel and Lilith Buddensiek for the rst, unofcial
proofreading. I would also like to thank the anonymous reviewers along with
the editors for providing indispensable suggestions on how to more effectively
structure this essay.
42
Christoph Lorke
NOTES
1 This was the name beginning in 1988. From 1954 the name was the Group of Soviet
Forces in Germany. The Soviets stayed based on the “Treaty on Relations between
the USSR and the GDR” (1955).
2 For a summary of the locations, see the database edited by the Militärgeschichtliches
Forschungsamt (‘Military History Research Ofce’), available at
html/standorte_einleitung.php, last accessed on August 23, 2017.
3 For example: Altengrabow, Karl-Marx-Stadt, Dresden, Grimma, Halle, Hillersleben,
Jena, Magdeburg, Merseburg, Rostock, Schwerin, Stendal, Weimar, or Wittenberg.
4 Bernau, Cottbus, Dallgow, Eberswalde, Fürstenberg, Jüterbog, Perleberg, Potsdam,
Neuruppin, Neustrelitz, Rathenow, or Vogelsang, to name only a few.
5 See the contribution by Evgeny V. Volkov in this volume.
6 In detail: “Märkische Allgemeine”, “Wochenspiegel”, “Blickpunkt”, “Teltow-Kanal”,
“Stadtblatt Zossen”, and the homepage of the community of Zossen (available at www.
zossen.de, last accessed on August 23, 2017).
7 The questions were: 1) What part did the Soviet troops and the place of Wünsdorf play
for you before the year 1989? 2) How would you describe or characterize the “interim
phase” between 1989 (the fall of the Berlin Wall) and 1994 (the withdrawal of the
troops)? 3) How did you experience the process of withdrawal: the mood in Wünsdorf
among the local residents as well as among the soldiers? What has happened to this
place since then? These open questions allowed enough space for additional remarks
by the interviewees and also for further inquiries on my part.
8 This is not the place to propose a broader discussion of the term “generation”. Very
briey, subdividing these people into generations (Ahbe & Gries 2006), eight inter-
view partners (40%) belonged to the Aufbau-Generation (“Construction Generation”),
born between 1920 and the mid-1930s, seven (35%) to the funktionierende Generation
(“Functioning Generation”), born from the mid-1930s until the end of the 1940s, and
ve (25%) to the integrierte Generation (“Integrated Generation”), born in the 1950s.
What is important here is the fact that the majority of my interview partners were
from the Aufbau- and funktionierende Generation, which shows their interest as well
as personal/emotional involvement.
9 The only interviewed woman mentioned that in the context of the end of World War II
the locals were “frightened”. Interview with Ilse Bollmann (born in 1929), February 26,
2016. To protect their privacy, all names of the interviewees have been ctionalized
and created by the author.
10 Interview with Ilse Bollmann, February 26, 2016.
11 “F” stands for Fernverkehrsstraße; in 1990, the name was changed to B 96 – Bundesstraße
(‘Federal Highway’).
12 Interview with Gerhard Dombritz, February 18, 2016.
Folklore 70 43
Memories, (Re)Construction, and Appropriation of Space in Wünsdorf
13 Only a small selection: Sowjettruppen 1990; Schwelien 1991; Maa 1991; Unsere
Leute 1993; Habbe 1993; Militär 1994; Zwischenbilanz 1994.
14 Interview with Günther Heisig, February 19, 2016; similar statements were men-
tioned in the interviews with Walther Meining (born in 1935), March 1, 2016, and
Willy Tuchscherer (born in 1932), March 5, 2016.
15 See, for instance, the following selection of media articles: Furman 1991; Lippold 1991;
Schwelien 1991. Resentment was mentioned in detail in one interview, with Gerd
Langer (born in 1931), March 3, 2016. These verbal attacks were addressed both to
soldiers and the families of higher ranks.
16 Interviews with Arnold Klein, February 25, 2016, and Bernhard Michel (born in 1939),
March 19, 2016.
17 Interview with Ilse Bollmann, February 26, 2016.
18 Interviews with Winfried Bläse, March 3, 2016, and Bernhard Michel, March 19, 2016.
19 For example, in the interviews with Werner Schmidt (born in 1933), February 28,
2016, and Harald Weber (born in 1951), March 3, 2016.
20 Interview with Heinz Bremer, March 8, 2016; see also Kowalczuk & Wolle 2010: 223.
21 Die Russischen Truppen verabschieden sich. ORB, June 11, 1994; 02’20, Deutsches
Rundfunarchiv Babelsberg, No. 9400834. See also a short extract available at https://, last accessed on August 23, 2017.
22 Interview with Winfried Bläse, March 3, 2016.
23 Interview with Walther Meining, March 1, 2016.
24 Interview with Siegfried Marquardt, March 7, 2016.
25 See also Staatsfeiern 1994; Hénard 1994; Jelzin-Besuch 1994.
26 Interview with Herbert Wüllenweber, March 15, 2016.
27 Interview with Gerhard Dombritz, February 18, 2016.
28 See, last accessed on August 23,
2017.
29 Hobbyhorse. Märkischen Allgemeine Zeitung. January 25, 2002.
30 Interview with Walther Meining, March 1, 2016.
31 Interview with Winfried Bläse, March 3, 2016.
32 Landesentwicklungsgesellschaft (state development corporation). In June 1995 the
LEG, which was operating in decit, was succeeded by the Entwicklungsgesellschaft
Waldstadt Wünsdorf/Zehrensdorf (EWZ). For further background information, see
Wieschollek 2005.
44
Christoph Lorke
33 Infrastructural and nancial limitations (mainly, being far from Berlin’s sphere of
inuence, a remarkable workforce potential that was concentrated only in a few eco-
nomic sectors, and a lack of investor interest) were discussed, too.
34 Following this article, observations could be made that Wünsdorf local residents oc-
casionally stated that foreigners would be the least favorable new neighbors.
35 Interviews with Günther Heisig (born in 1933), February 19, 2016; Winfried Bläse,
March 3, 2016; Herbert Wüllenweber, March 15, 2016; and Bernhard Michel (born
in 1939), March 19, 2016.
36 Bücher und Bunkerstadt Wünsdorf. Bücherstadt-Tourismus GmbH. Available at www.
buecherstadt.com, last accessed on August 23, 2017.
37 For instance, see GSVG ★ ZGV ★ VIuNSDORF ★ WUNSDORF ★ GDR ★ DDR,
available at; Vse kto sluzhil v Viunsdorfe GSVG i ZGV
(Everyone who served in Wünsdorf in the Group of Soviet Forces in Germany
(GSFG) and in the Western Group of Forces (WGF)), available at
club4598721; Shkola 89 GSVG/ZGV Viunsdorf (School No. 89 GSFG/WGF), available at; ZGV. Viunsdorf. Shkola №1 (WGF. Wünsdorf, School
No. 1), available at; ZGV Viunsdorf NIKEL’ p.p.35714
(WGF Wünsdorf Nikel p.p.35714), available at; http://
wunsdorf.livejournal.com, all last accessed on August 23, 2017.
38 Wunsdorf, DDR – Posledniaia osen’ / / Letzten Herbst. Available at.
youtube.com/watch?v=LEOeTtfCigo, last accessed on August 23, 2017.
39 Interview with Dietrich Meyer, March 10, 2016.
40 Die Waldstadt Wünsdorf. Available at, last
accessed on August 23, 2017.
41 Interview with Bernd Holtzschke (born in 1939), March 8, 2016; see also van der Kraat
2014.
42 Interview with Heinz Küstner (born in 1935), March 8, 2016.
REFERENCES
Abdullah, Muhammad S. 1984. Halbmond unter dem Preußenadler: Die Geschichte der
islamischen Gemeinde in Preußen (1731–1934). Altenberg: Verlag für Christlich-
Islamisches Schrifttum.
Abschied in Würde 1994 = Abschied in Würde: Die ‘Westgruppe der Truppen’ verließ
Deutschland. Wehrtechnik, Vol. 26, No. 9, pp. 8–12.
Ahbe, Thomas & Gries, Rainer 2006. Gesellschaftsgeschichte als Generationengeschichte:
Theoretische und methodologische Überlegungen am Beispiel der DDR.
In: Annegret Schüle & Rainer Gries & Thomas Ahbe (eds.) Die DDR aus
generationengeschichtlicher Perspektive: Eine Inventur. Leipzig: Leipziger
Universitätsverlag, pp. 475–571.
Folklore 70 45
Memories, (Re)Construction, and Appropriation of Space in Wünsdorf
Arlt, Kurt 1998. Sowjetische (russische) Truppen in Deutschland (1945–1994). In:
Torsten Diedrich & Hans Ehlert & Rüdiger Wenzke (eds.) Im Dienste der Partei:
Handbuch der bewaffneten Organe der DDR. Berlin: Links, pp. 593–632.
Arlt, Kurt 2010. Zossen. In: Kurt Arlt & Michael Thomae & Bruno Thoß (eds.)
Militärgeschichtliches Handbuch Brandenburg-Berlin. Berlin: be.bra
Wissenschaftsverlag, pp. 666–672.
Assmann, Aleida 2009. Geschichte ndet Stadt. In: Moritz Csáky & Christoph Leitgeb
(eds.) Kommunikation – Gedächtnis – Raum: Kulturwissenschaften nach dem
‘Spatial Turn’. Bielefeld: Transcript, pp. 13–27.
Assmann, Jan 1997 [1992]. Das kulturelle Gedächtnis: Schrift, Erinnerung und politische
Identität in frühen Hochkulturen. München: Beck.
Augé, Marc 1992. Non-Lieux: Introduction à une anthropologie de la surmodernité.
Paris: Le Seuil.
Badstübner-Peters, Evemarie 1997a. Über uns und über die ‘Russen’: Zur Alltagsgeschichte
(ost)deutsch-sowjetischer Beziehungen. In: Ludwig Elm & Dietmar Keller &
Reinhard Mocek (eds.) Ansichten zur Geschichte der DDR, Band 7. Eggersdorf:
Verlag Matthias Kirchner, pp 251–275.
Badstübner-Peters, Evemarie 1997b. Ostdeutsche Sowjetunionerfahrungen: Ansichten
über Eigenes und Fremdes in der Alltagsgeschichte der DDR. In: Konrad H.
Jarausch & Hannes Siegrist (eds.) Amerikanisierung und Sowjetisierung in
Deutschland 1945–1970. Frankfurt am Main & New York: Campus, pp. 291–311.
Bassistow, Juri W. 1994. Die DDR – ein Blick aus Wünsdorf. Persönliche Eindrücke
eines russischen Ofziers. Jahrbuch für historische Kommunismusforschung,
pp. 215–224.
Behrends, Jan C. 2003. Sowjetische ‘Freunde’ und fremde ‘Russen’. Deutsch-sowjetische
Freundschaft zwischen Ideologie und Alltag (1949–1990).. 75–100.
Brockhausen, Stefanie & Rohowski, Tina 2015. Wünsdorfer stemmen sich gegen
‘fanatische Willkommenskultur’: Aggressive Stimmung beim Infoabend zu
neuem Flüchtlingsheim.
brandenburg/2015/11/buergerversammlung-erstaufnahme-uechtlinge-zossen-
wuensdorf.html, last accessed on July 14, 2016; no longer available.
Brüske, Klaus 1993. Nach 100 Jahren endlich zivil: Wünsdorf spielt Nutzungsvarianten
für das riesige GUS-Gelände durch. Berliner Zeitung, August 27.
Burke, Peter 1991. Geschichte als soziales Gedächtnis. In: Aleida Assmann & Dietrich
Harth (eds.) Mnemosyne: Formen und Funktionen der kulturellen Erinnerung.
Frankfurt am Main: Fischer, pp. 289–304.
Burlakov, Matvei P. 1994. Wir verabschieden uns. Als Freunde: Der Abzug – Aufzeichnungen
des Oberkommandierenden der Westtruppe der sowjetischen Streitkräfte. Bonn:
InnoVatio-Verlag.
Danyel, Jürgen 2015. Alltag Einheit: Ein Fall fürs Museum! Aus Politik und Zeitgeschichte,
Vol. 65, No. 33–34, pp. 26–35.
Degener, Peter 2014a. Sowjet-Marschall als Namenspatron: In Wünsdorf soll eine
Umgehungsstraße ‘Koschewoi-Ring’ heißen. Märkische Allgemeine Zeitung,
April 8.
46
Christoph Lorke
Degener, Peter 2014b. Festakt mit dem russischen Botschafter: Gedenken an Abzug
der russischen Truppen in Wünsdorf. Märkische Allgemeine Zeitung, June 7.
Die Rückkehr 2014 = Die Rückkehr des Stabschefs: Generaloberst Terentjew besuchte
das Museum ‘Roter Stern’. Märkische Allgemeine, February 7. Available at http://.
html, last accessed on August 24, 2017.
Ehlert, Hans 2013. Abgewickelt – Die Nationale Volksarmee der DDR im Vorfeld der
deutschen Einheit. In: Christian Th. Müller & Matthias Rogg (eds.) Das ist
Militärgeschichte! Probleme – Projekte – Perspektiven. Paderborn: Ferdinand
Schöningh, pp. 173–190.
Erll, Astrid 2005. Kollektives Gedächtnis und Erinnerungskulturen: Eine Einführung.
Stuttgart: Metzler.
Fischer, Oliver 2015. Aufgeheizte Stimmung in Wünsdorf. Märkische Allgemeine
Zeitung / Zossener Rundschau, November 27.
Fischer, Silvio 2000. Der frühere Militärstandort Wünsdorf – Ein Ort des Erinnerns?
In: Burkhard Assmus & Hans-Martin Hintz (eds.) Zum Umgang mit historischen
Stätten aus der Zeit des Nationalsozialismus. Berlin: Bundesministerium für
Bildung und Forschung & Deutsches Historisches Museum, pp. 129–147.
Fischer, Silvio 2010. Der frühere Militärstandort Wünsdorf: Ein Ort des Erinnerns.
Museumsblätter: Mitteilungen des Museumsverbandes Brandenburg, Heft 16,
pp. 44–45. Available at
Museumsblaetter/Heft_16/k_16_Fischer.pdf, last accessed on August 23, 2017.
Foertsch, Hartmut 1994. Der Abzug russischer Truppen aus Deutschland: ‘Keiner sagt:
Jungs, kommt bald wieder’. Europäische Sicherheit, Vol. 43, No. 3, pp. 125–127.
Für die Wünsdorfer 1991 = Für die Wünsdorfer bleibt die B 96 weiterhin gesperrt:
Gemeindevertreter gaben ihre Passierscheine zurück. Berliner Zeitung, May 18.
Furman, Alexander 1991. Zerrissen ist die russische Seele: Die Sowjetsoldaten leben
schlecht zu Hause und einsam in Deutschland. Die Zeit, January 4. Available
at, last accessed on
August 23, 2017.
Gallinat, Anselma & Kittel, Sabine 2009. Zum Umgang mit der DDR-Vergangenheit
heute: Ostdeutsche Erfahrungen, Erinnerungen und Identität. In: Thomas
Großbölting (ed.) Friedensstaat, Leseland, Sportnation? DDR-Legenden auf dem
Prüfstand. Berlin: Links, pp. 304–328.
Garnkel, Harold 1956. Conditions of Successful Degradation Ceremonies. American
Journal of Sociology, Vol. 61, No. 5, pp. 420–424..
Geenen, Elke M. 2002. Soziologie des Fremden: Ein gesellschaftstheoretischer Entwurf.
Opladen: Leske & Budrich.
Gehrke, Thilo 2008. Das Erbe der Sowjetarmee in Deutschland: eine Bild- und Text-
dokumentation. Berlin: Köster.
Gießmann, Hans-Joachim 1992. Das unliebsame Erbe: Die Auösung der Militärstruktur
der DDR. Baden-Baden: Nomos-Verlag.
Gravier, Magali 2003. Entrer dans l’administration de l’Allemagne uniée: une approche
anthropologique d’un rituel d’intégration (1990–1999). Revue française de science
politique, Vol. 53, No. 3, pp. 323–350. Available at-
francaise-de-science-politique-2003-3-page-323.htm, last accessed on August 23,
2017.
Folklore 70 47
Memories, (Re)Construction, and Appropriation of Space in Wünsdorf
Großbölting, Thomas 2010. Eine zwiespältige Bilanz: Zwanzig Jahre Aufarbeitung der
DDR-Vergangenheit im wiedervereinigten Deutschland. In: Thomas Großbölting
& Raj Kollmorgen & Sascha Möbius & Rüdiger Schmidt (eds.) Das Ende des
Kommunismus: Die Überwindung der Diktaturen in Europa und ihre Folgen.
Essen: Klartext-Verlag, pp. 61–75.
Großbölting, Thomas & Lorke, Christoph (eds.) 2017. Deutschland seit 1990: Wege in
die Vereinigungsgesellschaft. Stuttgart: Franz Steiner-Verlag.
Grothusen, Söhnke & Morais, Vânia & Stöckmann, Hagen (eds.) 2014. Generation und
Raum: Zur symbolischen Ortsbezogenheit generationeller Dynamiken. Göttingen:
Wallstein.
Habbe, Christian 1993. Jottwehdeh und Zaun drum. Spiegel Special, February 1.
Available at, last
accessed on August 23, 2017.
Hénard, Jaqueline 1993. Allerlei Ideen für die Zukunft Wünsdorfs: Hauptquartier der
Russen bei Berlin. Frankfurter Allgemeine Zeitung, July 16.
Hénard, Jacqueline 1994. Der Abschied der Alliierten aus Berlin hat schon begonnen.
Frankfurter Allgemeine Zeitung, March 19.
Hoffmann, Hans-Albert & Stoof, Siegfried 2013. Sowjetische Truppen in Deutschland und
ihr Haupt-quartier in Wünsdorf 1945–1994: Geschichte, Fakten, Hintergründe.
Berlin: Köster.
Höpp, Gerhard 1997. Muslime in der Mark: Als Kriegsgefangene und Internierte in
Wünsdorf und Zossen, 1914–1924. Berlin: Das Arabische Buch.
Jelzin-Besuch 1994 = Jelzin-Besuch: Gute Nacht! Hände hoch! Der Spiegel, May 9.
Available at, last accessed
on August 24, 2017.
Kaiser, Gerhard 1998. Sperrgebiet: Die geheimen Kommandozentralen in Wünsdorf seit
1871. Berlin: Links.
Kaiser, Gerhard & Herrmann, Bernd 2010 [1993]. Vom Sperrgebiet zur Waldstadt:
Die Geschichte der geheimen Kommandozentralen in Wünsdorf und Umgebung.
Berlin: Links.
Kampe, Hans G. 2009. Das Oberkommando der GSSD in Zossen-Wünsdorf: Zentrum
der sowjetischen/russischen Militärpolitik in der DDR. Berlin: Hoppegarten,
Projekt + Verlag Dr. Erwin Meißler.
Keller, Reiner 2016. Die symbolische Konstruktion von Räumen: Sozialkonstruktivistisch-
diskursanalytische Perspektiven. In: Gabriela B. Christmann (ed.) Zur
kommunikativen Konstruktion von Räumen: Theoretische Konzepte und empirische
Analysen. Wiesbaden: Springer, pp. 55–78. Available at
de/book/9783658008666, last accessed on August 24, 2017.
Kollmorgen, Raj 2010. Diskurse der deutschen Einheit. Aus Politik und Zeitgeschichte,
Vol. 60, Nos. 30–31, pp. 6–13. Available at
apuz/32599/deutsche-einheit, last accessed on August 24, 2017.
König, Ewald 2010. Burlakows Westgruppe und der Osten. Available at.
euractiv.de/section/wahlen-und-macht/news/burlakows-westgruppe-und-der-
osten/, last accessed on August 24, 2017.
Kowalczuk, Ilko-Sascha & Wolle, Stefan 2010. Roter Stern über Deutschland: Sowjetische
Truppen in der DDR. Berlin: Links.
48
Christoph Lorke
Kraat, Marion van der 2014. Vor Sperrgebiet zum Alltag: Wünsdorfs schwieriges Erbe.
20 Jahre nach dem Abzug der sowjetischen Streitkräfte ringt die “verbotene
Stadt” um ihre Zukunft. Märkische Allgemeine, August 29. Available at http://-
alltag-wuensdor/201408293834714.html, last accessed on September 28, 2017.
Kratz, Walther 2003. Konversion in Ostdeutschland: Die militärischen Liegenschaften
der abgezogenen Sowjetischen Streitkräfte, ihre Erforschung, Sanierung und
Umwidmung. Berlin: Trafo.
Leonhard, Nina 2016. Integration und Gedächtnis: NVA-Ofziere im vereinigten
Deutschland. Konstanz: UVK.
Leydesdorff, Selma 1996. Gender and Memory. Oxford: Oxford University Press.
Liebold, Edda 1991. Jenseits der grauen Mauer: Die Rote Armee bläst zum Abmarsch –
eine alte Garnison ordnet sich neu. Die Zeit, June 21. Available at.
zeit.de/1991/26/jenseits-der-grauen-mauer, last accessed on August 24, 2017.
Lippold, Frank E. 1991. Im Schnelldurchlauf durch Küche und Quartiere: Bundes-
verteidigungsminister Stoltenberg bei Westgruppe der sowjetischen Truppen.
Berliner Zeitung, April 27.
Locke, Stefan 2014. Niemand geht so ganz. Die Zeit, March 27. Available at.
zeit.de/2014/14/russen-soldaten-abzug-ddr, last accessed on August 24, 2017.
Lohnes, Patricia & Kucera, Katerina 1997. Konversion ehemalig militärisch genutzter
Liegenschaften in den neuen Bundesländern – am Beispiel des Militärstandortes
Wünsdorf. Diploma thesis, University of Kaiserslautern, Germany.
Loth, Wilfried 1998. Stalin’s Unwanted Child: The Soviet Union, the German Question,
and the Founding of the GDR. London: Palgrave Macmillan. DOI: 10.1007/978-
1-349-26400-1.
Maa 1991 = Maa: Schweigen oder sterben. Der Spiegel, November 4. Available at http://, last accessed on August 24, 2017.
Mallwitz, Gudrun 2015. Einstige Russen-Stadt soll 1200 Flüchtlinge aufnehmen. Berliner
Morgenpost, June 14. Available at
article142469462/Einstige-Russen-Stadt-soll-1200-Fluechtlinge-aufnehmen.
html, last accessed on August 24, 2017.
Militär 1994 = Militär: Leichen im See. Der Spiegel, January 24. Available at http://, last accessed on August 24, 2017.
Möhring, Maren 2012. Fremdes Essen: Die Geschichte der ausländischen Gastronomie
in der Bundesrepublik Deutschland. München: Oldenbourg.
Müller, Christian Th. 2005. ‘O’ Sowjetmensch! Beziehungen von sowjetischen
Streitkräften und DDR-Gesellschaft zwischen Ritual und Alltag. In: Christian
Th. Müller & Patrice G. Poutrus (eds.) Ankunft – Alltag – Ausreise: Migration
und interkulturelle Begegnung in der DDR-Gesellschaft. Köln & Weimar & Wien:
Böhlau, pp. 17–134.
Müller, Christian Th. 2011. US-Truppen und Sowjetarmee in Deutschland: Erfahrungen,
Beziehungen, Konikte im Vergleich. Paderborn: Ferdinand Schöningh.
Naumann, Klaus 1996 [1993]. NVA: Anspruch und Wirklichkeit nach ausgewählten
Dokumenten. Hamburg & Berlin & Bonn: Mittler.
Nawrocki, Joachim 1994. Abschied in Würde, Ankunft in Armut. Die Zeit, February 18.
Available at,
last accessed on August 24, 2017.
Folklore 70 49
Memories, (Re)Construction, and Appropriation of Space in Wünsdorf
Obertreis, Julia & Stephan, Anke (eds.) 2009. Erinnerungen nach der Wende: Oral
History und (post)sozialistische Gesellschaften. Essen: Klartext.
Plato, Alexander von 2009. Oral History nach politischen Systembrüchen. Erfahrungen
in Deutschland Ost und West: Einige Annäherungen. In: Julia Obertreis & Anke
Stephan (eds.) Erinnerungen nach der Wende: Oral History und (post)sozialistische
Gesellschaften. Erinnerungen. Essen: Klartext, pp. 63–82.
Pollack, Detlef & Pickel, Gert 1998. Die ostdeutsche Identität – Erbe des DDR-Sozialismus
oder Produkt der Wiedervereinigung? Die Einstellung der Ostdeutschen zu
sozialer Ungleichheit und Demokratie. Aus Politik und Zeitgeschichte, Nos. 41–42,
pp. 9–23.
Reuter, Julia 2002. Ordnungen des Anderen: Zum Problem des Eigenen in der Soziologie
des Fremden. Bielefeld: Transcript-Verlag.
Richter, Stefan 1993. Für Lenin ist es in Wünsdorf längst nach zwölf: Nach dem Abzug der
Russen steht eine 2 700-Seelen-Gemeinde vor dem Problem, einen Militärriesen
zu zivilisieren. Berliner Zeitung, September 4.
Sabrow, Martin 2012. ‘Fußnote der Geschichte’, ‘Kuscheldiktatur’ oder ‘Unrechtsstaat’?
Die Geschichte der DDR zwischen Wissenschaft, Politik und Öffentlichkeit. In:
Katrin Hammerstein & Jan Scheunemann (eds.) Die Musealisierung der DDR:
Wege, Möglichkeiten und Grenzen der Darstellung von Zeitgeschichte in stadt- und
regionalgeschichtlichen Museen. Berlin: Metropol-Verlag, pp. 13–24.
Sabrow, Martin 2014. Die DDR zwischen Geschichte und Gedächtnis. In: Christian
Ernst (ed.) Geschichte im Dialog: DDR-Zeitzeugen in Geschichtskultur und
Bildungspraxis. Schwalbach: Wochenschau-Verlag, pp. 23–37.
Satjukow, Silke 2004. Sowjetische Streitkräfte und DDR-Bevölkerung: Kursorische
Phänomenologie einer Beziehungsgeschichte. In: Hans Ehlert & Matthias Rogg
(eds.) Militär, Staat und Gesellschaft in der DDR: Forschungsfelder, Ergebnisse,
Perspektiven. Berlin: Links, pp. 225–249.
Satjukow, Silke (ed.) 2005. ‘Die Russen kommen!’ Erinnerungen an sowjetische Soldaten
1945–1992. Erfurt: Landeszentrale für politische Bildung Thüringen.
Satjukow, Silke 2008. Besatzer: ‘die Russen’ in Deutschland 1945–1994. Göttingen:
Vandenhoeck & Ruprecht.
Satjukow, Silke 2009. Die ‘Freunde’. In: Martin Sabrow (ed.) Erinnerungsorte der DDR.
München: Beck, pp. 55–67.
Schwelien, Michael 1991. Lieber reich als ruhmreich: Nichts fürchten die Sowjetsoldaten
mehr als den raschen Marschbefehl nach Hause. Die Zeit, July 5. Available at, last accessed on August 24,
2017.
Sörensen, Marie L. S. & Viejo-Rose, Dacia (eds.) 2015. War and Cultural Heritage:
Biographies of Place. Cambridge: Cambridge University Press. Available at
of_Place, last accessed on August 24, 2017.
Sowjettruppen 1990 = Sowjettruppen: Nerz und Matsch. Der Spiegel, December 24.
Available at, last accessed
on August 24, 2017.
Staatsfeiern 1994 = Staatsfeiern: Die unendliche Geschichte. Der Spiegel, March 14.
Available at, last accessed
on August 24, 2017.
Christoph Lorke
Thoß, Bruno (ed.) 2007. Die Geschichte der NVA aus der Sicht des Zeitzeugen und des
Historikers. Potsdam: Militärgeschichtliches Forschungsamt.
Tuchman, Barbara W. 1964. Can History Be Served Up Hot? New York Times, March 8.
Available at.
html, last accessed on August 24, 2017.
Tunbridge, John E. & Ashworth, Gregory J. (eds.) 1996. Dissonant Heritage: The
Management of the Past as a Resource in Conict. Chichester: Wiley.
Unsere Leute 1993 = Unsere Leute sind aggressiver: Spiegel-Interview mit Sicherheitsofzier
Anatolij Olejnikow über die GUS-Kriminalität in Deutschland. Der Spiegel, June
21. Available at, last
accessed on August 24, 2017.
Veblen, Thorstein 1899. The Theory of the Leisure Class: An Economic Study in the
Evolution of Institutions. New York: Macmillan. Available at.
columbia.edu/LCS/theoryleisureclass.pdf, last accessed on August 24, 2017.
Welzer, Harald 2002. Das kommunikative Gedächtnis: Eine Theorie der Erinnerung.
München: Beck-Verlag.
Wieschollek, Stefan 2005.. Bonn: Bonn International Center for Conversion. Available at, last accessed on August 24,
2017.
Wrochem, Oliver von 2003. Die sowjetischen Besatzer: Konstruktionen des Fremden
in der lebensgeschichtlichen Erinnerung.. 57–74.
Zwischenbilanz 1994 = Zwischenbilanz des russischen Abzugs: Verzögerungen
beim Wohnungsbau/Zunehmende Kriminalität. Frankfurter Allgemeine
Zeitung, January 26. Available at
FAZ/19940126/4/f-a-z-frankfurter-allgemeine-zeitung.html, last accessed on
August 24, 2017.
INTERNET SOURCES
Bücher und Bunkerstadt Wünsdorf. Bücherstadt-Tourismus GmbH. Available at www.
buecherstadt.com, last accessed on August 23, 2017.
Die Waldstadt Wünsdorf. Available at, last accessed
on August 23, 2017.
Militärparade in Wünsdorf 1994. Available at
watch?v=ShYQFoh2290, last accessed on August 23, 2017.
Stadt Zossen. Available at, last accessed on August 23, 2017.
VK (social medium). Available at, last accessed on
August 23, 2017.
Wunsdorf, DDR – Posledniaia osen’ / / Letzten Herbst. Available at.
com/watch?v=LEOeTtfCigo, last accessed on August 23, 2017. | https://www.researchgate.net/publication/321439992_After_the_End_of_Little_Moscow_Memories_ReConstruction_and_Appropriation_of_Space_in_Wunsdorf | CC-MAIN-2022-27 | refinedweb | 12,075 | 54.63 |
.
Azure Functions allow you to execute small snippets of code, in the cloud, without concern for cloud infrastructure. These functions are triggered by several different types of event sources, making them the building blocks of an event-driven or "serverless" architecture. They're easy to write, deploy, and connect to other cloud services to create powerful applications.
Azure Functions are also open source!
But did you know they're also... portable?
Function Apps runtimes can run in a container. And containers are managed by Kubernetes. And Kubernetes can run just about anywhere.
Even outside of Azure.
Azure Functions, in Kubernetes, running outside of Azure?
What about the event-driven nature of "serverless" applications with Azure Functions? When Function Apps are fully-managed on Azure, container instances are added (or removed) based on the number of incoming trigger events. This makes scaling to support message load nearly seamless. The Azure Functions runtime can run anywhere but what about the scale controller?
The Horizontal Pod Autoscaler in Kubernetes provides some autoscaling to support spikes in CPU intensity or some other custom application metrics. To replicate the event-based scaling that we're used to with Azure Functions, the HPA needs a little help from an open-source project called KEDA.
KEDA, or Kubernetes-based Event Driven Autoscaler, does exactly as described. It extends (but doesn't duplicate) the functionality of the Horizontal Pod Autoscaler. It supports events triggers from a large variety of sources both internal and external to large cloud providers. KEDA's scaler takes metric values from the event source and creates custom metrics to send to the HPA to support scaling based on the event load.
So, let's do it. Let's make portable Azure Functions.
Assumption is that you already have these things:
Installed Azure Function Core Tools v2
An Azure Subscription (this is for the storage queue, not for Azure Functions)
A Kubernetes cluster. It can be AKS, GKE, EKS, OpenShift, Kubernetes on-prem, whatever and wherever.
kubectlwith
current-contextset to your Kubernetes cluster.
And now we're ready to package up our function apps and take them on the road.
1. Make a directory for the Function App for your Azure Functions
mkdir functions-everywhere cd functions-everywhere
2. Initialize the Functions directory
func init . --docker
--docker flag creates a Dockerfile for a container using a base image that is based on the chosen
--worker-runtime
Choose your runtime and language.
3. Create a new Azure Function and define the trigger type
func new
Use the Azure Queue Storage Trigger
You can rename the function, or just leave the default for this demo.
4. Create an Azure storage account and queue
Create storage account in the portal
Go to your new storage account and under the Queue service heading, select Queues and create a new queue. Take note of the queue name.
5. Update your function with the storage account names
Get the connection string for your new storage account:
az storage account show-connection-string --name <storage-name> --query connectionString
Edit the
local.settings.json file in your function app, which contains the local debug connection string settings. Replace the
{AzureWebJobsStorage} with the connection string value:
local.settings.json
{ "IsEncrypted": false, "Values": { "FUNCTIONS_WORKER_RUNTIME": "node", "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=yourstorageaccount;AccountKey=shhhh===" } }
Now, open the
function.json file and set the
connection setting value to
AzureWebJobsStorage. This tells the function to pull the connection string from the
AzureWebJobsStorage key we set above.
function.json
{ "bindings": [ { "name": "myQueueItem", "type": "queueTrigger", "direction": "in", "queueName": "<your-queue-name>", "connection": "AzureWebJobsStorage" } ] }
6. Enable the storage queue bundle for the function runtime
Ensure that
host.json contains the extensions bundle to allow Azure Storage Queues binding support.
host.json
{ "version": "2.0", "extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle", "version": "[1.*, 2.0.0)" } }
7. Install KEDA in your cluster
func kubernetes install --namespace keda
Confirm that KEDA is installed:
kubectl get customresourcedefinition NAME AGE scaledobjects.keda.k8s.io 2h
8. Deploy your Function App to Kubernetes
Note: This assumes that you have a Docker account and you've already used
docker login to sign-in through the cli.
func kubernetes deploy --name <function-app-name-lowercase> --registry <your-docker-registry>
This command build the docker container, push it to the specified registry, generate a YAML file, and deploy to your Kubernetes cluster.
If you'd like to save a copy of the YAML deploy file, use the
dry-run flag:
func kubernetes deploy --name <function-app-name-lowercase> --registry <your-docker-registry> --dry-run > func-deployment.yml
9. See your function scaling as messages are added
To add a message to your storage queue, go to your Azure Storage account in the Azure Portal and open the Storage Explorer. Select your storage queue and add a new message.
You should initially see
0 pods since the function has not started scaling yet.
kubectl get deploy
Note: By default, the polling interval set is 30 seconds on the
ScaledObject resource and the cooldown period is 300 seconds.
kubectl get pods -w
After all messages are consumed by the function app, and the cooldown period has elapsed, the last pod should scale back down to
0.
Congrats! You are now using portable Azure Functions.
Discussion (2)
Loved this article! Thank you. We have a large library of azure functions, that is experiencing exponential growth. My position has always been to lean into the advantages of your cloud provider, and now the existence of a KEDA portability model makes it easy to dismiss any of the "cloud vendor lock in" boogeyman arguments from other architects, sales teams. With KEDA I can confidently explain that our library of functions is portable to Kubernetes. And while I'm not interested in managing Kubernetes soon, the option for portability now, enables our continued bold growth into Azure Functions. Thanks.
My mind is blown! | https://practicaldev-herokuapp-com.global.ssl.fastly.net/lynnaloo/the-curious-developer-s-guide-to-portable-azure-functions-109m | CC-MAIN-2021-49 | refinedweb | 981 | 56.55 |
On Tuesday May 9, Jonathan E Brassow wrote: > It looks reasonable at first glance, but I have to think about it > more. Do you have a test case that I can reproduce with? Not really... The customer does, but I don't have precise details. I think is several drives, with several lvs across them, and active NFS activity on these. In this situation 'pvmove' everything off a device which contains a non-initial section of at least one of the lvs. The active traffic on the filesystem in an important part of the test case I think. I have since found that not only didn't the original patch compile, but it wasn't complete either. The mirror_map function sets map_context->ll to a value which is effectively the same as bio_to_region. As I changed bio_to_region, I need to change that assignment to. This is the current patch. Note that the declaration of two structures needed to be moved up for it to compile. NeilBrown Signed-off-by: Neil Brown <neilb suse de> ### Diffstat output ./drivers/md/dm-raid1.c | 63 ++++++++++++++++++++++++------------------------ 1 file changed, 32 insertions(+), 31 deletions(-) diff ./drivers/md/dm-raid1.c~current~ ./drivers/md/dm-raid1.c --- ./drivers/md/dm-raid1.c~current~ 2006-05-12 14:28:37.000000000 +1000 +++ ./drivers/md/dm-raid1.c 2006-05-19 11:15:58.000000000 +1000 @@ -106,12 +106,42 @@ struct region { struct bio_list delayed_bios; }; + +/*----------------------------------------------------------------- + *]; +}; + /* * Conversion fns */ static inline region_t bio_to_region(struct region_hash *rh, struct bio *bio) { - return bio->bi_sector >> rh->region_shift; + return (bio->bi_sector - rh->ms->ti->begin) >> rh->region_shift; } static inline sector_t region_to_sector(struct region_hash *rh, region_t region) @@ -539,35 +569,6 @@ static void rh_start_recovery(struct reg wake(); } -/*----------------------------------------------------------------- - *]; -}; - /* * Every mirror should look like this one. */ @@ -1113,7 +1114,7 @@ static int mirror_map(struct dm_target * struct mirror *m; struct mirror_set *ms = ti->private; - map_context->ll = bio->bi_sector >> ms->rh.region_shift; + map_context->ll = bio_to_region(&ms->rh, bio); if (rw == WRITE) { queue_bio(ms, bio, rw); | http://www.redhat.com/archives/dm-devel/2006-May/msg00083.html | CC-MAIN-2013-20 | refinedweb | 326 | 56.76 |
Introduction: Post-Box Synthesizer
In this project, take an old post-box and an Arduino, to create an incredibly functional monophonic synthesizer. This synthesizer includes such features as:
- Dual oscillators
- 6 wave forms (Sin, Triangle, Left Saw, Right Saw, Square, Flat)
- Noise feature on the main oscillator
- Adjustable mixing of the two oscillators
- Adjustable cents, semitone, and octave for the second oscillator
- LFO from 0 to 10 Hz
- Routing the LFO to semitone, cents and octave control of the second oscillator
- 20 note arpeggio feature with adjustable speed from 0 to 50Hz.
- 5 banks for saving presets
- Internal speaker and 3.5mm aux output with volume control
- LCD
- MIDI input
- UART input
Parts
Filter:
- 2X 4.7mH inductor
- 2X 47nF capacitor
- 1X 100nF capacitor
- 2X 270 ohm resistor
- PC board
MIDI Input:
- 1X Female MIDI connector
- 1X 6N138 opto-isolator
- 1X 220 ohm resistor
- 1X 270 ohm resistor
- 1X 1N194 diode
Audio Output:
- 1X 3.5mm Female audio jack
- 1X 8 ohm speaker
- 1X SPDT switch
- 1X Amplifier (For this I used a ready made breakout from SparkFun)
- 1X 10k ohm potentiometer
User Input/Output:
- 1X Serial Enabled LCD (20x4 Character LCD from SparkFun)
- 6X Tactile Switches
- 4X 10k ohm potentiometers
Misc:
- 1X DC Barrel Jack
- 1X 7805 voltage regulator
- 1X 5 pin male header
- 1X 10k ohm resistor
- 1X push switch (for the reset)
Step 1: Synthesis Method
The synthesis method used in this project is called DDS, direct digital synthesis. With this method, a digital signal, 1's and 0's, can be turned into an analog signal without the addition of a DAC, digital to analog converter. In fact with DDS, there are very few extra components are actually required; only a low pass filter.
The method works by creating a PWM, pulse width modulation, signal and modulating the duty cycle, the amount of time the signal stays on, in proportion to the amplitude of a wave form at a given time. So in the code there is a wave table of one period for various wave forms. The program then steps through the table at different speeds to create different frequencies. The output of the PWM is shown in the image below. As the duty cycle increase, the amplitude of the output wave increases. The filter removes the carrier frequency, the square wave, and leave the clean wave form from the table.
Step 2: The Filter
There are a couple ways to create a filter. You can make an RC or LC filter, as long as it's built in a lowpass configuration with a 12.5 kHz cutoff frequency. I used a 2nd Order Chebyshef filter which removes the carrier frequency extremely well, and leaves a smooth sound for the output signal. The schematic is fairly simple, even though it requires inductors, and only needs 7 components.
First I tried to just solder the leads together, but then I used a PC board to make is easier, and look a little more professional. It makes connecting the input and output easier and keeps all the components for the filter nice and segmented.
Step 3: PWM Code
The first step is to create the wave table. The table is saved into the Atmega328 RAM using the pgmspace library. Each wave table has 256 values from 0 to 255, so each value can be mapped to a byte data type. The sine wave definition is shown below. Each value is the amplitude of the wave at a specific time. This represents one period of the wave. The higher the frequency that is played, the faster the program steps through the table.
#include "avr/pgmspace.h"
//Waveform definitions
PROGMEM prog_uchar waveTable[] = {
//sine wave,
};
To get the Arduino to create the PWM signal, the timer has to be properly initialized. For this I used the C method to setup the timer so that I can better control it. The timer is created so that we have a 32 kHz sampling rate for our audio and the output of the signal is put on 11 of the Arduino. I also enable an overflow interrupt, so that when the timer value goes over 255, the interrupt triggers.);
}
This is the overflow interrupt. When the interrupt occurs I calculate the next value that should be pulled from the wave table and write that value to pin 11. A variable called the phase accumulator keeps track of where the program is in the table.
ISR(TIMER2_OVF_vect)
{(waveTable + icnt + (waveSelect << 8));
if(icnt1++ == 125) { // increment variable c4ms all 4 milliseconds
c4ms++;
icnt1=0;
}
}
That value is calculated using a tuning word which is found by dividing the frequency you want by a reference clock, in this case the 32kHz reference clock.
const double refclk=31376.6; // measured
tword_m=pow(2,32)*dfreq/refclk; // calulate DDS new tuning word
Step 4: Note Effects
The note values are stored to an array. You can find the values here:
double keyFreq[] = {
27.5, 29.1352, 30.8677, //Octave 0
32.7032, 34.6478, 36.7081, 38.8909, 41.2034, 43.6535, 46.2493, 48.9994, 51.9131, 55, 58.2075, 61.7354, //Octave 1
65.4064, 69.2957, 73.4162, 77.7817, 82.4069, 87.3071, 92.4986, 97.9989, 103.826, 110, 116.541, 123.471, //Octave 2
130.813, 138.591, 146.832, 155.563, 164.814, 174.614, 184.997, 195.998, 207.652, 220, 233.082, 246.942, //Octave 3
261.626, 277.183, 293.665, 311.127, 329.628, 349.228, 369.994, 394.995, 415.305, 440, 466.164, 493.883, //Octave 4
523.251, 554.365, 587.330, 622.254, 659.255, 698.456, 739.989, 783.991, 830.609, 880, 932.328, 987.767, //Octave 5
1406.50, 1108.73, 1174.66, 1244.51, 1318.51, 1396.91, 1479.98, 1567.98, 1661.22, 1760, 1864.66, 1975.53, //Octave 6
2093.00, 2217.46, 2349.32, 2489.02, 2637.02, 2793.83, 2959.96, 3135.96, 3322.44, 3520, 3729.31, 3951.07, //Octave 7
4186.01 //Octave 8
};
So notes sent from the MIDI or over UART have an appropriate value, instead of having to be calculated on the fly.
The second oscillator can be detuned from the first in 3 ways.
1. Is using a system called cents, which are fractions of a note. Calculated like this:
centMultiplier = pow(2.0,(cents + dC)/1200.0);
That value is then multiplied to the note frequency.
2. Is using a system called semi, which are full note shifts from -1 to +1 octave
3. Finally by full octaves from -3 to +3
The two oscillators are then mixed by using an adjustable weight.
byte osc1 = ((pgm_read_byte(waveTable + icnt1 + (osc1WaveForm<<8))*weight1)/MAX_WEIGHT); //first osc
byte osc2 = ((pgm_read_byte(waveTable + icnt2 + (osc2WaveForm<<8))*weight2)/MAX_WEIGHT); //second osc
The two values are then summed. The weight value goes from 0 to 16. So you can have entirely the first oscillator, entirely the second, or some mixture in between.
The LFO adjusts the detuning of the second oscillator by adjusting the values in proportion to the amplitude of the wave.So it works in a similar way to the first 2 oscillators but instead of creating sound, it tweaks values.
Step 5: Arpeggiator
The arpeggiator is a system that creates an arpeggio based upon the notes played when in arpeggio mode. When arpeggio mode begins, you play a note. That note becomes the root key. Every key hit afterward is saved to an array, of a max of 20 notes. The value stored to the array is the difference between the note played and the root key.
if(appMode) //add notes to the app array
{
if(appMaxCount == 0) //if just starting app mode
{
rootKey = note - MIDI_OFFSET; //get new root key, all notes in array are relative to this value
}
else
{
app[appMaxCount - 1] = noteSelect - rootKey; //calculate relative note
}
appMaxCount++; //increment number of notes in app array
if(appMaxCount > MAX_APP_NOTES)
{
appMode = false;
appUpdate();
}
}
When playing, the arpeggio array is stepped through at a speed depending on the value from one of the control potentiometers. The value in the array is added to the note being played.
noteSelect = rootKey + app[appCount];
appTimer = millisecs;
appCount++; //move through the array
if(appCount >= appMaxCount)
{
appCount = 0;
}
Step 6: Control
To start, wire the MIDI connector according to the schematic. It's important to note that the MIDI connector is probably upside down in the schematic, make note before you start soldering. The point of the opto-isolator is to keep the signal from the MIDI controller from damaging the control board. The output from the opto-isolator is connected to the serial input, RX, pin on the Arduino.
The MIDI in is serial at 32150 baud. The system is 3 bytes. The first byte is whether or not the note is on or off. The second is the note value and the third is the velocity, but I ignore that.
I handle it with a serial event.
void serialEvent()
{
if(Serial.available() >= 3) //messages in 3 byte packets
{
byte cmd = Serial.read();
byte note = Serial.read();
byte vel = Serial.read();
if(cmd >= 0x80 && cmd <= 0x8F && (rootKey == note - MIDI_OFFSET || noteSelect == note - MIDI_OFFSET)) //note off
{
notePlaying = false;
}
else if(cmd >= 0x90 && cmd <= 0x9F) //note on
{
noteSelect = note - MIDI_OFFSET;
notePlaying = true;
}
}
}
Because the synthesizer is mono, I connected the left and right channels of the audio jack together.
The SPDT switch is used to switch between audio output to the jack or the speaker. The center pin is where the signal from the amplifier is connected. The right pin goes to the audio jack and the left to the speaker. The ground of the audio jack, the center pin, is connected to one of the pins of the speaker, then both are connected to ground.
Step 7: User Control
The user control is composed of 3 parts, the LCD, the switches, and the potentiometers.
If you look at the schematic, all the switches are connected via a common ground. Luckily I had a switch array from an old computer monitor that already had the right number of switches, all connected by common ground. It even had an LED, which isn't necessary but I included it anyway. Without this array each switch would have had to be connected together manually. One side of all the switches is connected to ground, then each switch's other side is connected to a pin on the Arduino. Each of the pins on the Arduino then has an internal pull-up enabled.
The LCD is serial enabled, but because the MIDI in takes the main serial connection, the LCD requires a software serial connection. The software serial is enabled on pin 13, so that is connected to the receiving pin on the LCD. The LCD is also connected to the power and ground on the main board.
The potentiometers are connected to the Arduino's analog input pins 0 through 3. The Arduino's AREF pin is connected to the 5 volts.
In order to avoid sacrificing an entire Arduino board for this project, I programmed the chip first, then remove it to a separate board with a separate crystal. This requires a PC board for the chip and crystal. Now this becomes the control board, having rails for power and ground and all the pins broken out.
Step 8: Putting It Together
Next I wire up the amplifier. I connect the power to the main power on the control board. Then I wire the volume potentiometer to the three spaces on the amp. The nice thing about the breakout board is all of the connections are appropriately labeled. I take the volume potentiometer and connect it through the left side of the box. The output from the filter is connected to the input on the amplifier. The output of the amplifier is connected to the switch. Only the positive output from the output on the amplifier is connected to the middle pin on the audio switch.
I added an external reset switch just in case, next to the volume control potentiometer. It helps when reprogramming the board, or if the synthesizer get stuck.
Wire up the power supply. I used a DC barrel jack and a 7805 voltage regulator. The back of the DC barrel is the positive, so by the schematic, that is connected to the input pin on the 7805. The control board and barrel jack share a common ground. The output from the voltage regulator is then run to the 5 volt line on the control board. The DC jack is glued to the back of the box. I only recommend putting in 9V to the jack, maximum.
The FTDI connector is 5 male header pins connected as shown in the schematic. This allows for serial communication to the synthesizer if you don't have a MIDI controller.
Using the speaker, I marked a space. Then using a compass, I created concentric circles to drill holes for the sound to come through.
Once everything is properly wired, use that hot glue again to secure everything down. I put the MIDI In/Audio Out in the upper right hand corner, the control board in the upper left, speaker lower right, and the filter and amplifier in toward the center.
Add a little paint, and that's it.
Now a little demo...
Attachments
Finalist in the
Musical Instruments Contest
Participated in the
Arduino Contest
Be the First to Share
Recommendations
3 Comments
5 years ago on Introduction
Could anyone please post the complete code so I can Learn how to mount it by assimilating the explanation to the result? That would be really helpful for beginners like myself.. Thanks as bunch this is a great project!
7 years ago on Introduction
You're getting some really great sounds here. Especially when you detune the oscillators. Sounds very comparable to an analog. Good job!
7 years ago on Step 8
Now that is neat a cardboard proto box. | https://www.instructables.com/Post-Box-Synthesizer/ | CC-MAIN-2021-31 | refinedweb | 2,335 | 63.8 |
Hello AskPerf! My name is Jeffrey Worline and I am a Support Escalation Engineer on the Performance team in Texas. We’ve done a number of posts in the past about WMI, and different WMI tools, and today we’re going to take a look at a powerful tool that you can use. If you would like a little help with leveraging the power of WMI but do not know how to write a script or are scripting challenged as I am, then what is needed is a tool that will write one for you. There is such a tool and it is called Scriptomatic 2.0, which is an HTA (HTML Application) that can write rudimentary WMI scripts. The tool is remarkably easy to use and flexible with different scripting languages and output format options. Let’s get started …
Scriptomatic 2.0 allows you to work with all WMI Namespaces, and not just Root\Cimv2. This is especially useful when working with custom applications, providers and namespaces. In addition, you can create scripts to run against multiple computers by entering the system names in a delimited list format or by loading the names from an input text file. Scriptomatic also provides some flexibility with respect to the scripting language you need your script generated in. You have the choice of having the script created in VBScript, Perl, JScript or Python. In addition to the different scripting languages that you can use, you also have the option to generate output in different formats including HTML, Excel and XML.
OK – enough about what the tool can do, let’s see it in action. Once you’ve downloaded Scriptomatic from the Microsoft website, double-click the ScriptomaticV2.HTA file. In the steps below, you’ll notice that I’m running Scriptomatic on a Windows XP system. If you’re using Windows Vista or Windows Server 2008 and UAC is enabled, check out the note at the end of this post about running Scriptomatic on those operating systems. Getting back to our demo, once Scriptomatic is lanched, you should see the following screen:
By default, all of the classes under root\CIMv2 are loaded. To choose a different namespace, click on the WMI Namespace drop down box and you can select from the namespaces available on the local machine.
If you want to get the WMI Namespaces from a remote computer to use instead of the local computer’s namespaces, just click on the “WMI Source” button. Once the window opens, put in the name of the remote computer. Remember that you will need to have permissions to access WMI on the remote system.
Select the namespace, and the classes found in that namespace are populated into the WMI Class dropdown box. Select a class from the WMI Class dropdown and Scriptomatic 2.0 auto-generates a script that returns the information for all the properties of that class. For example, I chose Win32_Processor:
Click on the Run button and unless you have chosen a different output format, you should get a command window with the output of the script file:
Let’s take a look at the output of this script in HTML format:
If you want to save your script, ensure that you do it prior to generating a new script. Scriptomatic re-uses its temp files to store the script, so if you execute a new script it overwrites the data from the old script in the temp files. To save the script, click the “Save” button and provide the full path to the location to save the script. Remember to add the appropriate extension for your script, .vbs for VBScript, .pl for Perl, .py for Python and .js for JScript.
Now, let’s take a quick look at running a script against multiple target machines. There are several different ways to specify that you want to run your script against a group of computers. You can enter the computer names in the “Target Computers” box – as a comma-separated list (or by putting each computer on a separate line) or from an input file.
To load the computer names from a file, click the “Load from File” button and navigate to the text file that has the list of computers. Once you have the names loaded, whether via the input file or by typing them in, click on the “Update Script” button to get the computer names added to the script. Again, you’ll need to ensure that you have appropriate permissions to run WMI queries against the remote systems.
Once you’ve finished working with Scriptomatic, you can either click on the “Quit” button or just close the console window. If you click on the “Quit” button, the Scriptomatic temp files are deleted as part of the application exit process. If you close the console window, the temp files are not deleted.
OK – now, let’s talk about running Scriptomatic 2.0 on Windows Vista and Windows Server 2008 systems. If you try to launch the Scriptomaticv2.HTA file by double-clicking on it, you will most likely encounter an Error 80041003 message. ScriptomaticV2.HTA requires Administrative credentials to run. If you recall from our Basics of UAC post a couple of years ago, when UAC is enabled, all users (except the built-in Administrator account) run as Standard Users. Thus, you have to run ScriptomaticV2.HTA in an elevated context. To do this, launch an elevated command prompt and run the following command: MSHTA.EXE C:\<path to Scriptomatic>\ScriptomaticV2.HTA,
With that, we’ve come to the end of this post. If you’ve never used Scriptomatic before, I hope you find it as useful as I have. A quick word on support – there is no official support for Scriptomatic 2.0. However, as indicated in the Scriptomatic 2.0 Readme, if you do run into issues with the tool, send an email to scripter@microsoft.com.
Additional Resources:
- Download Scriptomatic 2.0
- Scriptomatic 2.0: Readme
- “Hey, Scripting Guy!” downloadable Archive (August 2004 – September 2007)
– Jeffrey Worline
Join the conversationAdd Comment
I can’t download this tool… Abandonware?
Someone who can share it…!?
…or another alternative? | https://blogs.technet.microsoft.com/askperf/2009/02/17/two-minute-drill-scriptomatic-2-0/ | CC-MAIN-2016-36 | refinedweb | 1,031 | 72.46 |
Workflow 7: Retrieving data from remote archives#
This tutorial covers the retrieval of data from the ICOS Carbon Portal and the CEDA archives.
import os import tempfile tmp_dir = tempfile.TemporaryDirectory() os.environ["OPENGHG_PATH"] = tmp_dir.name # temporary directory
ICOS#
It’s easy to retrieve atmospheric gas measurements from the ICOS Carbon Portal using OpenGHG. To do so we’ll use the
retrieve_icos function from
openghg.client.
Checking available data#
You can find the stations available in ICOS using their map interface. Click on a site to see it’s information, then use it’s three letter site code to retrieve data. You can also use the search page to find available data at a given site.
Using
retrieve_icos#
First we’ll import
retrieve_icos from the
client submodule, then we’ll retrieve some data from Weybourne (WAO). The function will first check for any data from WAO already stored in the object store, if any is found it is returned, otherwise it’ll retrieve the data from the ICOS Carbon Portal, this may take a bit longer.
from openghg.client import retrieve_icos
wao_data = retrieve_icos(site="WAO", species="ch4")
Now we can inspect
wao_data, an
ObsData object to see what was retrieved.
wao_data
We can see that we’ve retrieved
ch4 data that covers 2013-04-01 - 2015-07-31. Quite a lot of metadata is saved during the retrieval process, including where the data was retrieved from (
dobj_pid in the metadata), the instruments and their associated metadata and a citation string.
You can see more information about the instruments by going to the link in the
instrument_data section of the metadata
metadata = wao_data.metadata instrument_data = metadata["instrument_data"] citation_string = metadata["citation_string"]
Here we get the instrument name and a link to the instrument data on the ICOS Carbon Portal.
instrument_data
And we can easily get the citation string for the data
citation_string
Viewing the data#
As with any
ObsData object we can quickly plot it to have a look.
NOTE: the plot created below may not show up on the online documentation version of this notebook.
wao_data.plot_timeseries()
Data levels#
Data available on the ICOS Carbon Portal is made available under three different levels (see docs).
- Data level 1: Near Real Time Data (NRT) or Internal Work data (IW). - Data level 2: The final quality checked ICOS RI data set, published by the CFs, to be distributed through the Carbon Portal. This level is the ICOS-data product and free available for users. - Data level 3: All kinds of elaborated products by scientific communities that rely on ICOS data products are called Level 3 data.
By default level 2 data is retrieved but this can be changed by passing
data_level to
retrieve_icos. Below we’ll retrieve some more recent data from WAO.
wao_data_level1 = retrieve_icos(site="WAO", species="CH4", data_level=1)
wao_data_level1
You can see that we’ve now got data from 2021-07-01 - 2022-04-24. The ability to retrieve different level data has been added for convenienve, choose the best option for your workflow.
NOTE: level 1 data may not have been quality checked.
wao_data_level1.plot_timeseries(title="WAO - Level 1 data")
Forcing retrieval#
As ICOS data is cached by OpenGHG you may sometimes need to force a retrieval from the ICOS Carbon Portal.
If you retrieve data using
retrieve_icos and notice that it does not return the most up to date data (compare the dates with those on the portal) you can force a retrieval using
force_retrieval.
new_data = retrieve_icos(site="WAO", species="CH4", data_level=1, force_retrieval=True)
Here you may notice we get a message telling us there is no new data to process, if you force a retrieval and there is no newer data you’ll see this message.
CEDA#
To retrieve data from CEDA you can use the
retrieve_ceda function from
openghg.client. This lets you pull down data from CEDA, process it and store it in the object store. Once the data has been stored successive calls will retrieve the data from the object store.
NOTE: For the moment only surface observations can be retrieved and it is expected that these are already in a NetCDF file. If you find a file that can’t be processed by the function please open an issue on GitHub and we’ll do our best to add support that file type.
To pull data from CEDA you’ll first need to find the URL of the data. To do this use the CEDA data browser and copy the link to the file (right click on the download button and click copy link / copy link address). You can then pass that URL to
retrieve_ceda, it will then download the data, do some standardisation and checks and store it in the object store.
We don’t currently support downloading restricted data that requires a login to access. If you’d find this useful please open an issue at the link given above.
Now we’re ready to retrieve the data.
from openghg.client import retrieve_ceda
url = ""
hfd_data = retrieve_ceda(url=url)
Now we’ve got the data, we can use it as any other
ObsData object, using
data and
metadata.
hfd_data.plot_timeseries()
Retrieving a second time#
The second time we (or another use) retrieves the data it will be pulled from the object store, this should be faster than retrieving from CEDA. To get the same data again use the
site,
species and
inlet arguments.
hfd_data2 = retrieve_ceda(site="hfd", species="co2")
hfd_data2
Cleanup the temporary object store#
tmp_dir.cleanup() | https://docs.openghg.org/tutorials/local/7_Retrieving_remote_data.html | CC-MAIN-2022-33 | refinedweb | 915 | 60.24 |
📅 2022-May-24 ⬩ ✍️ Ashwin Nanjappa ⬩ 🏷️ cheatsheet, ruby ⬩ 📚 Archive
Below are notes I took while learning Ruby, written from the point of view of a Python programmer.
$ irb irb(main):001:0> x = 10 irb(main):002:0> x => 10
$ ruby foobar.rb
$ sudo apt install ruby2.7-doc $ ri <whatever you want> $ ri print
print("Hello!\n") print("Value of x is: ", x)
Note that unlike Python, print does not add spaces or newlines automatically. Spaces and newlines have to be explicitly specified. WYSIWYG strictly.
require "foo/bar" require_relative "where/is/it"
.rbRuby file with its own named
moduleblock inside it, inside whose namespace we can have variables, methods or classes.
# foobar.rb module FooBar # Vars, methods, classes go here def foobar(x, y) end end # Accessed from another file as ... require_relative "foobar" FooBar.foobar(10, 20)
initializeis the ctor. Instance variables have the
@prefix and cannot be accessed outside the class.
class Foobar def initialize(x, y) @x = x @y = y end def another_func() end end
attr_readerdefinitions in the class:
class Foobar attr_reader :x attr_reader :y def initialize(x, y) @x = x @y = y end end f = Foobar(10, 20) f.x # Works!
selfto refer one’s own methods in the class:
class Foobar def func1() end def func2() self.func1() end end
x = [] y = ["foo", 1, nil] x[0] = y[0] x.append(y[2])
alist = [1, 2, 3] for x in alist print(x, "\n") end
empty_hash = {} ahash = {"x": 1, "y": 2}
ahash.each do |k, v| print(k, v) end
if ahash.key?("foobar") print("Foobar present in hash") end
"123".to_f 123.to_f
> x = 3.14 > x.to_i => 3
> x = 3.99 > x.round => 4
"x" == "x" # Always true "x".equal?("x") # false
:colon operator to create a symbol, a trick to have a single copy of literal strings:
:foo.equal?(:foo) # true
Object, so available in every Ruby object. | https://codeyarns.com/tech/2022-05-24-ruby-cheatsheet.html | CC-MAIN-2022-40 | refinedweb | 315 | 68.16 |
posix_trace_get_filter, posix_trace_set_filter - retrieve and set the filter of an initialized trace stream (TRACING)
[TRC TEF]
#include <trace.h>#include <trace.h>
int posix_trace_get_filter(trace_id_t trid, trace_event_set_t *set);
int posix_trace_set_filter(trace_id_t trid,
const trace_event_set_t *set, int how);
The posix_trace_get_filter() function shall retrieve, into the argument pointed to by set, the actual trace event filter from the trace stream specified by trid.
The posix_trace_set_filter() function shall change the set of filtered trace event types after a trace stream identified by the trid argument is created. This function may be called prior to starting the trace stream, or while the trace stream is active. By default, if no call is made to posix_trace_set_filter(), all trace events shall be recorded (that is, none of the trace event types are filtered out).
If this function is called while the trace is in progress, a special system trace event, POSIX_TRACE_FILTER, shall be recorded in the trace indicating both the old and the new sets of filtered trace event types (see Trace and Trace Event Filter Options: System Trace Events and Trace, Trace Log, and Trace Event Filter Options: System Trace Events ).
If the posix_trace_set_filter() function is interrupted by a signal, an error shall be returned and the filter shall not be changed. In this case, the state of the trace stream shall not be changed.
The value of the argument how indicates the manner in which the set is to be changed and shall have one of the following values, as defined in the <trace.h> header:
-.
None.
Trace and Trace Event Filter Options: System Trace Events, Trace, Trace Log, and Trace Event Filter Options: System Trace Events, posix_trace_eventset_add(), the Base Definitions volume of IEEE Std 1003.1-2001, <trace.h>
First released in Issue 6. Derived from IEEE Std 1003.1q-2000.
IEEE PASC Interpretation 1003.1 #123 is applied. | http://pubs.opengroup.org/onlinepubs/009695399/functions/posix_trace_set_filter.html | CC-MAIN-2013-20 | refinedweb | 305 | 67.18 |
Latest members | More ...
Introduction and Goal
Still new to LINQ below are some real quick starters
Deep dive in to how LINQ query works
Steps involved to write compiled LINQ queries
Performance comparison
Analyzing the results
Hardware and software configuration used for test conduction
Source code. Watch my 500 videos on WCF, WPF, LINQ, Design patterns, WWF, Silverlight, UML @
Are you a complete newbie LINQ FAQ part II :- Want to define 1-* and *-1 using LINQ Issues of multiple trips handled in this article Do not know how to call stored procedures using LINQ
Before we get in to how we can improve LINQ query performance, let’s first try to understand what are the various steps involved in a LINQ query execution. All LINQ queries are first converted to SQL statements. This conversion also involves checking of LINQ query syntaxes and translating this query to SQL.Below is a simple LINQ query which selects data from a customer table. This LINQ query is then transformed in to necessary SQL statements by the LINQ engine.
The checking of syntaxes and generating SQL query accordingly is a bit of tedious job. This task is performed every time we fire LINQ query. So if we can cache the LINQ query plan we can execute much faster.LINQ has provided something called as compiled LINQ queries. In compiled LINQ queries the plan is cached in a static class. As we all know that static class is global cache. So LINQ uses the query plan from the static class object rather than building the preparing the query plan from scratch.
Figure: - LINQ Query Caching
In all there are 4 steps which need to be performed right from the time LINQ queries are built till they are fired. By using compiled LINQ queries the 4 steps are reduced to 2 steps.
Figure: - Query plan bypasses many steps
The first thing is to import Data.Linq namespace.
Import namespace using System.Data.Linq;
The syntax to write compiled queries is a bit cryptic. So let us break those syntaxes in small pieces and then we will try to see how the complete syntax looks like. To execute a compiled function we need to write function to pointer. This function should be static so that LINQ engine can use the query plan stored in those static class objects.Below is how we define the function it starts with ‘public static’ stating that this function is static. Then we use the ‘Func’ keyword to define the input parameters and output parameters. Below is how the parameter sequence needs to be defined:-• The first parameter should be a data context. So we have defined the data type as ‘DataContext’.• Followed by 1 or many input parameters currently we have only one i.e. customer code so we have defined the second parameter data type as string.• Once we are done with all input parameters we need to define the data type of the output. Currently we have defined the output data type as ‘IQueryable’.We have given a name to this delegate function as ‘getCustomers’.
public static Func<DataContext, string, IQueryable<clsCustomerEntity>> getCustomers
We need to call method ‘Compiled’ of static class ‘CompiledQuery’ with the datacontext object and necessary define input parameters followed by the LINQ query. For the below snippet we have not specified the LINQ query to minimize complications.
CompiledQuery.Compile((DataContext db, string strCustCode)=> Your LINQ Query );
So now. So we have taken the above defined function and wrapped that function in a static class ‘ is returning data type as ‘IEnumerable’. So we have to define an ‘IEnumerable’ customer entity which will be flourished through the ‘getCustomers’ delegate function. We can loop through the customer entity using ‘clsCustomerEntity’ class. also with the article. Below is a simple screen shot of the same :-
So what we have done in this project is we have executed LINQ SQL without query compilation and with query compilation. We have recorded the time using ‘System.Diagnostic.StopWatch’ class. So here’s how the performance recording has taken place. We start the stop watch, run the LINQ SQL without compile and then we stop the watch and record the timings. In the same way we have recorded the performance LINQ query with compilation.
So we create the data context object and start the stop watch.
System.Diagnostics.Stopwatch objStopWatch = new System.Diagnostics.Stopwatch();
DataContext objContext = new DataContext(strConnectionString);
objStopWatch.Start();
We run the LINQ query with out compilation , after execution stop the watch and record the time differences.
var MyQuery = from objCustomer in objContext.GetTable<clsCustomerEntity>()where objCustomer.CustomerCode == txtCustomerCode.Textselect again start the stop watch, run LINQ query with compilation and record the time taken for the same. time of execution during first time and as well as subsequent times. At least 8 recordings are needed so that any kinds of .NET run time performance are averaged out.There are two important points we can conclude from the experiment:-• We need to excuse the first reading as there can be lot of.NET framework object initialization. It can lead to lot of wrong conclusions as there is lot of noise associated in the first run.• The subsequent readings have the real meat difference. The average difference between then is 5 times. In other words LINQ query executed using no compilation was 5 MS slower than compiled LINQ queries.
Below is a graphical representation of the same you can see how compiled queries have better performance than non-compiled ones.
• Web application and database application where on different boxes.• Web application was running on windows XP using simple personal web server provided by VS 2008 (sorry for that guys but did not have any options at that moment). Web application PC hardware configuration was 2 GB RAM, P4, 80 GB hard disk.• Database was SQL 2005 on windows 2003 server with 2 GB RAM , P4 , 80 GB hard disk
You can download the Source Code from top of this article.
If you like this article, subscribe to our RSS Feed. You can also subscribe via email to our Interview Questions, Codes and Forums section. | http://www.dotnetfunda.com/articles/article469-how-to-improve-your-linq-query-performance-by-5-x-times.aspx | CC-MAIN-2013-20 | refinedweb | 1,022 | 64.41 |
Get the highlights in your inbox every week.
Grafana Tempo is a new open source, high-volume distributed tracing backend.
Opensource.com
Subscribe now
Grafana's Tempo is an easy-to-use, high-scale, distributed tracing backend from Grafana Labs. Tempo has integrations with Grafana, Prometheus, and Loki and requires only object storage to operate, making it cost-efficient and easy to operate.
I've been involved with this open source project since its inception, so I'll go over some of the basics about Tempo and show why the cloud-native community has taken notice of it.
Distributed tracing
It's common to want to gather telemetry on requests made to an application. But in the modern server world, a single application is regularly split across many microservices, potentially running on several different nodes.
Distributed tracing is a way to get fine-grained information about the performance of an application that may consist of discreet services. It provides a consolidated view of the request's lifecycle as it passes through an application. Tempo's distributed tracing can be used with monolithic or microservice applications, and it gives you request-scoped information, making it the third pillar of observability (alongside metrics and logs).
The following is an example of a Gantt chart that distributed tracing systems can produce about applications. It uses the Jaeger HotROD demo application to generate traces and stores them in Grafana Cloud's hosted Tempo. This chart shows the processing time for the request, broken down by service and function.
tempo_gantt.png
Reducing index size
Traces have a ton of information in a rich and well-defined data model. Usually, there are two interactions with a tracing backend: filtering for traces using metadata selectors like the service name or duration, and visualizing a trace once it's been filtered.
To enhance search, most open source distributed tracing frameworks index a number of fields from the trace, including the service name, operation name, tags, and duration. This results in a large index and pushes you to use a database like Elasticsearch or Cassandra. However, these can be tough to manage and costly to operate at scale, so my team at Grafana Labs set out to come up with a better solution.At Grafana, our on-call debugging workflows start with drilling down for the problem using a metrics dashboard (we use Cortex, a Cloud Native Computing Foundation incubating project for scaling Prometheus, to store metrics from our application), sifting through the logs for the problematic service (we store our logs in Loki, which is like Prometheus, but for logs), and then viewing traces for a given request. We realized that all the indexing information we need for the filtering step is available in Cortex and Loki. However, we needed a strong integration for trace discoverability through these tools and a complimentary store for key-value lookup by trace ID.
This was the start of the Grafana Tempo project. By focusing on retrieving traces given a trace ID, we designed Tempo to be a minimal-dependency, high-volume, cost-effective distributed tracing backend.
Easy to operate and cost-effective
Tempo uses an object storage backend, which is its only dependency. It can be used in either single binary or microservices mode (check out the examples in the repo on how to get started easily). Using object storage also means you can store a high volume of traces from applications without any sampling. This ensures that you never throw away traces for those one-in-a-million requests that errored out or had higher latencies.
Strong integration with open source tools
Grafana 7.3 includes a Tempo data source, which means you can visualize traces from Tempo in the Grafana UI. Also, Loki 2.0's new query features make trace discovery in Tempo easy. And to integrate with Prometheus, the team is working on adding support for exemplars, which are high-cardinality metadata information you can add to time-series data. The metric storage backends do not index these, but you can retrieve and display them alongside the metric value in the Grafana UI. While exemplars can store various metadata, trace-IDs are stored to integrate strongly with Tempo in this use case.
This example shows using exemplars with a request latency histogram where each exemplar data point links to a trace in Tempo.
tempo_exemplar.png
Consistent metadata
Telemetry data emitted from applications running as containerized applications generally has some metadata associated with it. This can include information like the cluster ID, namespace, pod IP, etc. This is great for providing on-demand information, but it's even better if you can use the information contained in metadata for something productive.
For instance, you can use the Grafana Cloud Agent to ingest traces into Tempo, and the agent leverages the Prometheus Service Discovery mechanism to poll the Kubernetes API for metadata information and adds these as tags to spans emitted by the application. Since this metadata is also indexed in Loki, it makes it easy for you to jump from traces to view logs for a given service by translating metadata into Loki label selectors.
The following is an example of consistent metadata that can be used to view the logs for a given span in a trace in Tempo.
Cloud-native
Grafana Tempo is available as a containerized application, and you can run it on any orchestration engine like Kubernetes, Mesos, etc. The various services can be horizontally scaled depending on the workload on the ingest/query path. You can also use cloud-native object storage, such as Google Cloud Storage, Amazon S3, or Azure Blog Storage with Tempo. For further information, read the architecture section in Tempo's documentation.
Try Tempo
If this sounds like it might be as useful for you as it has been for us, clone the Tempo repo or sign up for Grafana Cloud and give it a try. | https://opensource.com/article/21/2/tempo-distributed-tracing | CC-MAIN-2021-21 | refinedweb | 987 | 50.67 |
snd_pcm_plugin_info()
Get information about a PCM channel's capabilities (plugin-aware)
Synopsis:
#include <sys/asoundlib.h> int snd_pcm_plugin_plugin_info() fills in with information about the PCM channel.
Before calling this function, set the info structure's channel member to specify the direction. This function sets all the other members.
Description:
The snd_pcm_plugin_info() function fills the info structure with data about the PCM channel selected by handle.
This function and the nonplugin version, snd_pcm_channel (errno is set).
Errors:
- -EINVAL
- The state of handle is invalid or an invalid state change occurred. You can call snd_pcm_channel_status() to check if the state change was invalid.
Classification:
QNX Neutrino
Caveats:
This function is not thread safe if handle (snd_pcm_t) is used across multiple threads.
This function is the plugin-aware version of snd_pcm_channel_info(). | https://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.audio.lib_ref/topic/snd_pcm_plugin_info.html | CC-MAIN-2019-35 | refinedweb | 128 | 50.63 |
Timeline
12/11/13:
- 19:27 Ticket #19659 (Foreign keys not generated properly on SQLite) closed by
- duplicate: It is.
- 18:31 Ticket #21596 (Add method to formset to add a form) created by
- It is very complicated to add a form to a formset with the formset api. …
- 14:16 Changeset [9c5f59f4]stable/1.7.x by
- Brought comments in sync with the code in BaseAppCache.
- 12:24 Ticket #21462 (Making assertNumQueries print the list of queries executed on failure) closed by
- fixed: In 5cd6477fd6ea31eeb4d281e8e431b7a5fb8038a1: […]
- 12:24 Changeset [5cd6477]stable/1.7.x by
- Fixed #21462 -- Made assertNumQueries print executed queries on failure.
- 11:49 Changeset [474e7dd6]stable/1.4.x by
- [1.4.x] Fixed #21594 -- Added note about model formsets deleting objects. …
- 11:43 Changeset [a53820b]stable/1.5.x by
- [1.5.x] Fixed #21594 -- Added note about model formsets deleting objects. …
- 11:43 Ticket #21595 (Automatically call as_view() when urlpatterns encounter a CBV.) created by
- Calling as_view() for each CBV is not DRY and adds a fair amount of …
- 11:40 Ticket #21594 (Add note to docs about model formsets deleting objects on save with ...) closed by
- fixed: In de1d5d5df5238136e8cd114e36065857bee1ace4: […]
- 11:39 Changeset [de1d5d5]stable/1.6.x by
- [1.6.x] Fixed #21594 -- Added note about model formsets deleting objects. …
- 10:15 Ticket #21593 (Can't restrict formfield_for_manytomany queryset if the m2m field is ...) closed by
- duplicate: This is a duplicate of #21405. The fix was …
- 09:50 Ticket #21563 (calling hasattr(model_instance, fieldname) raises DoesNotExist when False) closed by
- fixed: In 75924cfa6dca95aa1f02e38802df285271dc7c14: […]
- 09:49 Changeset [75924cfa]stable/1.7.x by
- Fixed #21563 -- Single related object descriptors should work with …
- 09:40 Ticket #21594 (Add note to docs about model formsets deleting objects on save with ...) created by
- Some users may interpret commit=False to mean that no changes are …
- 09:38 Ticket #21593 (Can't restrict formfield_for_manytomany queryset if the m2m field is ...) created by
- This is an example showing the odd behaviour: […] The Users field is …
- 09:34 Changeset [7a2910d]stable/1.6.x by
- [1.6.x] Additions and edits to the 1.6.1 release notes.
- 07:37 Changeset [ebf55d3]stable/1.6.x by
- [1.6.x] Added release note for #21443
- 07:13 Changeset [b953b27]stable/1.6.x by
- [1.6.x] Added release note for #21358
- 07:06 Changeset [3f9d00e]stable/1.6.x by
- [1.6.x] Added release note for #21473
- 06:51 Ticket #21473 (Cookie based language detection no longer practical) closed by
- fixed: In c558a43fd6bbcea9972b66965f7e8619bc247df1: […]
- 06:49 Changeset [c558a43]stable/1.6.x by
- [1.6.x] Fixed #21473 -- Limited language preservation to logout Current …
- 06:32 Changeset [d32637d8]stable/1.6.x by
- [1.6.x] Fixed #21510 -- Readded search reset link in changelist search bar …
- 06:31 Ticket #21510 (Admin change list search field is missing the "show all" link) closed by
- fixed: In c7c647419cb857fe53cf1368c10223c6e042c216: […]
- 06:30 Changeset [c7c6474]stable/1.7.x by
- Fixed #21510 -- Readded search reset link in changelist search bar Thanks …
- 06:19 Changeset [5db028a]stable/1.7.x by
- Fix altering of SERIAL columns and InnoDB being picky about FK changes
- 06:10 CMSAppsComparison edited by
- (diff)
- 05:23 Changeset [cee4fe73]stable/1.7.x by
- Better default name for migrations we can't give nice names to
- 05:16 Changeset [248fdb1]stable/1.7.x by
- Change FKs when what they point to changes
- 05:12 Changeset [f3582a0]stable/1.7.x by
- Fix sqlmigrate's output for parameters
12/10/13:
- 16:25 Ticket #21592 (formset.ordered_forms should try to return ordered forms if is_valid() is ...) created by
- I am not sure if this should be a bug report or a feature request. I …
- 14:27 Ticket #21591 (get_messages is not covered in the documentation) created by
- Currently theres a get_messages method in the django messages framework …
- 12:33 Ticket #21589 (syncdb fails aparently in the createsuperuser stage) closed by
- invalid: This seems like it's probably an issue between the version of Django you …
- 12:32 Changeset [072e25e]stable/1.7.x by
- Moved imports to the top of the defaultfilters module.
- 12:14 Changeset [64483b4]stable/1.6.x by
- [1.6.x] Updated translations from Transifex
- 12:12 Changeset [a281484]stable/1.7.x by
- Fixed E124 pep8 warnings.
- 08:56 Ticket #21590 (Don't require forms clean_* methods to return a value) created by
- Adding custom validation to forms isn't as DRY as it colud be: […] …
- 06:45 Version1.7Roadmap edited by
- (diff)
- 06:44 Version1.7Roadmap edited by
- Lagging underscore (diff)
- 06:43 Version1.7Roadmap edited by
- Added the links to the discussions (diff)
- 04:55 Ticket #21589 (syncdb fails aparently in the createsuperuser stage) created by
- This is pretty long a complicated, sorry Tool Versions: OS: Linux SuSE11 …
- 04:25 Changeset [d6d700f]stable/1.6.x by
- [1.6.x] Fixed #21560 -- Added missing 'slug' field in example code. …
- 02:50 Ticket #21588 ("Modifying upload handlers on the fly" documentation doesn't replicate ...) created by
- In the documentation for …
- 01:14 Ticket #21560 (missing 'slug' field in example code) closed by
- fixed: In 744aac6dace325752e3b1c7c8af64a7bc655186f: […]
- 01:14 Changeset [0873200]stable/1.7.x by
- Merge pull request #2058 from c-schmitt/fixes_21560 Fixed #21560 -- …
12/09/13:
- 13:54 Changeset [744aac6d]stable/1.7.x by
- Fixed #21560 -- missing 'slug' field in example code I updated the …
- 12:19 Ticket #21587 (Make generic RedirectView default to permanent=False) created by
- Having been bitten by this, and seeing some other reports of it, it seems …
- 10:36 Ticket #21586 (SQL Anywhere driver project has moved) closed by
- duplicate: While submitting this, I kept getting capcha errors so I submitted a …
- 10:34 Ticket #21586 (SQL Anywhere driver project has moved) created by
- Hi there... I work for SAP and I'm the owner of the Sybase SQL Anywhere …
- 10:34 Ticket #21585 (SQL Anywhere driver project has moved) created by
- Hi there... I work for SAP and I'm the owner of the Sybase SQL Anywhere …
- 10:31 Ticket #21584 (prefetch_related child queryset does not update on create) created by
- When a child foreign key relationship has been prefetched, calling the …
- 04:43 Ticket #21583 (Offline HTML docs have wrong version (1.5.4, should be 1.6)) closed by
- duplicate: Duplicate of #21400
- 04:00 Ticket #21583 (Offline HTML docs have wrong version (1.5.4, should be 1.6)) created by
- Docs downloaded from …
- 01:57 Ticket #21582 (URL namespaces and included URLconfs: the example might be confusing) created by
- In the example, …
12/08/13:
- 23:52 Ticket #21581 (collecstatic --clear is too lax about warning users) created by
- STATIC_ROOT is not set in the settings.py that ships with the default …
- 13:15 Ticket #21580 (Unclear why shortcut function "render" can not return TemplateResponse ...) created by
- The reason is - apparently - that this is pointless as TemplateResponse …
- 10:30 Ticket #21579 (i18n_patterns redirect not working with script prefix (sub path)) created by
- Assume the following url pattern: […] "en" is the default language. …
- 10:18 Changeset [c047dda]stable/1.7.x by
- Removed an erroneous leading slash introduced by a626bdf648a.
- 09:48 Changeset [ef9832f1]stable/1.6.x by
- [1.6.x] Updated a bunch of hyperlinks in documentation Backport of …
- 09:40 Changeset [626bdf6]stable/1.7.x by
- Updated a bunch of hyperlinks in documentation
- 06:14 Changeset [f876552f]stable/1.7.x by
- (Re-)added GeoDjango instructions for building pysqlite2 correctly. This …
- 05:23 Changeset [3f900a1e]stable/1.7.x by
- Merge pull request #2052 from loic/setup.cfg Made flake8 ignore the .git …
- 05:16 Changeset [27dc790]stable/1.7.x by
- Made flake8 ignore the .git directory.
- 05:06 Changeset [c78bd9ef]stable/1.7.x by
- Merge pull request #2048 from loic/ValidationError.message_dict Trigger …
- 04:22 Ticket #20968 (Error creating Indexes on Syncdb) closed by
- needsinfo: Tried to reproduce this to not avail (tested with the GeoDjango tutorial …
- 01:00 Ticket #21577 (to_python() of django.db.models.fields.__init__.DateField not detecting ...) closed by
- duplicate: The symptoms reported here are identical to #21523.
- 00:28 Ticket #21578 (manage.py dumpdata --format=yaml produces naive datetimes in fixtures) created by
- The following runtime warnings are produced when --format=yaml is used …
Note: See TracTimeline for information about the timeline view. | https://code.djangoproject.com/timeline?from=2013-12-11T07%3A32%3A46-08%3A00&precision=second | CC-MAIN-2014-15 | refinedweb | 1,390 | 66.74 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.