By request from a couple of my readers, I am back to show more speech recognition using C#. In my first speech recognition article, "Simple Speech Recognition Using C#", I introduced you to the Speech Recognition Engine provided by the .NET framework. In that article, I showed you how to set up a new SRE to accept any voice input and display it in a rich text box. The very next day, I took that application one step further in my article "Simple Speech Recognition Using C# - Part 2" by introducing you to grammars. Grammars are basically a list of input options you want your application to listen for. Adding grammars will cause your application to listen for only those options and nothing else. The grammar used in that article was built using the Choices object and by providing that object with a list of hardcoded options. Today, I want to show you how to replace the Choices object with a grammar XML file. Let's begin.
The first thing you are going to need for this is, of course, your grammar file. The grammar file is simply an XML file that contains a list of rules, where each rule has a list of items that the SRE will listen for. Each rule must contain an id, which will be used to tell the SRE which list of options to listen for. In the example below, you will see that I have defined two rules. You will also notice that I have added a scope attribute with the value "public" to each rule. If you do not specify the "root" attribute in the opening "<grammar>" tag of your XML file, you will need to add a public scope to your rules; otherwise, your SRE will not be able to read the rules.
<grammar version="1.0" xml:lang="en-US" xmlns="http://www.w3.org/2001/06/grammar">
  <rule id="command1" scope="public">
    <one-of>
      <item>start</item>
      <item>stop</item>
      <item>continue</item>
    </one-of>
  </rule>
  <rule id="command2" scope="public">
    <one-of>
      <item>test</item>
      <item>help</item>
      <item>hello</item>
    </one-of>
  </rule>
</grammar>
For this example, we are going to use the first rule from our grammar XML file. In order to do that, we'll need to create a new SpeechRecognitionEngine, tell it which audio device to use, and give it a callback handler where the recognized text will be processed and handled. To keep it simple, we'll just copy the construction of our SRE from the other articles.
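For reference, the carried-over construction looks roughly like this. This is a sketch based on the earlier articles, not a verbatim copy; the output control name (txtOutput) is a placeholder of my own, not from the original posts:

```csharp
using System;
using System.Speech.Recognition;

// Sketch of the SRE construction from the earlier articles.
// txtOutput is a placeholder name for the form's output control.
SpeechRecognitionEngine recognitionEngine = new SpeechRecognitionEngine();

// Listen on the system's default recording device.
recognitionEngine.SetInputToDefaultAudioDevice();

// Fired whenever the engine recognizes a phrase from a loaded grammar.
recognitionEngine.SpeechRecognized += (sender, e) =>
{
    txtOutput.AppendText(e.Result.Text + Environment.NewLine);
};
```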
Now that we have our SRE constructed, it's time to build our Grammar object. In the last article, you will recall that we built a new GrammarBuilder object and constructed our Grammar object using that grammar builder. This time, we're going to replace the grammar builder with a string indicating the name of the grammar XML file we'll be using. After that, we'll go ahead and load our Grammar object into our SRE and tell it to begin listening by calling the RecognizeAsync method like we did in the other articles.
Grammar g = new Grammar("grammar.xml", "command1");
recognitionEngine.LoadGrammar(g);
recognitionEngine.RecognizeAsync(RecognizeMode.Multiple);
In the snippet above, you'll see that I passed 2 arguments to my Grammar object's constructor. The first argument is the name of the XML file that contains my grammar rules. The second argument is the id of the rule I want to use from that XML file. If you want to omit the second argument, you will need to include the "root" attribute on the "<grammar>" element of your XML file. If you do that, the first element of your XML file would look like:
<grammar …. root="command1">
That’s it! You are now ready to start using your grammar XML file in your SRE application. Here is a screenshot of the code below in action.
And, here is the complete code I used to make this happen.
using System;
using System.Text;
using System.Windows.Forms;
using System.Speech.Recognition;

namespace SpeechRecognition
{
    public partial class MainForm : Form
    {
        public MainForm()
        {
            InitializeComponent();

            // Construct the SRE as in the earlier articles. The output
            // control follows the rich text box used in those articles.
            SpeechRecognitionEngine recognitionEngine = new SpeechRecognitionEngine();
            recognitionEngine.SetInputToDefaultAudioDevice();
            recognitionEngine.SpeechRecognized += (sender, e) =>
            {
                richTextBox1.AppendText(e.Result.Text + Environment.NewLine);
            };

            Grammar g = new Grammar("grammar.xml", "command1");
            recognitionEngine.LoadGrammar(g);
            recognitionEngine.RecognizeAsync(RecognizeMode.Multiple);
        }
    }
}
Thanks for this wonderful tutorial. I have one more question, how could we use this namespace to make Arabic recognition??
Hmmm? That's a tough one. I've never tried doing speech recognition with languages other than English. The only way I can think of to do something like that would be to create multiple instances of your SRE and swap between them depending on the language you are listening for. In your XML file, you would change the IDs of your rules to something like "english" and "arabic". Then, you would list your words / items for each language. In your code, you will need to add a reference to System.Globalization like this:
using System.Globalization;
Next, you will need to create a new CultureInfo object (System.Globalization.CultureInfo) for each culture like this:
CultureInfo englishCulture = new CultureInfo("en-US");
CultureInfo arabicCulture = new CultureInfo("ar-EG");
Then, you would create a new SRE for each culture info object and an SRE that will be the main SRE you’ll be using like so:
SpeechRecognitionEngine mainSRE = new SpeechRecognitionEngine();
SpeechRecognitionEngine englishSRE = new SpeechRecognitionEngine(englishCulture);
SpeechRecognitionEngine arabicSRE = new SpeechRecognitionEngine(arabicCulture);
After that, you could add a combobox that lists your different languages. When a user selects a different language, swap out your SRE with the one that corresponds to the culture / language they selected.
if(cmbLanguage.Text.Equals("Arabic"))
mainSRE = arabicSRE;
else
mainSRE = englishSRE;
However, there are 2 things to keep in mind here. 1) This is all hypothetical. Although I know the code above compiles, I have no way of testing whether this theory actually works in practice since I only speak English. 2) The method used in this article only listens for and recognizes the words you have listed in your XML grammar file. If phrases other than the ones listed in your XML file are spoken, the app will simply ignore them. So, if you need something that can listen for specific commands like the ones listed in your XML file and you also need it to listen for freely spoken words, you'll need more of a hybrid approach, as this example alone will not work.
The code of the other language does not work. Could you make a project using another language like French or something ???
Again a great article LuCuS!! I also applied it and it works great. I have a question to ask you: why do we use XML grammars for speech recognition? Does this increase the efficiency of recognizing words?
XML grammars are mostly for building a system that looks for specific commands, texts, or phrases. A few of my other readers are working on an application that can accept several different spoken languages and convert the input into output of a different language. Another reader is using grammars to allow her application to be controlled using different languages as well. To do that, she'll be including a rule for each language that her app will support. Then, the user will be able to pick their language from a combobox. Other readers use grammars to group together specific commands. For example, one list of commands might control the application itself whereas another rule (list of commands) might control the OS.

One of the best things about using grammars is that commands can be added or removed during runtime with ease and persisted to the filesystem, as opposed to having the commands hardcoded in your app, where they will be reset every time you restart the app since it's all stored in memory. For example, I have another reader who is working on a voice controlled robot. By using grammars, he can add new commands for his robot to recognize without the need to rebuild his application. Instead, he just adds the new commands to the XML file. At some point, he said he plans on speaking new commands to the robot and it will add these new commands to the XML for him. When that happens, the robot will not know what to do when it hears those commands. With the help of a neural network or something similar, he can teach his robot how to learn on its own what to do when it encounters these commands again in the future.
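The runtime reload idea above can be sketched like this (assuming an engine named recognitionEngine and the rule name from this article):

```csharp
// After new commands have been appended to grammar.xml on disk,
// drop the old grammars and load the updated rule. No rebuild needed.
recognitionEngine.UnloadAllGrammars();
Grammar updated = new Grammar("grammar.xml", "command1");
recognitionEngine.LoadGrammar(updated);
```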
Hi,
I’m curently researching options for a speech recognition engine to support my businesses Medical transcription workflow. all roads seem to lead to Nuance and this is not a company I wish to get involved with or to relay on. Having seen you various bloggs on speech recognition I wonder if you could me in the direction of some alternatives?
nige
Hi LuCuS,
How can a spoken word be compared with an audio file? Consider a case: I say "This is an example" into a microphone and save it as a .mp3 file. Then I save it in a database. Then, when my application runs, if I say "This is an example" it can detect that by comparing with that audio file from the database. Though I haven't tried to make that application, I'd like to hear your advice as an expert: is my concept right, is it possible to do that, and how can I compare those 2 values?
There are several ways you could approach this type of application. One way would be to create a neural network & train it using different voices saying the same phrase. Another way would be to use a markov model. Another solution would be to convert the audio into wave frequencies & measure the peaks, valleys, & distance between. This approach wouldn’t be very accurate, but could possibly yield some decent results. The markov model would be more efficient than a raw neural network & would probably be the method I would go with because it would return better results out of all the solutions. I’ve used markov chains in all kinds of applications such as data mining, computer vision, etc… They’re reasonably easy to understand & build. If you do decide to play with neural nets, you should definitely look into support vector machines. They are 100x faster than pure neural nets.
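As a toy illustration of the peaks-and-valleys idea only (not the Markov or neural approaches), one could compare the peak envelopes of two clips. The method name and windowing here are my own, it compares amplitude peaks rather than true frequency content, and it assumes both clips were already decoded to 16-bit PCM sample arrays at the same sample rate:

```csharp
// A deliberately naive sketch of the "measure the peaks and valleys"
// idea from the reply above. This is a toy similarity measure,
// not real voice matching.
static double PeakSimilarity(short[] a, short[] b, int window = 1024)
{
    int frames = Math.Min(a.Length, b.Length) / window;
    if (frames == 0) return 0.0;

    double totalDiff = 0.0;
    for (int f = 0; f < frames; f++)
    {
        int peakA = 0, peakB = 0;
        for (int i = f * window; i < (f + 1) * window; i++)
        {
            peakA = Math.Max(peakA, Math.Abs((int)a[i]));
            peakB = Math.Max(peakB, Math.Abs((int)b[i]));
        }
        // Normalize the per-frame peak difference to the 0..1 range.
        totalDiff += Math.Abs(peakA - peakB) / (double)short.MaxValue;
    }

    // 1.0 = identical peak envelopes, 0.0 = maximally different.
    return 1.0 - (totalDiff / frames);
}
```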
I know you posted this a few years ago, but I didn't find a solution to my problem... So I hope you're still here.
Do you know if it’s possible to use a specific microphone and not the default microphone?
Thanks a lot
You can use any microphone you want. I’ve used microphones built into my laptops, USB microphones, and even those 3.5mm jack microphones. Basically you just need a way to get audio into the app.
Thanks for your answer.
I found how to use a specific microphone. I select the microphone I want to use and define it as the default microphone. That way, I can use the function SpeechRecognitionEngine.SetInputToDefaultAudioDevice.
The other way is to use a stream with the function SetInputToAudioStream.
Like in this link: (it's explained under "Starting Recording" in the note)
but I did not manage to run it.
So, if you know how to use a SpeechRecognitionEngine without SetInputToDefaultAudioDevice and in real time, that would be really great.
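For completeness, here's a rough sketch of the SetInputToAudioStream route mentioned above. The GetMicrophoneStream helper is hypothetical; in practice you would fill the stream from a capture library (such as NAudio) bound to the specific microphone you selected:

```csharp
using System.IO;
using System.Speech.AudioFormat;
using System.Speech.Recognition;

// Feed the engine from a stream instead of the default device.
// GetMicrophoneStream() is a hypothetical placeholder for your capture code.
Stream micStream = GetMicrophoneStream();

// The format must match what the capture code actually produces.
var format = new SpeechAudioFormatInfo(
    16000, AudioBitsPerSample.Sixteen, AudioChannel.Mono);

SpeechRecognitionEngine engine = new SpeechRecognitionEngine();
engine.SetInputToAudioStream(micStream, format);
engine.RecognizeAsync(RecognizeMode.Multiple);
```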
I’m very grateful to you. | http://www.prodigyproductionsllc.com/articles/programming/speech-recognition-with-c-and-xml-grammars/ | CC-MAIN-2017-04 | refinedweb | 1,902 | 62.78 |
Hi Chris,
Thanks for following up on this. How does something like the following
sound? I'm going to phrase this in terms of existing classes. I suspect
that we'd want to use some simpler classes if we implemented this--for
instance, I'm not happy with ScanQualifier: as your example points out,
ScanQualifier pulls in the Derby type system, which is a lot of
machinery you don't need to worry about. But here's a sketch:
1) Derby would expose some interface which your ResultSet would implement:
public interface DerbyScan
{
    /**
     * Setup that is called before Derby calls next() to get the first row
     *
     * @param referencedColumns Columns which Derby may retrieve. These
     *        are column positions as declared at CREATE FUNCTION time.
     * @param restriction Array of simple comparisons to constant
     *        values. Each comparison applies to a single column.
     */
    public void initScan( FormatableBitSet referencedColumns,
                          ScanQualifier[] restriction ) throws SQLException;
}
2) You would code something like this:
public class MyResultSet implements ResultSet, DerbyScan { ... }
3) And your CREATE FUNCTION statement would bind a function to a method
like the following, which you would code also:
public static MyResultSet legacyRealtyData() throws SQLException { ... }
Is this headed in the right direction?
Thanks,
-Rick
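The remember-and-batch caching idea discussed in the quoted exchange below could be sketched like this. The class, the ExternalSource interface, and its fetch() method are invented for illustration; they are not part of Derby:

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

// Sketch of the column-caching idea: remember which columns Derby asks
// for, then fetch them from the external source in one batch per row.
public class CachingRow {
    private final BitSet requested = new BitSet();   // columns seen so far
    private Map<Integer, Object> rowCache = new HashMap<>();
    private final ExternalSource source;

    public CachingRow(ExternalSource source) { this.source = source; }

    // Called when Derby moves to the next row: one round trip for all
    // columns remembered so far.
    public void nextRow() {
        rowCache = source.fetch(requested);
    }

    // Called from the ResultSet getXXX() methods.
    public Object getColumn(int position) {
        requested.set(position);                     // remember the request
        if (!rowCache.containsKey(position)) {
            rowCache = source.fetch(requested);      // cache miss: refetch
        }
        return rowCache.get(position);
    }

    // Hypothetical stand-in for the external data source.
    public interface ExternalSource {
        Map<Integer, Object> fetch(BitSet columns);
    }
}
```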
Chris Goodacre wrote:
> Rick,
> Sorry it's taken me so long to reply on this. I just today got back to this in earnest.
> I'll try to walk through an example, imagining that I have an array of ScanQualifiers that
> gets passed to my table function's method, just to make sure I understand this.
>
> public static ResultSet read(ScanQualifier[] qualifiers) {
>     // ... impl
> }
>
> So, if I were to go back to my original example:
>
> select house_number, street, city from table (legacy_realty_data()) where price < 500000
>
> a) I think that an array with only a single ScanQualifier object would be passed to my
> read(...) method.
> b) I can see where the operator for the ScanQualifier object would be some negative number
> c) The column id would reference the column # (basically) of the price column from the
> table definition of my CREATE FUNCTION statement.
> d) The result of getOrderable() on the scanqualifier object would return me a DataValueDescriptor.
> e) I could interrogate the DataValueDescriptor to get the value (500000) in a type/manner
> that I could use to pass on to my legacy system
>
> I could use this information to restrict the number of rows that come back. That's good.
>
> It would still be nice if I could restrict the number of columns I'm requesting up front.
> It's expensive to go back and forth to this system, so I would rather make one read (all
> relevant rows, all relevant columns) and take the chance that the user only uses some of
> the rows from the result set.
>
> Would it be possible to use a ScanQualifier (or something like it) to inform the table
> procedure methods which specific (non-calculated) columns are in the query?
>
> -chris
>
>
>
> ----- Original Message ----
> From: Rick Hillegas <Richard.Hillegas@Sun.COM>
> To: Derby Discussion <derby-user@db.apache.org>
> Sent: Monday, July 20, 2009 3:08:31 PM
> Subject: Re: Question about TableFunctions in Derby
>
> Hi Chris,
>
> Reducing the number of column probes may be possible without any changes to Derby: When
> your ResultSet is asked to get a column, it can remember that request. On later rows, your
> ResultSet can ask the external data source for all of the column positions it has remembered
> so far. In the query you gave, this would play out like this:
>
> 1) On the first row, your ResultSet would make 3 calls to the external data source, one
> for each column. But the ResultSet would remember which columns were requested.
>
> 2) For each of the remaining N-1 rows, your ResultSet would call the external data source
> only once, asking the external data source for all three columns in a single batch. That
> batch could then be cached and the individual columns could be returned to Derby when
> Derby called the getXXX() methods.
>
> Positioning and restricting the rows themselves (the WHERE clause fragments) is trickier.
> It probably requires help from Derby, as you suggest. We could design some interface by
> which Derby would pass the ResultSet a list of org.apache.derby.iapi.sql.execute.ScanQualifier.
> Your ResultSet could then forward those directives to the external data source.
>
> What do you think?
> -Rick
>
>
> Chris Goodacre wrote:
>
>> Rick, thanks for your suggestions. Perhaps I am being obtuse, but when you."
>>
>> Does that mean that I make a separate request to the legacy system each time getXXX()
>> is called - i.e. lazily initialize each column in the result set? I think this has to be
>> the only way to do it, since I don't know which columns will be requested at the time the
>> read() method of my tablefunction is invoked.
>> Making (in this case) 3 calls to the legacy system to get 1 column for N rows is
>> certainly better than making 1 call to the legacy system to get 1000 columns for N rows
>> and then throwing away 997*N values/cells, but still not quite as nice as I'd like.
>> If I were making a wish - I'd wish for some sort of parsed representation of the
>> query get passed to the read method (or to some other method - similar to, or even as part
>> of, the query optimization interface). Ideally, this structured representation would have
>> the list of columns belonging to the table function from the select list, and the where
>> clause components specific to the table function only (i.e. mytablefunction.price > 50000
>> but NOT mytablefunction.price < myrealtable.value).
>>
>> In the absence of this, when the VTIResultSet class passes the ActivationHolder to
>> the derby class which invokes the read() method reflectively, why can't that class pass
>> the activation context (it knows it is dealing with a derby table function, it knows the
>> class name, it has access to the result set descriptor, if not the where clause) pass this
>> information along to the user's table function class? I would happily implement an
>> interface in this class (not sure why read() has to be static) to get this information
>> prior to resultset instantiation.
>>
>> -chris
>>
>>
>>
>> ----- Original Message ----
>> From: Rick Hillegas <Richard.Hillegas@Sun.COM>
>> To: Derby Discussion <derby-user@db.apache.org>
>> Sent: Monday, July 20, 2009 10:55:33 AM
>> Subject: Re: Question about TableFunctions in Derby
>>
>> Hi Chris,
>>
>> Some comments inline...
>>
>> Chris Goodacre wrote:
>>
>>
>>> I've read the Derby developer's guide and Rick Hillegas's informative white paper
>>> () on Table Functions, but am still struggling with the following issue:
>>>
>>> I am trying to create an RDB abstraction for a large CICS/VSAM-based legacy system
>>> and blend it with our newer, RDB-based tier. This seems like a good application of
>>> TableFunctions. The VSAM data is made available to me via an IP-based proprietary
>>> messaging interface. There are lots of different files here, but due to some historical
>>> forces, most of the data I'm interested in resides in 4 VSAM files.
>>>
>>> Unfortunately, each of those VSAM files has over a 1000 fields in it.
>>>
>>> Now eventually, it might be possible to fully model a single VSAM file into (for
>>> the sake of argument) 50 tables; each table/row representing a small slice of a single
>>> VSAM record.
>>>
>>> In the meantime, for both this proof-of-concept and as a migration path to our
>>> existing clients, I'd like to represent each VSAM file as a table (subject to the 1024
>>> column SQL limitation per table). This will be a highly-denormalized and decidedly
>>> non-relational view of the data, but it will be easy to demonstrate and immediately
>>> recognizable to our customers.
>>>
>>> However, I can't seem to get around the problem of data granularity. For example,
>>> if my customer executes:
>>>
>>> select house_number, street, city from table (legacy_realty_data()) where price < 500000.
>>
>> The WHERE clause is a little trickier. You are right, Derby will read all rows from
>> the ResultSet and throw away the rows which don't satisfy the WHERE clause. What you
>> want to do is push the qualification through the table function to the external data
>> source. I don't see any way to do this other than adding some more arguments to your
>> table function. For instance, if you could push the qualification through to the
>> external data source, then you could get efficient behavior from something like the
>> following:
>>
>> select house_number, street, city
>> from table( legacy_realty_data( 500000 ) ) s;
>>
>> Hope this helps,
>> -Rick
>>
>>
>>
>>> I don't appear to have any visibility to the actual query inside my
>>> legacy_realty_data TableFunction, so I have to go get all 1000 fields for however many
>>> listings are present where price < 500000 even though only three columns will be
>>> requested. Am I missing something? Aside from having the user repeat the columns as
>>> parameters to the table function (which looks awkward to say the least), I can't see a
>>> way around this based on my limited knowledge of Derby.
>>>
>>> Is there a way to only retrieve the columns that the user is querying for?
>>>
>>> Looking forward to your help/advice.
>>>
>>> -chris
>>>
>>>
>>> | http://mail-archives.us.apache.org/mod_mbox/db-derby-user/200908.mbox/%3C4A8EB96A.6050203@sun.com%3E | CC-MAIN-2019-35 | refinedweb | 1,517 | 58.32 |
apache / hadoop / refs/tags/release-0.21.0-rc2 / CHANGES.txt
Hadoop Change Log
Release 0.21.0 - 2010-08-13
INCOMPATIBLE CHANGES
HADOOP-4895. Remove deprecated methods DFSClient.getHints(..) and
DFSClient.isDirectory(..). (szetszwo)
HADOOP-4941. Remove deprecated FileSystem methods: getBlockSize(Path f),
getLength(Path f) and getReplication(Path src). (szetszwo)
HADOOP-4648. Remove obsolete, deprecated InMemoryFileSystem and
ChecksumDistributedFileSystem. (cdouglas via szetszwo)
HADOOP-4940. Remove a deprecated method FileSystem.delete(Path f). (Enis
Soztutar via szetszwo)
HADOOP-4010. Change semantics for LineRecordReader to read an additional
line per split- rather than moving back one character in the stream- to
work with splittable compression codecs. (Abdul Qadeer via cdouglas)
HADOOP-5094. Show hostname and separate live/dead datanodes in DFSAdmin
report. (Jakob Homan via szetszwo)
HADOOP-4942. Remove deprecated FileSystem methods getName() and
getNamed(String name, Configuration conf). (Jakob Homan via szetszwo)
HADOOP-5486. Removes the CLASSPATH string from the command line and instead
exports it in the environment. (Amareshwari Sriramadasu via ddas)
HADOOP-2827. Remove deprecated NetUtils::getServerAddress. (cdouglas)
HADOOP-5681. Change examples RandomWriter and RandomTextWriter to
use new mapreduce API. (Amareshwari Sriramadasu via sharad)
HADOOP-5680. Change org.apache.hadoop.examples.SleepJob to use new
mapreduce api. (Amareshwari Sriramadasu via sharad)
HADOOP-5699. Change org.apache.hadoop.examples.PiEstimator to use
new mapreduce api. (Amareshwari Sriramadasu via sharad)
HADOOP-5720. Introduces new task types - JOB_SETUP, JOB_CLEANUP
and TASK_CLEANUP. Removes the isMap methods from TaskID/TaskAttemptID
classes. (ddas)
HADOOP-5668. Change TotalOrderPartitioner to use new API. (Amareshwari
Sriramadasu via cdouglas)
HADOOP-5738. Split "waiting_tasks" JobTracker metric into waiting maps and
waiting reduces. (Sreekanth Ramakrishnan via cdouglas)
HADOOP-5679. Resolve findbugs warnings in core/streaming/pipes/examples.
(Jothi Padmanabhan via sharad)
HADOOP-4359. Support for data access authorization checking on Datanodes.
(Kan Zhang via rangadi)
HADOOP-5690. Change org.apache.hadoop.examples.DBCountPageView to use
new mapreduce api. (Amareshwari Sriramadasu via sharad)
HADOOP-5694. Change org.apache.hadoop.examples.dancing to use new
mapreduce api. (Amareshwari Sriramadasu via sharad)
HADOOP-5696. Change org.apache.hadoop.examples.Sort to use new
mapreduce api. (Amareshwari Sriramadasu via sharad)
HADOOP-5698. Change org.apache.hadoop.examples.MultiFileWordCount to
use new mapreduce api. (Amareshwari Sriramadasu via sharad)
HADOOP-5913. Provide ability to an administrator to stop and start
job queues. (Rahul Kumar Singh and Hemanth Yamijala via yhemanth)
MAPREDUCE-711. Removed Distributed Cache from Common, to move it
under Map/Reduce. (Vinod Kumar Vavilapalli via yhemanth)
HADOOP-6201. Change FileSystem::listStatus contract to throw
FileNotFoundException if the directory does not exist, rather than letting
this be implementation-specific. (Jakob Homan via cdouglas)
HADOOP-6230. Moved process tree and memory calculator related classes
from Common to Map/Reduce. (Vinod Kumar Vavilapalli via yhemanth)
HADOOP-6203. FsShell rm/rmr error message indicates exceeding Trash quota
and suggests using -skipTrash, when moving to trash fails.
(Boris Shkolnik via suresh)
HADOOP-6303. Eclipse .classpath template has outdated jar files and is
missing some new ones. (cos)
HADOOP-6396. Fix uninformative exception message when unable to parse
umask. (jghoman)
HADOOP-6299. Reimplement the UserGroupInformation to use the OS
specific and Kerberos JAAS login. (omalley)
HADOOP-6686. Remove redundant exception class name from the exception
message for the exceptions thrown at RPC client. (suresh)
HADOOP-6701. Fix incorrect exit codes returned from chmod, chown and chgrp
commands from FsShell. (Ravi Phulari via suresh)
NEW FEATURES
HADOOP-6332. Large-scale Automated Test Framework. (sharad, Sreekanth
Ramakrishnan, et al. via cos)
HADOOP-4268. Change fsck to use ClientProtocol methods so that the
corresponding permission requirement for running the ClientProtocol
methods will be enforced. (szetszwo)
HADOOP-3953. Implement sticky bit for directories in HDFS. (Jakob Homan
via szetszwo)
HADOOP-4368. Implement df in FsShell to show the status of a FileSystem.
(Craig Macdonald via szetszwo)
HADOOP-3741. Add a web ui to the SecondaryNameNode for showing its status.
(szetszwo)
HADOOP-5018. Add pipelined writers to Chukwa. (Ari Rabkin via cdouglas)
HADOOP-5052. Add an example computing exact digits of pi using the
Bailey-Borwein-Plouffe algorithm. (Tsz Wo (Nicholas), SZE via cdouglas)
HADOOP-4927. Adds a generic wrapper around outputformat to allow creation of
output on demand (Jothi Padmanabhan via ddas)
HADOOP-5144. Add a new DFSAdmin command for changing the setting of restore
failed storage replicas in namenode. (Boris Shkolnik via szetszwo)
HADOOP-5258. Add a new DFSAdmin command to print a tree of the rack and
datanode topology as seen by the namenode. (Jakob Homan via szetszwo)
HADOOP-4756. A command line tool to access JMX properties on NameNode
and DataNode. (Boris Shkolnik via rangadi)
HADOOP-4539. Introduce backup node and checkpoint node. (shv)
HADOOP-5363. Add support for proxying connections to multiple clusters with
different versions to hdfsproxy. (Zhiyong Zhang via cdouglas)
HADOOP-5528. Add a configurable hash partitioner operating on ranges of
BinaryComparable keys. (Klaas Bosteels via shv)
HADOOP-5257. HDFS servers may start and stop external components through
a plugin interface. (Carlos Valiente via dhruba)
HADOOP-5450. Add application-specific data types to streaming's typed bytes
interface. (Klaas Bosteels via omalley)
HADOOP-5518. Add contrib/mrunit, a MapReduce unit test framework.
(Aaron Kimball via cutting)
HADOOP-5469. Add /metrics servlet to daemons, providing metrics
over HTTP as either text or JSON. (Philip Zeyliger via cutting)
HADOOP-5467. Introduce offline fsimage image viewer. (Jakob Homan via shv)
HADOOP-5752. Add a new hdfs image processor, Delimited, to oiv. (Jakob
Homan via szetszwo)
HADOOP-5266. Adds the capability to do mark/reset of the reduce values
iterator in the Context object API. (Jothi Padmanabhan via ddas)
HADOOP-5745. Allow setting the default value of maxRunningJobs for all
pools. (dhruba via matei)
HADOOP-5643. Adds a way to decommission TaskTrackers while the JobTracker
is running. (Amar Kamat via ddas)
HADOOP-4829. Allow FileSystem shutdown hook to be disabled.
(Todd Lipcon via tomwhite)
HADOOP-5815. Sqoop: A database import tool for Hadoop.
(Aaron Kimball via tomwhite)
HADOOP-4861. Add disk usage with human-readable size (-duh).
(Todd Lipcon via tomwhite)
HADOOP-5844. Use mysqldump when connecting to local mysql instance in Sqoop.
(Aaron Kimball via tomwhite)
HADOOP-5976. Add a new command, classpath, to the hadoop script. (Owen
O'Malley and Gary Murry via szetszwo)
HADOOP-6120. Add support for Avro specific and reflect data.
(sharad via cutting)
HADOOP-6226. Moves BoundedByteArrayOutputStream from the tfile package to
the io package and makes it available to other users (MAPREDUCE-318).
(Jothi Padmanabhan via ddas)
HADOOP-6105. Adds support for automatically handling deprecation of
configuration keys. (V.V.Chaitanya Krishna via yhemanth)
HADOOP-6235. Adds new method to FileSystem for clients to get server
defaults. (Kan Zhang via suresh)
HADOOP-6234. Add new option dfs.umaskmode to set umask in configuration
to use octal or symbolic instead of decimal. (Jakob Homan via suresh)
HADOOP-5073. Add annotation mechanism for interface classification.
(Jakob Homan via suresh)
HADOOP-4012. Provide splitting support for bzip2 compressed files. (Abdul
Qadeer via cdouglas)
HADOOP-6246. Add backward compatibility support to use deprecated decimal
umask from old configuration. (Jakob Homan via suresh)
HADOOP-4952. Add new improved file system interface FileContext for the
application writer (Sanjay Radia via suresh)
HADOOP-6170. Add facility to tunnel Avro RPCs through Hadoop RPCs.
This permits one to take advantage of both Avro's RPC versioning
features and Hadoop's proven RPC scalability. (cutting)
HADOOP-6267. Permit building contrib modules located in external
source trees. (Todd Lipcon via cutting)
HADOOP-6240. Add new FileContext rename operation that posix compliant
that allows overwriting existing destination. (suresh)
HADOOP-6204. Implementing aspects development and fault injection
framework for Hadoop (cos)
HADOOP-6313. Implement Syncable interface in FSDataOutputStream to expose
flush APIs to application users. (Hairong Kuang via suresh)
HADOOP-6284. Add a new parameter, HADOOP_JAVA_PLATFORM_OPTS, to
hadoop-config.sh so that it allows setting java command options for
JAVA_PLATFORM. (Koji Noguchi via szetszwo)
HADOOP-6337. Updates FilterInitializer class to be more visible,
and the init of the class is made to take a Configuration argument.
(Jakob Homan via ddas)
HADOOP-6223. Add new file system interface AbstractFileSystem with
implementation of some file systems that delegate to old FileSystem.
(Sanjay Radia via suresh)
HADOOP-6433. Introduce asynchronous deletion of files via a pool of
threads. This can be used to delete files in the Distributed
Cache. (Zheng Shao via dhruba)
HADOOP-6415. Adds a common token interface for both job token and
delegation token. (Kan Zhang via ddas)
HADOOP-6408. Add a /conf servlet to dump running configuration.
(Todd Lipcon via tomwhite)
HADOOP-6520. Adds APIs to read/write Token and secret keys. Also
adds the automatic loading of tokens into UserGroupInformation
upon login. The tokens are read from a file specified in the
environment variable. (ddas)
HADOOP-6419. Adds SASL based authentication to RPC.
(Kan Zhang via ddas)
HADOOP-6510. Adds a way for superusers to impersonate other users
in a secure environment. (Jitendra Nath Pandey via ddas)
HADOOP-6421. Adds Symbolic links to FileContext, AbstractFileSystem.
It also adds a limited implementation for the local file system
(RawLocalFs) that allows local symlinks. (Eli Collins via Sanjay Radia)
HADOOP-6577. Add hidden configuration option "ipc.server.max.response.size"
to change the default 1 MB, the maximum size when large IPC handler
response buffer is reset. (suresh)
HADOOP-6568. Adds authorization for the default servlets.
(Vinod Kumar Vavilapalli via ddas)
HADOOP-6586. Log authentication and authorization failures and successes
for RPC (boryas)
HADOOP-6580. UGI should contain authentication method. (jnp via boryas)
HADOOP-6657. Add a capitalization method to StringUtils for MAPREDUCE-1545.
(Luke Lu via Steve Loughran)
HADOOP-6692. Add FileContext#listStatus that returns an iterator.
(hairong)
HADOOP-6869. Functionality to create file or folder on a remote daemon
side (Vinay Thota via cos)
IMPROVEMENTS
HADOOP-6798. Align Ivy version for all Hadoop subprojects. (cos)
HADOOP-6777. Implement a functionality for suspend and resume a process.
(Vinay Thota via cos)
HADOOP-6772. Utilities for system tests specific. (Vinay Thota via cos)
HADOOP-6771. Herriot's artifact id for Maven deployment should be set to
hadoop-core-instrumented (cos)
HADOOP-6752. Remote cluster control functionality needs JavaDocs
improvement (Balaji Rajagopalan via cos).
HADOOP-4565. Added CombineFileInputFormat to use data locality information
to create splits. (dhruba via zshao)
HADOOP-4936. Improvements to TestSafeMode. (shv)
HADOOP-4985. Remove unnecessary "throw IOException" declarations in
FSDirectory related methods. (szetszwo)
HADOOP-5017. Change NameNode.namesystem declaration to private. (szetszwo)
HADOOP-4794. Add branch information from the source version control into
the version information that is compiled into Hadoop. (cdouglas via
omalley)
HADOOP-5070. Increment copyright year to 2009, remove assertions of ASF
copyright to 2008. (cdouglas)
HADOOP-5037. Deprecate static FSNamesystem.getFSNamesystem(). (szetszwo)
HADOOP-5088. Include releaseaudit target as part of developer test-patch
target. (Giridharan Kesavan via nigel)
HADOOP-2721. Uses setsid when creating new tasks so that subprocesses of
this process will be within this new session (and this process will be
the process leader for all the subprocesses). Killing the process leader,
or the main Java task in Hadoop's case, kills the entire subtree of
processes. (Ravi Gummadi via ddas)
HADOOP-5097. Remove static variable JspHelper.fsn, a static reference to
a non-singleton FSNamesystem object. (szetszwo)
HADOOP-3327. Improves handling of READ_TIMEOUT during map output copying.
(Amareshwari Sriramadasu via ddas)
HADOOP-5124. Choose datanodes randomly instead of starting from the first
datanode for providing fairness. (hairong via szetszwo)
HADOOP-4930. Implement a Linux native executable that can be used to
launch tasks as users. (Sreekanth Ramakrishnan via yhemanth)
HADOOP-5122. Fix format of fs.default.name value in libhdfs test conf.
(Craig Macdonald via tomwhite)
HADOOP-5038. Direct daemon trace to debug log instead of stdout. (Jerome
Boulon via cdouglas)
HADOOP-5101. Improve packaging by adding 'all-jars' target building core,
tools, and example jars. Let findbugs depend on this rather than the 'tar'
target. (Giridharan Kesavan via cdouglas)
HADOOP-4868. Splits the hadoop script into three parts - bin/hadoop,
bin/mapred and bin/hdfs. (Sharad Agarwal via ddas)
HADOOP-1722. Adds support for TypedBytes and RawBytes in Streaming.
(Klaas Bosteels via ddas)
HADOOP-4220. Changes the JobTracker restart tests so that they take much
less time. (Amar Kamat via ddas)
HADOOP-4885. Try to restore failed name-node storage directories at
checkpoint time. (Boris Shkolnik via shv)
HADOOP-5209. Update year to 2009 for javadoc. (szetszwo)
HADOOP-5279. Remove unnecessary targets from test-patch.sh.
(Giridharan Kesavan via nigel)
HADOOP-5120. Remove the use of FSNamesystem.getFSNamesystem() from
UpgradeManagerNamenode and UpgradeObjectNamenode. (szetszwo)
HADOOP-5222. Add offset to datanode clienttrace. (Lei Xu via cdouglas)
HADOOP-5240. Skip re-building javadoc when it is already
up-to-date. (Aaron Kimball via cutting)
HADOOP-5042. Add a cleanup stage to log rollover in Chukwa appender.
(Jerome Boulon via cdouglas)
HADOOP-5264. Removes redundant configuration object from the TaskTracker.
(Sharad Agarwal via ddas)
HADOOP-5232. Enable patch testing to occur on more than one host.
(Giri Kesavan via nigel)
HADOOP-4546. Fix DF reporting for AIX. (Bill Habermaas via cdouglas)
HADOOP-5023. Add Tomcat support to HdfsProxy. (Zhiyong Zhang via cdouglas)
HADOOP-5317. Provide documentation for LazyOutput Feature.
(Jothi Padmanabhan via johan)
HADOOP-5455. Document rpc metrics context to the extent dfs, mapred, and
jvm contexts are documented. (Philip Zeyliger via cdouglas)
HADOOP-5358. Provide scripting functionality to the synthetic load
generator. (Jakob Homan via hairong)
HADOOP-5442. Paginate jobhistory display and add some search
capabilities. (Amar Kamat via acmurthy)
HADOOP-4842. Streaming now allows specifying a command for the combiner.
(Amareshwari Sriramadasu via ddas)
HADOOP-5196. avoiding unnecessary byte[] allocation in
SequenceFile.CompressedBytes and SequenceFile.UncompressedBytes.
(hong tang via mahadev)
HADOOP-4655. New method FileSystem.newInstance() that always returns
a newly allocated FileSystem object. (dhruba)
HADOOP-4788. Set Fair scheduler to assign both a map and a reduce on each
heartbeat by default. (matei)
HADOOP-5491. In contrib/index, better control memory usage.
(Ning Li via cutting)
HADOOP-5423. Include option of preserving file metadata in
SequenceFile::sort. (Michael Tamm via cdouglas)
HADOOP-5331. Add support for KFS appends. (Sriram Rao via cdouglas)
HADOOP-4365. Make Configuration::getProps protected in support of
meaningful subclassing. (Steve Loughran via cdouglas)
HADOOP-2413. Remove the static variable FSNamesystem.fsNamesystemObject.
(Konstantin Shvachko via szetszwo)
HADOOP-4584. Improve datanode block reports and associated file system
scan to avoid interfering with normal datanode operations.
(Suresh Srinivas via rangadi)
HADOOP-5502. Documentation for backup and checkpoint nodes.
(Jakob Homan via shv)
HADOOP-5485. Mask actions in the fair scheduler's servlet UI based on
value of webinterface.private.actions.
(Vinod Kumar Vavilapalli via yhemanth)
HADOOP-5581. HDFS should throw FileNotFoundException when opening
a file that does not exist. (Brian Bockelman via rangadi)
HADOOP-5509. PendingReplicationBlocks does not start monitor in the
constructor. (shv)
HADOOP-5494. Modify sorted map output merger to lazily read values,
rather than buffering at least one record for each segment. (Devaraj Das
via cdouglas)
HADOOP-5396. Provide ability to refresh queue ACLs in the JobTracker
without having to restart the daemon.
(Sreekanth Ramakrishnan and Vinod Kumar Vavilapalli via yhemanth)
HADOOP-4490. Provide ability to run tasks as job owners.
(Sreekanth Ramakrishnan via yhemanth)
HADOOP-5697. Change org.apache.hadoop.examples.Grep to use new
mapreduce api. (Amareshwari Sriramadasu via sharad)
HADOOP-5625. Add operation duration to clienttrace. (Lei Xu via cdouglas)
HADOOP-5705. Improve TotalOrderPartitioner efficiency by updating the trie
construction. (Dick King via cdouglas)
HADOOP-5589. Eliminate source limit of 64 for map-side joins imposed by
TupleWritable encoding. (Jingkei Ly via cdouglas)
HADOOP-5734. Correct block placement policy description in HDFS
Design document. (Konstantin Boudnik via shv)
HADOOP-5657. Validate data in TestReduceFetch to improve merge test
coverage. (cdouglas)
HADOOP-5613. Change S3Exception to checked exception.
(Andrew Hitchcock via tomwhite)
HADOOP-5717. Create public enum class for the Framework counters in
org.apache.hadoop.mapreduce. (Amareshwari Sriramadasu via sharad)
HADOOP-5217. Split AllTestDriver for core, hdfs and mapred. (sharad)
HADOOP-5364. Add certificate expiration warning to HsftpFileSystem and HDFS
proxy. (Zhiyong Zhang via cdouglas)
HADOOP-5733. Add map/reduce slot capacity and blacklisted capacity to
JobTracker metrics. (Sreekanth Ramakrishnan via cdouglas)
HADOOP-5596. Add EnumSetWritable. (He Yongqiang via szetszwo)
HADOOP-5727. Simplify hashcode for ID types. (Shevek via cdouglas)
HADOOP-5500. In DBOutputFormat, where field names are absent, permit the
number of fields to be sufficient to construct the select query. (Enis
Soztutar via cdouglas)
HADOOP-5081. Split TestCLI into HDFS, Mapred and Core tests. (sharad)
HADOOP-5015. Separate block management code from FSNamesystem. (Suresh
Srinivas via szetszwo)
HADOOP-5080. Add new test cases to TestMRCLI and TestHDFSCLI
(V.Karthikeyan via nigel)
HADOOP-5135. Splits the tests into different directories based on the
package. Four new test targets have been defined - run-test-core,
run-test-mapred, run-test-hdfs and run-test-hdfs-with-mr.
(Sharad Agarwal via ddas)
HADOOP-5771. Implements unit tests for LinuxTaskController.
(Sreekanth Ramakrishnan and Vinod Kumar Vavilapalli via yhemanth)
HADOOP-5419. Provide a facility to query the Queue ACLs for the
current user.
(Rahul Kumar Singh via yhemanth)
HADOOP-5780. Improve per-block message printed by "-metaSave" in HDFS.
(Raghu Angadi)
HADOOP-5823. Added a new class DeprecatedUTF8 to help with removing
UTF8 related javac warnings. These warnings are removed in
FSEditLog.java as a use case. (Raghu Angadi)
HADOOP-5824. Deprecate DataTransferProtocol.OP_READ_METADATA and remove
the corresponding unused codes. (Kan Zhang via szetszwo)
HADOOP-5721. Factor out EditLogFileInputStream and EditLogFileOutputStream
into independent classes. (Luca Telloli & Flavio Junqueira via shv)
HADOOP-5838. Fix a few javac warnings in HDFS. (Raghu Angadi)
HADOOP-5854. Fix a few "Inconsistent Synchronization" warnings in HDFS.
(Raghu Angadi)
HADOOP-5369. Small tweaks to reduce MapFile index size. (Ben Maurer
via sharad)
HADOOP-5858. Eliminate UTF8 and fix warnings in test/hdfs-with-mr package.
(shv)
HADOOP-5866. Move DeprecatedUTF8 from o.a.h.io to o.a.h.hdfs since it may
not be used outside hdfs. (Raghu Angadi)
HADOOP-5857. Move normal java methods from hdfs .jsp files to .java files.
(szetszwo)
HADOOP-5873. Remove deprecated methods randomDataNode() and
getDatanodeByIndex(..) in FSNamesystem. (szetszwo)
HADOOP-5572. Improves the progress reporting for the sort phase for both
maps and reduces. (Ravi Gummadi via ddas)
HADOOP-5839. Fix EC2 scripts to allow remote job submission.
(Joydeep Sen Sarma via tomwhite)
HADOOP-5877. Fix javac warnings in TestHDFSServerPorts, TestCheckpoint,
TestNameEditsConfig, TestStartup and TestStorageRestore.
(Jakob Homan via shv)
HADOOP-5438. Provide a single FileSystem method to create or
open-for-append to a file. (He Yongqiang via dhruba)
HADOOP-5472. Change DistCp to support globbing of input paths. (Dhruba
Borthakur and Rodrigo Schmidt via szetszwo)
HADOOP-5175. Don't unpack libjars on classpath. (Todd Lipcon via tomwhite)
HADOOP-5620. Add an option to DistCp for preserving modification and access
times. (Rodrigo Schmidt via szetszwo)
HADOOP-5664. Change map serialization so a lock is obtained only where
contention is possible, rather than for each write. (cdouglas)
HADOOP-5896. Remove the dependency of GenericOptionsParser on
Option.withArgPattern. (Giridharan Kesavan and Sharad Agarwal via
sharad)
HADOOP-5784. Makes the number of heartbeats that should arrive a second
at the JobTracker configurable. (Amareshwari Sriramadasu via ddas)
HADOOP-5955. Changes TestFileOutputFormat so that it uses LOCAL_MR
instead of CLUSTER_MR. (Jothi Padmanabhan via das)
HADOOP-5948. Changes TestJavaSerialization to use LocalJobRunner
instead of MiniMR/DFS cluster. (Jothi Padmanabhan via das)
HADOOP-2838. Add mapred.child.env to pass environment variables to
tasktracker's child processes. (Amar Kamat via sharad)
HADOOP-5961. DataNode process understands generic hadoop command line
options (like -Ddfs.property=value). (Raghu Angadi)
HADOOP-5938. Change org.apache.hadoop.mapred.jobcontrol to use new
api. (Amareshwari Sriramadasu via sharad)
HADOOP-2141. Improves the speculative execution heuristic. The heuristic
is currently based on the progress-rates of tasks and the expected time
to complete. Also, statistics about trackers are collected, and speculative
tasks are not given to the ones deduced to be slow.
(Andy Konwinski and ddas)
HADOOP-5952. Change "-1 tests included" wording in test-patch.sh.
(Gary Murry via szetszwo)
HADOOP-6106. Provides an option in ShellCommandExecutor to timeout
commands that do not complete within a certain amount of time.
(Sreekanth Ramakrishnan via yhemanth)
HADOOP-5925. EC2 scripts should exit on error. (tomwhite)
HADOOP-6109. Change Text to grow its internal buffer exponentially, rather
than the max of the current length and the proposed length, to improve
performance when reading large values. (thushara wijeratna via cdouglas)
HADOOP-2366. Support trimmed strings in Configuration. (Michele Catasta
via szetszwo)
HADOOP-6148. Implement a fast, pure Java CRC32 calculator which outperforms
java.util.zip.CRC32. (Todd Lipcon and Scott Carey via szetszwo)
HADOOP-6146. Upgrade to JetS3t version 0.7.1. (tomwhite)
HADOOP-6161. Add get/setEnum methods to Configuration. (cdouglas)
HADOOP-6160. Fix releaseaudit target to run on specific directories.
(gkesavan)
HADOOP-6169. Removing deprecated method calls in TFile. (hong tang via
mahadev)
HADOOP-6176. Add a couple package private methods to AccessTokenHandler
for testing. (Kan Zhang via szetszwo)
HADOOP-6182. Fix ReleaseAudit warnings (Giridharan Kesavan and Lee Tucker
via gkesavan)
HADOOP-6173. Change src/native/packageNativeHadoop.sh to package all
native library files. (Hong Tang via szetszwo)
HADOOP-6184. Provide an API to dump Configuration in a JSON format.
(V.V.Chaitanya Krishna via yhemanth)
HADOOP-6224. Add a method to WritableUtils performing a bounded read of an
encoded String. (Jothi Padmanabhan via cdouglas)
HADOOP-6133. Add a caching layer to Configuration::getClassByName to
alleviate a performance regression introduced in a compatibility layer.
(Todd Lipcon via cdouglas)
HADOOP-6252. Provide a method to determine if a deprecated key is set in
config file. (Jakob Homan via suresh)
HADOOP-5879. Read compression level and strategy from Configuration for
gzip compression. (He Yongqiang via cdouglas)
HADOOP-6216. Support comments in host files. (Ravi Phulari and Dmytro
Molkov via szetszwo)
HADOOP-6217. Update documentation for project split. (Corinne Chandel via
omalley)
HADOOP-6268. Add ivy jar to .gitignore. (Todd Lipcon via cdouglas)
HADOOP-6270. Support deleteOnExit in FileContext. (Suresh Srinivas via
szetszwo)
HADOOP-6233. Rename configuration keys towards API standardization and
backward compatibility. (Jitendra Pandey via suresh)
HADOOP-6260. Add additional unit tests for FileContext util methods.
(Gary Murry via suresh).
HADOOP-6309. Change build.xml to run tests with java asserts. (Eli
Collins via szetszwo)
HADOOP-6326. Hudson runs should check for AspectJ warnings and report
failure if any is present. (cos)
HADOOP-6329. Add build-fi directory to the ignore lists. (szetszwo)
HADOOP-5107. Use Maven ant tasks to publish the subproject jars.
(Giridharan Kesavan via omalley)
HADOOP-6343. Log unexpected throwable object caught in RPC. (Jitendra Nath
Pandey via szetszwo)
HADOOP-6367. Removes Access Token implementation from common.
(Kan Zhang via ddas)
HADOOP-6395. Upgrade some libraries to be consistent across common, hdfs,
and mapreduce. (omalley)
HADOOP-6398. Build is broken after HADOOP-6395 patch has been applied (cos)
HADOOP-6413. Move TestReflectionUtils to Common. (Todd Lipcon via tomwhite)
HADOOP-6283. Improve the exception messages thrown by
FileUtil$HardLink.getLinkCount(..). (szetszwo)
HADOOP-6279. Add Runtime::maxMemory to JVM metrics. (Todd Lipcon via
cdouglas)
HADOOP-6305. Unify build property names to facilitate cross-project
modifications. (cos)
HADOOP-6312. Remove unnecessary debug logging in Configuration constructor.
(Aaron Kimball via cdouglas)
HADOOP-6366. Reduce ivy console output to observable level. (cos)
HADOOP-6400. Log errors getting Unix UGI. (Todd Lipcon via tomwhite)
HADOOP-6346. Add support for specifying unpack pattern regex to
RunJar.unJar. (Todd Lipcon via tomwhite)
HADOOP-6422. Make RPC backend pluggable, protocol-by-protocol, to
ease evolution towards Avro. (cutting)
HADOOP-5958. Use JDK 1.6 File APIs in DF.java wherever possible.
(Aaron Kimball via tomwhite)
HADOOP-6222. Core doesn't have TestCommonCLI facility. (cos)
HADOOP-6394. Add a helper class to simplify FileContext related tests and
improve code reusability. (Jitendra Nath Pandey via suresh)
HADOOP-4656. Add a user to groups mapping service. (boryas, acmurthy)
HADOOP-6435. Make RPC.waitForProxy with timeout public. (Steve Loughran
via tomwhite)
HADOOP-6472. add tokenCache option to GenericOptionsParser for passing
file with secret keys to a map reduce job. (boryas)
HADOOP-3205. Read multiple chunks directly from FSInputChecker subclass
into user buffers. (Todd Lipcon via tomwhite)
HADOOP-6479. TestUTF8 assertions could fail with better text.
(Steve Loughran via tomwhite)
HADOOP-6155. Deprecate RecordIO anticipating Avro. (Tom White via cdouglas)
HADOOP-6492. Make some Avro serialization APIs public.
(Aaron Kimball via cutting)
HADOOP-6497. Add an adapter for Avro's SeekableInput interface, so
that Avro can read FileSystem data.
(Aaron Kimball via cutting)
HADOOP-6495. Identifier should be serialized after the password is
created in Token constructor. (jnp via boryas)
HADOOP-6518. Makes the UGI honor the env var KRB5CCNAME.
(Owen O'Malley via ddas)
HADOOP-6531. Enhance FileUtil with an API to delete all contents of a
directory. (Amareshwari Sriramadasu via yhemanth)
HADOOP-6547. Move DelegationToken into Common, so that it can be used by
MapReduce also. (devaraj via omalley)
HADOOP-6552. Puts renewTGT=true and useTicketCache=true for the keytab
Kerberos options. (ddas)
HADOOP-6534. Trim whitespace from directory lists initializing
LocalDirAllocator. (Todd Lipcon via cdouglas)
HADOOP-6559. Makes the RPC client automatically re-login when the SASL
connection setup fails. This is applicable only to keytab based logins.
(Devaraj Das)
HADOOP-6551. Delegation token renewing and cancelling should provide
meaningful exceptions when there are failures instead of returning
false. (omalley)
HADOOP-6583. Captures authentication and authorization metrics. (ddas)
HADOOP-6543. Allows secure clients to talk to unsecure clusters.
(Kan Zhang via ddas)
HADOOP-6579. Provide a mechanism for encoding/decoding Tokens from
a url-safe string and change the commons-codec library to 1.4. (omalley)
HADOOP-6596. Add a version field to the AbstractDelegationTokenIdentifier's
serialized value. (omalley)
HADOOP-6573. Support for persistent delegation tokens.
(Jitendra Pandey via shv)
HADOOP-6594. Provide a fetchdt tool via bin/hdfs. (jhoman via acmurthy)
HADOOP-6589. Provide better error messages when RPC authentication fails.
(Kan Zhang via omalley)
HADOOP-6599. Split existing RpcMetrics into RpcMetrics & RpcDetailedMetrics.
(Suresh Srinivas via Sanjay Radia)
HADOOP-6537. Declare more detailed exceptions in FileContext and
AbstractFileSystem. (Suresh Srinivas via Sanjay Radia)
HADOOP-6486. Fix common classes to work with Avro 1.3 reflection.
(cutting via tomwhite)
HADOOP-6591. HarFileSystem can handle paths with whitespace characters.
(Rodrigo Schmidt via dhruba)
HADOOP-6407. Have a way to automatically update Eclipse .classpath file
when new libs are added to the classpath through Ivy. (tomwhite)
HADOOP-3659. Patch to allow hadoop native to compile on Mac OS X.
(Colin Evans and Allen Wittenauer via tomwhite)
HADOOP-6471. StringBuffer -> StringBuilder - conversion of references
as necessary. (Kay Kay via tomwhite)
HADOOP-6646. Move HarFileSystem out of Hadoop Common. (mahadev)
HADOOP-6566. Add methods supporting, enforcing narrower permissions on
local daemon directories. (Arun Murthy and Luke Lu via cdouglas)
HADOOP-6705. Fix to work with 1.5 version of jiracli
(Giridharan Kesavan)
HADOOP-6658. Exclude Private elements from generated Javadoc. (tomwhite)
HADOOP-6635. Install/deploy source jars to Maven repo.
(Patrick Angeles via jghoman)
HADOOP-6717. Log levels in o.a.h.security.Groups too high.
(Todd Lipcon via jghoman)
HADOOP-6667. RPC.waitForProxy should retry through NoRouteToHostException.
(Todd Lipcon via tomwhite)
HADOOP-6677. InterfaceAudience.LimitedPrivate should take a string not an
enum. (tomwhite)
HADOOP-6678. Remove FileContext#isFile, isDirectory, and exists.
(Eli Collins via hairong)
HADOOP-6515. Make maximum number of http threads configurable.
(Scott Chen via zshao)
HADOOP-6563. Add more symlink tests to cover intermediate symlinks
in paths. (Eli Collins via suresh)
HADOOP-6585. Add FileStatus#isDirectory and isFile. (Eli Collins via
tomwhite)
HADOOP-6738. Move cluster_setup.xml from MapReduce to Common.
(Tom White via tomwhite)
HADOOP-6794. Move configuration and script files post split. (tomwhite)
HADOOP-6403. Deprecate EC2 bash scripts. (tomwhite)
HADOOP-6769. Add an API in FileSystem to get FileSystem instances based
on users. (ddas via boryas)
HADOOP-6813. Add a new newInstance method in FileSystem that takes
a "user" as argument. (ddas via boryas)
HADOOP-5595. NameNode does not need to run a replicator to choose a
random DataNode. (hairong)
HADOOP-5603. Improve NameNode's block placement performance. (hairong)
HADOOP-5638. More improvement on block placement performance. (hairong)
HADOOP-6180. NameNode slowed down when many files with same filename
were moved to Trash. (Boris Shkolnik via hairong)
HADOOP-6166. Further improve the performance of the pure-Java CRC32
implementation. (Tsz Wo (Nicholas), SZE via cdouglas)
HADOOP-6271. Add recursive and non recursive create and mkdir to
FileContext. (Sanjay Radia via suresh)
HADOOP-6261. Add URI based tests for FileContext.
(Ravi Phulari via suresh).
HADOOP-6307. Add a new SequenceFile.Reader constructor in order to support
reading an un-closed file. (szetszwo)
HADOOP-6467. Improve the performance on HarFileSystem.listStatus(..).
(mahadev via szetszwo)
HADOOP-6569. FsShell#cat should avoid calling unnecessary getFileStatus
before opening a file to read. (hairong)
HADOOP-6689. Add directory renaming test to existing FileContext tests.
(Eli Collins via suresh)
HADOOP-6713. The RPC server Listener thread is a scalability bottleneck.
(Dmytro Molkov via hairong)
BUG FIXES
HADOOP-6748. Removes hadoop.cluster.administrators; the cluster administrators
ACL is passed as a parameter in the constructor. (amareshwari)
HADOOP-6828. Herriot uses old way of accessing logs directories (Sreekanth
Ramakrishnan via cos)
HADOOP-6788. [Herriot] Exception exclusion functionality is not working
correctly. (Vinay Thota via cos)
HADOOP-6773. Ivy folder contains redundant files (cos)
HADOOP-5379. CBZip2InputStream to throw IOException on data crc error.
(Rodrigo Schmidt via zshao)
HADOOP-5326. Fixes CBZip2OutputStream data corruption problem.
(Rodrigo Schmidt via zshao)
HADOOP-4963. Fixes a logging issue to do with getting the location of the
map output file. (Amareshwari Sriramadasu via ddas)
HADOOP-2337. Trash should close FileSystem on exit and should not start
emptying thread if disabled. (shv)
HADOOP-5072. Fix failure in TestCodec because testSequenceFileGzipCodec
won't pass without native gzip codec. (Zheng Shao via dhruba)
HADOOP-5050. TestDFSShell.testFilePermissions should not assume umask
setting. (Jakob Homan via szetszwo)
HADOOP-4975. Set classloader for nested mapred.join configs. (Jingkei Ly
via cdouglas)
HADOOP-5078. Remove invalid AMI kernel in EC2 scripts. (tomwhite)
HADOOP-5045. FileSystem.isDirectory() should not be deprecated. (Suresh
Srinivas via szetszwo)
HADOOP-4960. Use datasource time, rather than system time, during metrics
demux. (Eric Yang via cdouglas)
HADOOP-5032. Export conf dir set in config script. (Eric Yang via cdouglas)
HADOOP-5176. Fix a typo in TestDFSIO. (Ravi Phulari via szetszwo)
HADOOP-4859. Distinguish daily rolling output dir by adding a timestamp.
(Jerome Boulon via cdouglas)
HADOOP-4959. Correct system metric collection from top on Redhat 5.1. (Eric
Yang via cdouglas)
HADOOP-5039. Fix log rolling regex to process only the relevant
subdirectories. (Jerome Boulon via cdouglas)
HADOOP-5095. Update Chukwa watchdog to accept config parameter. (Jerome
Boulon via cdouglas)
HADOOP-5147. Correct reference to agent list in Chukwa bin scripts. (Ari
Rabkin via cdouglas)
HADOOP-5148. Fix logic disabling watchdog timer in Chukwa daemon scripts.
(Ari Rabkin via cdouglas)
HADOOP-5100. Append, rather than truncate, when creating log4j metrics in
Chukwa. (Jerome Boulon via cdouglas)
HADOOP-5204. Fix broken trunk compilation on Hudson by letting
task-controller be an independent target in build.xml.
(Sreekanth Ramakrishnan via yhemanth)
HADOOP-5212. Fix the path translation problem introduced by HADOOP-4868
running on cygwin. (Sharad Agarwal via omalley)
HADOOP-5226. Add license headers to html and jsp files. (szetszwo)
HADOOP-5172. Disable misbehaving Chukwa unit test until it can be fixed.
(Jerome Boulon via nigel)
HADOOP-4933. Fixes a ConcurrentModificationException problem that shows up
when the history viewer is accessed concurrently.
(Amar Kamat via ddas)
HADOOP-5253. Remove duplicate call to cn-docs target.
(Giri Kesavan via nigel)
HADOOP-5251. Fix classpath for contrib unit tests to include clover jar.
(nigel)
HADOOP-5206. Synchronize "unprotected*" methods of FSDirectory on the root.
(Jakob Homan via shv)
HADOOP-5292. Fix NPE in KFS::getBlockLocations. (Sriram Rao via lohit)
HADOOP-5219. Adds a new property io.seqfile.local.dir for use by
SequenceFile, which earlier used mapred.local.dir. (Sharad Agarwal
via ddas)
HADOOP-5300. Fix ant javadoc-dev target and the typo in the class name
NameNodeActivtyMBean. (szetszwo)
HADOOP-5218. libhdfs unit test failed because it was unable to
start namenode/datanode. Fixed. (dhruba)
HADOOP-5273. Add license header to TestJobInProgress.java. (Jakob Homan
via szetszwo)
HADOOP-5229. Remove duplicate version variables in build files
(Stefan Groschupf via johan)
HADOOP-5383. Avoid building an unused string in NameNode's
verifyReplication(). (Raghu Angadi)
HADOOP-5347. Create a job output directory for the bbp examples. (szetszwo)
HADOOP-5341. Make hadoop-daemon scripts backwards compatible with the
changes in HADOOP-4868. (Sharad Agarwal via yhemanth)
HADOOP-5456. Fix javadoc links to ClientProtocol#restoreFailedStorage(..).
(Boris Shkolnik via szetszwo)
HADOOP-5458. Remove leftover Chukwa entries from build, etc. (cdouglas)
HADOOP-5386. Modify hdfsproxy unit test to start on a random port,
implement clover instrumentation. (Zhiyong Zhang via cdouglas)
HADOOP-5511. Add Apache License to EditLogBackupOutputStream. (shv)
HADOOP-5507. Fix JMXGet javadoc warnings. (Boris Shkolnik via szetszwo)
HADOOP-5191. Accessing HDFS with any ip or hostname should work as long
as it points to the interface NameNode is listening on. (Raghu Angadi)
HADOOP-5561. Add javadoc.maxmemory parameter to build, preventing OOM
exceptions from javadoc-dev. (Jakob Homan via cdouglas)
HADOOP-5149. Modify HistoryViewer to ignore unfamiliar files in the log
directory. (Hong Tang via cdouglas)
HADOOP-5477. Fix rare failure in TestCLI for hosts returning variations of
'localhost'. (Jakob Homan via cdouglas)
HADOOP-5194. Disables setsid for tasks run on cygwin.
(Ravi Gummadi via ddas)
HADOOP-5322. Fix misleading/outdated comments in JobInProgress.
(Amareshwari Sriramadasu via cdouglas)
HADOOP-5198. Fixes a problem to do with the task PID file being absent and
the JvmManager trying to look for it. (Amareshwari Sriramadasu via ddas)
HADOOP-5464. DFSClient did not treat write timeout of 0 properly.
(Raghu Angadi)
HADOOP-4045. Fix processing of IO errors in EditsLog.
(Boris Shkolnik via shv)
HADOOP-5462. Fixed a double free bug in the task-controller
executable. (Sreekanth Ramakrishnan via yhemanth)
HADOOP-5652. Fix a bug where in-memory segments are incorrectly retained in
memory. (cdouglas)
HADOOP-5533. Recovery duration shown on the jobtracker webpage is
inaccurate. (Amar Kamat via sharad)
HADOOP-5647. Fix TestJobHistory to not depend on /tmp. (Ravi Gummadi
via sharad)
HADOOP-5661. Fixes some findbugs warnings in o.a.h.mapred* packages and
suppresses a bunch of them. (Jothi Padmanabhan via ddas)
HADOOP-5704. Fix compilation problems in TestFairScheduler and
TestCapacityScheduler. (Chris Douglas via szetszwo)
HADOOP-5650. Fix safemode messages in the Namenode log. (Suresh Srinivas
via szetszwo)
HADOOP-5488. Removes the pidfile management for the Task JVM from the
framework and instead passes the PID back and forth between the
TaskTracker and the Task processes. (Ravi Gummadi via ddas)
HADOOP-5658. Fix Eclipse templates. (Philip Zeyliger via shv)
HADOOP-5709. Remove redundant synchronization added in HADOOP-5661. (Jothi
Padmanabhan via cdouglas)
HADOOP-5715. Add conf/mapred-queue-acls.xml to the ignore lists.
(szetszwo)
HADOOP-5592. Fix typo in Streaming doc in reference to GzipCodec.
(Corinne Chandel via tomwhite)
HADOOP-5656. Counter for S3N Read Bytes does not work. (Ian Nowland
via tomwhite)
HADOOP-5406. Fix JNI binding for ZlibCompressor::setDictionary. (Lars
Francke via cdouglas)
HADOOP-3426. Fix/provide handling when DNS lookup fails on the loopback
address. Also cache the result of the lookup. (Steve Loughran via cdouglas)
HADOOP-5476. Close the underlying InputStream in SequenceFile::Reader when
the constructor throws an exception. (Michael Tamm via cdouglas)
HADOOP-5675. Do not launch a job if DistCp has no work to do. (Tsz Wo
(Nicholas), SZE via cdouglas)
HADOOP-5737. Fixes a problem in the way the JobTracker used to talk to
other daemons like the NameNode to get the job's files. Also adds APIs
in the JobTracker to get the FileSystem objects as per the JobTracker's
configuration. (Amar Kamat via ddas)
HADOOP-5648. Not able to generate gridmix.jar on the already compiled
version of hadoop. (gkesavan)
HADOOP-5808. Fix import never used javac warnings in hdfs. (szetszwo)
HADOOP-5203. TT's version build is too restrictive. (Rick Cox via sharad)
HADOOP-5818. Revert the renaming from FSNamesystem.checkSuperuserPrivilege
to checkAccess by HADOOP-5643. (Amar Kamat via szetszwo)
HADOOP-5820. Fix findbugs warnings for http related codes in hdfs.
(szetszwo)
HADOOP-5822. Fix javac warnings in several dfs tests related to unnecessary
casts. (Jakob Homan via szetszwo)
HADOOP-5842. Fix a few javac warnings under packages fs and util.
(Hairong Kuang via szetszwo)
HADOOP-5845. Build successful despite test failure on test-core target.
(sharad)
HADOOP-5314. Prevent unnecessary saving of the file system image during
name-node startup. (Jakob Homan via shv)
HADOOP-5855. Fix javac warnings for DisallowedDatanodeException and
UnsupportedActionException. (szetszwo)
HADOOP-5582. Fixes a problem in Hadoop Vaidya to do with reading
counters from job history files. (Suhas Gogate via ddas)
HADOOP-5829. Fix javac warnings found in ReplicationTargetChooser,
FSImage, Checkpointer, SecondaryNameNode and a few other hdfs classes.
(Suresh Srinivas via szetszwo)
HADOOP-5835. Fix findbugs warnings found in Block, DataNode, NameNode and
a few other hdfs classes. (Suresh Srinivas via szetszwo)
HADOOP-5853. Undeprecate HttpServer.addInternalServlet method. (Suresh
Srinivas via szetszwo)
HADOOP-5801. Fixes the problem where, if the hosts file is changed across
restart, it should be refreshed upon recovery so that the excluded hosts are
lost and the maps are re-executed. (Amar Kamat via ddas)
HADOOP-5841. Resolve findbugs warnings in DistributedFileSystem,
DatanodeInfo, BlocksMap, DataNodeDescriptor. (Jakob Homan via szetszwo)
HADOOP-5878. Fix import and Serializable javac warnings found in hdfs jsp.
(szetszwo)
HADOOP-5782. Revert a few formatting changes introduced in HADOOP-5015.
(Suresh Srinivas via rangadi)
HADOOP-5687. NameNode throws NPE if fs.default.name is the default value.
(Philip Zeyliger via shv)
HADOOP-5867. Fix javac warnings found in NNBench and NNBenchWithoutMR.
(Konstantin Boudnik via szetszwo)
HADOOP-5728. Fixed FSEditLog.printStatistics IndexOutOfBoundsException.
(Wang Xu via johan)
HADOOP-5847. Fixed failing Streaming unit tests (gkesavan)
HADOOP-5252. Streaming overrides -inputformat option (Klaas Bosteels
via sharad)
HADOOP-5710. Counter MAP_INPUT_BYTES missing from new mapreduce api.
(Amareshwari Sriramadasu via sharad)
HADOOP-5809. Fix job submission, broken by errant directory creation.
(Sreekanth Ramakrishnan and Jothi Padmanabhan via cdouglas)
HADOOP-5635. Change distributed cache to work with other distributed file
systems. (Andrew Hitchcock via tomwhite)
HADOOP-5856. Fix "unsafe multithreaded use of DateFormat" findbugs warning
in DataBlockScanner. (Kan Zhang via szetszwo)
HADOOP-4864. Fixes a problem to do with -libjars with multiple jars when
client and cluster reside on different OSs. (Amareshwari Sriramadasu via
ddas)
HADOOP-5623. Fixes a problem to do with status messages getting overwritten
in streaming jobs. (Rick Cox and Jothi Padmanabhan via ddas)
HADOOP-5895. Fixes computation of count of merged bytes for logging.
(Ravi Gummadi via ddas)
HADOOP-5805. Fix problem using top level s3 buckets as input/output
directories. (Ian Nowland via tomwhite)
HADOOP-5940. Trunk eclipse-plugin build fails while trying to copy
commons-cli jar from the lib dir. (Giridharan Kesavan via gkesavan)
HADOOP-5864. Fix DMI and OBL findbugs in packages hdfs and metrics.
(hairong)
HADOOP-5935. Fix Hudson's release audit warnings link is broken.
(Giridharan Kesavan via gkesavan)
HADOOP-5947. Delete empty TestCombineFileInputFormat.java
HADOOP-5899. Move a log message in FSEditLog to the right place for
avoiding unnecessary log. (Suresh Srinivas via szetszwo)
HADOOP-5944. Add Apache license header to BlockManager.java. (Suresh
Srinivas via szetszwo)
HADOOP-5891. SecondaryNamenode is able to converse with the NameNode
even when the default value of dfs.http.address is not overridden.
(Todd Lipcon via dhruba)
HADOOP-5953. The isDirectory(..) and isFile(..) methods in KosmosFileSystem
should not be deprecated. (szetszwo)
HADOOP-5954. Fix javac warnings in TestFileCreation, TestSmallBlock,
TestFileStatus, TestDFSShellGenericOptions, TestSeekBug and
TestDFSStartupVersions. (szetszwo)
HADOOP-5956. Fix ivy dependency in hdfsproxy and capacity-scheduler.
(Giridharan Kesavan via szetszwo)
HADOOP-5836. Bug in S3N handling of directory markers using an object with
a trailing "/" causes jobs to fail. (Ian Nowland via tomwhite)
HADOOP-5861. s3n files are not getting split by default. (tomwhite)
HADOOP-5762. Fix a problem that DistCp does not copy empty directory.
(Rodrigo Schmidt via szetszwo)
HADOOP-5859. Fix "wait() or sleep() with locks held" findbugs warnings in
DFSClient. (Kan Zhang via szetszwo)
HADOOP-5457. Fix to continue to run builds even if contrib test fails
(Giridharan Kesavan via gkesavan)
HADOOP-5963. Remove an unnecessary exception catch in NNBench. (Boris
Shkolnik via szetszwo)
HADOOP-5989. Fix streaming test failure. (gkesavan)
HADOOP-5981. Fix a bug in HADOOP-2838 in parsing mapred.child.env.
(Amar Kamat via sharad)
HADOOP-5420. Fix LinuxTaskController to kill tasks using the process
groups they are launched with.
(Sreekanth Ramakrishnan via yhemanth)
HADOOP-6031. Remove @author tags from Java source files. (Ravi Phulari
via szetszwo)
HADOOP-5980. Fix LinuxTaskController so tasks get passed
LD_LIBRARY_PATH and other environment variables.
(Sreekanth Ramakrishnan via yhemanth)
HADOOP-4041. IsolationRunner does not work as documented.
(Philip Zeyliger via tomwhite)
HADOOP-6004. Fixes BlockLocation deserialization. (Jakob Homan via
szetszwo)
HADOOP-6079. Serialize proxySource as DatanodeInfo in DataTransferProtocol.
(szetszwo)
HADOOP-6096. Fix Eclipse project and classpath files following project
split. (tomwhite)
HADOOP-6122. The greater-than operator in test-patch.sh should be "-gt",
not ">". (szetszwo)
HADOOP-6114. Fix javadoc documentation for FileStatus.getLen.
(Dmitry Rzhevskiy via dhruba)
HADOOP-6131. A sysproperty should not be set unless the property
is set on the ant command line in build.xml (hong tang via mahadev)
HADOOP-6137. Fix project specific test-patch requirements
(Giridharan Kesavan)
HADOOP-6138. Eliminate the deprecated warnings introduced by H-5438.
(He Yongqiang via szetszwo)
HADOOP-6132. RPC client create an extra connection because of incorrect
key for connection cache. (Kan Zhang via rangadi)
HADOOP-6123. Add missing classpaths in hadoop-config.sh. (Sharad Agarwal
via szetszwo)
HADOOP-6172. Fix jar file names in hadoop-config.sh and include
${build.src} as a part of the source list in build.xml. (Hong Tang via
szetszwo)
HADOOP-6124. Fix javac warning detection in test-patch.sh. (Giridharan
Kesavan via szetszwo)
HADOOP-6177. FSInputChecker.getPos() would return position greater
than the file size. (Hong Tang via hairong)
HADOOP-6188. TestTrash uses java.io.File api but not hadoop FileSystem api.
(Boris Shkolnik via szetszwo)
HADOOP-6192. Fix Shell.getUlimitMemoryCommand to not rely on Map-Reduce
specific configs. (acmurthy)
HADOOP-6103. Clones the classloader as part of Configuration clone.
(Amareshwari Sriramadasu via ddas)
HADOOP-6152. Fix classpath variables in bin/hadoop-config.sh and some
other scripts. (Aaron Kimball via szetszwo)
HADOOP-6215. fix GenericOptionParser to deal with -D with '=' in the
value. (Amar Kamat via sharad)
HADOOP-6227. Fix Configuration to allow final parameters to be set to null
and prevent them from being overridden.
(Amareshwari Sriramadasu via yhemanth)
HADOOP-6199. Move io.map.skip.index property to core-default from mapred.
(Amareshwari Sriramadasu via cdouglas)
HADOOP-6229. Attempt to make a directory under an existing file on
LocalFileSystem should throw an Exception. (Boris Shkolnik via tomwhite)
HADOOP-6243. Fix a NullPointerException in processing deprecated keys.
(Sreekanth Ramakrishnan via yhemanth)
HADOOP-6009. S3N listStatus incorrectly returns null instead of empty
array when called on empty root. (Ian Nowland via tomwhite)
HADOOP-6181. Fix .eclipse.templates/.classpath for avro and jets3t jar
files. (Carlos Valiente via szetszwo)
HADOOP-6196. Fix a bug in SequenceFile.Reader where syncing within the
header would cause the reader to read the sync marker as a record. (Jay
Booth via cdouglas)
HADOOP-6250. Modify test-patch to delete copied XML files before running
patch build. (Rahul Kumar Singh via yhemanth)
HADOOP-6257. Two TestFileSystem classes are confusing
hadoop-hdfs-hdfwithmr. (Philip Zeyliger via tomwhite)
HADOOP-6151. Added a input filter to all of the http servlets that quotes
html characters in the parameters, to prevent cross site scripting
attacks. (omalley)
HADOOP-6274. Fix TestLocalFSFileContextMainOperations test failure.
(Gary Murry via suresh).
HADOOP-6281. Avoid null pointer exceptions when the jsps don't have
parameters (omalley)
HADOOP-6285. Fix the result type of the getParameterMap method in the
HttpServer.QuotingInputFilter. (omalley)
HADOOP-6286. Fix bugs in related to URI handling in glob methods in
FileContext. (Boris Shkolnik via suresh)
HADOOP-6292. Update native libraries guide. (Corinne Chandel via cdouglas)
HADOOP-6327. FileContext tests should not use /tmp and should clean up
files. (Sanjay Radia via szetszwo)
HADOOP-6318. Upgrade to Avro 1.2.0. (cutting)
HADOOP-6334. Fix GenericOptionsParser to understand URI for -files,
-libjars and -archives options and fix Path to support URI with fragment.
(Amareshwari Sriramadasu via szetszwo)
HADOOP-6344. Fix rm and rmr immediately delete files rather than sending
to trash, if a user is over-quota. (Jakob Homan via suresh)
HADOOP-6347. run-test-core-fault-inject runs a test case twice if
-Dtestcase is set (cos)
HADOOP-6375. Sync documentation for FsShell du with its implementation.
(Todd Lipcon via cdouglas)
HADOOP-6441. Protect web ui from cross site scripting attacks (XSS) on
the host http header and using encoded utf-7. (omalley)
HADOOP-6451. Fix build to run contrib unit tests. (Tom White via cdouglas)
HADOOP-6374. JUnit tests should never depend on anything in conf.
(Anatoli Fomenko via cos)
HADOOP-6290. Prevent duplicate slf4j-simple jar via Avro's classpath.
(Owen O'Malley via cdouglas)
HADOOP-6293. Fix FsShell -text to work on filesystems other than the
default. (cdouglas)
HADOOP-6341. Fix test-patch.sh for checkTests function. (gkesavan)
HADOOP-6314. Fix "fs -help" for the "-count" command. (Ravi Phulari via
szetszwo)
HADOOP-6405. Update Eclipse configuration to match changes to Ivy
configuration (Edwin Chan via cos)
HADOOP-6411. Remove deprecated file src/test/hadoop-site.xml. (cos)
HADOOP-6386. NameNode's HttpServer can't instantiate InetSocketAddress:
IllegalArgumentException is thrown (cos)
HADOOP-6254. Slow reads cause s3n to fail with SocketTimeoutException.
(Andrew Hitchcock via tomwhite)
HADOOP-6428. HttpServer sleeps with negative values. (cos)
HADOOP-6414. Add command line help for -expunge command.
(Ravi Phulari via tomwhite)
HADOOP-6391. Classpath should not be part of command line arguments.
(Cristian Ivascu via tomwhite)
HADOOP-6462. Target "compile" does not exist in contrib/cloud. (tomwhite)
HADOOP-6402. testConf.xsl is not well-formed XML. (Steve Loughran
via tomwhite)
HADOOP-6489. Fix 3 findbugs warnings. (Erik Steffl via suresh)
HADOOP-6517. Fix UserGroupInformation so that tokens are saved/retrieved
to/from the embedded Subject (Owen O'Malley & Kan Zhang via ddas)
HADOOP-6538. Sets hadoop.security.authentication to simple by default.
(ddas)
HADOOP-6540. Contrib unit tests have invalid XML for core-site, etc.
(Aaron Kimball via tomwhite)
HADOOP-6521. User specified umask using deprecated dfs.umask must override
server configured using new dfs.umaskmode for backward compatibility.
(suresh)
HADOOP-6522. Fix decoding of codepoint zero in UTF8. (cutting)
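For context on the entry above: Java's "modified UTF-8" (used by DataOutput.writeUTF and by Hadoop's UTF8 class) encodes codepoint zero as the two-byte sequence 0xC0 0x80 rather than a single 0x00 byte. A minimal, self-contained sketch (the class name is hypothetical, not Hadoop code) demonstrating the encoding:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ModifiedUtf8Demo {
    // Encode a string the way DataOutput.writeUTF does: a 2-byte big-endian
    // length followed by the modified UTF-8 bytes.
    static byte[] encode(String s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new DataOutputStream(bos).writeUTF(s);
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] b = encode("\u0000");
        // 2-byte length (2), then 0xC0 0x80 for codepoint zero
        System.out.printf("len=%d bytes=%02x %02x%n",
                          (b[0] << 8) | (b[1] & 0xff), b[2], b[3]);
    }
}
```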
HADOOP-6505. Use tr rather than sed to effect literal substitution in the
build script. (Allen Wittenauer via cdouglas)
HADOOP-6548. Replace mortbay imports with commons logging. (cdouglas)
HADOOP-6560. Handle invalid har:// uri in HarFileSystem. (szetszwo)
HADOOP-6549. TestDoAsEffectiveUser should use ip address of the host
for superuser ip check(jnp via boryas)
HADOOP-6570. RPC#stopProxy throws NPE if getProxyEngine(proxy) returns
null. (hairong)
HADOOP-6558. Return null in HarFileSystem.getFileChecksum(..) since no
checksum algorithm is implemented. (szetszwo)
HADOOP-6572. Makes sure that SASL encryption and push to responder
queue for the RPC response happens atomically. (Kan Zhang via ddas)
HADOOP-6545. Changes the Key for the FileSystem cache to be UGI (ddas)
HADOOP-6609. Fixed deadlock in RPC by replacing shared static
DataOutputBuffer in the UTF8 class with a thread local variable. (omalley)
HADOOP-6504. Invalid example in the documentation of
org.apache.hadoop.util.Tool. (Benoit Sigoure via tomwhite)
HADOOP-6546. BloomMapFile can return false negatives. (Clark Jefcoat
via tomwhite)
HADOOP-6593. TextRecordInputStream doesn't close SequenceFile.Reader.
(Chase Bradford via tomwhite)
HADOOP-6175. Incorrect version compilation with es_ES.ISO8859-15 locale
on Solaris 10. (Urko Benito via tomwhite)
HADOOP-6645. Bugs on listStatus for HarFileSystem (rodrigo via mahadev)
HADOOP-6645. Re: Bugs on listStatus for HarFileSystem (rodrigo via
mahadev)
HADOOP-6654. Fix code example in WritableComparable javadoc. (Tom White
via szetszwo)
HADOOP-6640. FileSystem.get() does RPC retries within a static
synchronized block. (hairong)
HADOOP-6691. TestFileSystemCaching sometimes hangs. (hairong)
HADOOP-6507. Hadoop Common Docs - delete 3 doc files that do not belong
under Common. (Corinne Chandel via tomwhite)
HADOOP-6439. Fixes handling of deprecated keys to follow order in which
keys are defined. (V.V.Chaitanya Krishna via yhemanth)
HADOOP-6690. FilterFileSystem correctly handles setTimes call.
(Rodrigo Schmidt via dhruba)
HADOOP-6703. Prevent renaming a file, directory or symbolic link to
itself. (Eli Collins via suresh)
HADOOP-6710. Symbolic umask for file creation is not conformant with posix.
(suresh)
HADOOP-6719. Insert all missing methods in FilterFs.
(Rodrigo Schmidt via dhruba)
HADOOP-6724. IPC doesn't properly handle IOEs thrown by socket factory.
(Todd Lipcon via tomwhite)
HADOOP-6722. NetUtils.connect should check that it hasn't connected a socket
to itself. (Todd Lipcon via tomwhite)
HADOOP-6634. Fix AccessControlList to use short names to verify access
control. (Vinod Kumar Vavilapalli via sharad)
HADOOP-6709. Re-instate deprecated FileSystem methods that were removed
after 0.20. (tomwhite)
HADOOP-6630. hadoop-config.sh fails to get executed if hadoop wrapper
scripts are in path. (Allen Wittenauer via tomwhite)
HADOOP-6742. Add methods from HADOOP-6709 to TestFilterFileSystem.
(Eli Collins via tomwhite)
HADOOP-6727. Remove UnresolvedLinkException from public FileContext APIs.
(Eli Collins via tomwhite)
HADOOP-6631. Fix FileUtil.fullyDelete() to continue deleting other files
despite failure at any level. (Contributed by Ravi Gummadi and
Vinod Kumar Vavilapalli)
HADOOP-6723. Unchecked exceptions thrown in IPC Connection should not
orphan clients. (Todd Lipcon via tomwhite)
HADOOP-6404. Rename the generated artifacts to common instead of core.
(tomwhite)
HADOOP-6461. Webapps aren't located correctly post-split.
(Todd Lipcon and Steve Loughran via tomwhite)
HADOOP-6826. Revert FileSystem create method that takes CreateFlags.
(tomwhite)
HADOOP-6782. TestAvroRpc fails with avro-1.3.1 and avro-1.3.2.
(Doug Cutting via tomwhite)
HADOOP-6800. Harmonize JAR library versions. (tomwhite)
HADOOP-6847. Problem staging 0.21.0 artifacts to Apache Nexus Maven
Repository (Giridharan Kesavan via cos)
HADOOP-6819. [Herriot] Shell command for getting the new exceptions in
the logs returning exitcode 1 after executing successfully. (Vinay Thota
via cos)
HADOOP-6839. [Herriot] Implement a functionality for getting the user list
for creating proxy users. (Vinay Thota via cos)
HADOOP-6836. [Herriot]: Generic method for adding/modifying the attributes
for new configuration. (Vinay Thota via cos)
HADOOP-6860. 'compile-fault-inject' should never be called directly.
(Konstantin Boudnik)
HADOOP-6790. Instrumented (Herriot) build uses too wide mask to include
aspect files. (Konstantin Boudnik)
HADOOP-6875. [Herriot] Cleanup of temp. configurations is needed upon
restart of a cluster (Vinay Thota via cos)
Release 0.20.3 - Unreleased
NEW FEATURES
HADOOP-6637. Benchmark for establishing RPC session. (shv)
BUG FIXES
HADOOP-6760. WebServer shouldn't increase port number in case of negative
port setting caused by Jetty's race (cos)
HADOOP-6881. Make WritableComparator initialize classes when
looking for their raw comparator, as classes often register raw
comparators in initializers, which are no longer automatically run
in Java 6 when a class is referenced. (cutting via omalley)
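A small illustration of the Java behavior this entry refers to: a class-literal reference does not run a class's static initializers, so a comparator registered in a static block stays invisible until the class is actively initialized. The registry below is a hypothetical stand-in for WritableComparator's, not Hadoop code:

```java
import java.util.HashMap;
import java.util.Map;

public class StaticInitDemo {
    // hypothetical stand-in for WritableComparator's comparator registry
    static final Map<Class<?>, String> REGISTRY = new HashMap<>();

    static class MyWritable {
        static {
            // comparators are often registered in static initializers
            REGISTRY.put(MyWritable.class, "raw-comparator");
        }
    }

    static String demo() throws ClassNotFoundException {
        Class<?> c = MyWritable.class;          // passive reference: no init
        boolean before = REGISTRY.containsKey(c);
        // what HADOOP-6881 does, in spirit: force initialization on lookup
        Class.forName(c.getName(), true, c.getClassLoader());
        boolean after = REGISTRY.containsKey(c);
        return before + " " + after;
    }

    public static void main(String[] args) throws ClassNotFoundException {
        // prints "false true" in a fresh JVM
        System.out.println(demo());
    }
}
```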
Release 0.20.2 - 2010-02-16
NEW FEATURES
HADOOP-6218. Adds a feature where TFile can be split by Record
Sequence number. (Hong Tang and Raghu Angadi via ddas)
BUG FIXES
HADOOP-6231. Allow caching of filesystem instances to be disabled on a
per-instance basis. (tomwhite)
HADOOP-5759. Fix for IllegalArgumentException when CombineFileInputFormat
is used as job InputFormat. (Amareshwari Sriramadasu via dhruba)
HADOOP-6097. Fix Path conversion in makeQualified and reset LineReader byte
count at the start of each block in Hadoop archives. (Ben Slusky, Tom
White, and Mahadev Konar via cdouglas)
HADOOP-6269. Fix threading issue with defaultResource in Configuration.
(Sreekanth Ramakrishnan via cdouglas)
HADOOP-6460. Reinitializes buffers used for serializing responses in ipc
server on exceeding maximum response size to free up Java heap. (suresh)
HADOOP-6315. Avoid incorrect use of BuiltInflater/BuiltInDeflater in
GzipCodec. (Aaron Kimball via cdouglas)
HADOOP-6498. IPC client bug may cause rpc call hang. (Ruyue Ma and
hairong via hairong)
IMPROVEMENTS
HADOOP-5611. Fix C++ libraries to build on Debian Lenny. (Todd Lipcon
via tomwhite)
HADOOP-5612. Some c++ scripts are not chmodded before ant execution.
(Todd Lipcon via tomwhite)
HADOOP-1849. Add undocumented configuration parameter for per handler
call queue size in IPC Server. (shv)
Release 0.20.1 - 2009-09-01
INCOMPATIBLE CHANGES
HADOOP-5726. Remove pre-emption from capacity scheduler code base.
(Rahul Kumar Singh via yhemanth)
HADOOP-5881. Simplify memory monitoring and scheduling related
configuration. (Vinod Kumar Vavilapalli via yhemanth)
NEW FEATURES
HADOOP-6080. Introduce -skipTrash option to rm and rmr.
(Jakob Homan via shv)
HADOOP-3315. Add a new, binary file format, TFile. (Hong Tang via cdouglas)
IMPROVEMENTS
HADOOP-5711. Change Namenode file close log to info. (szetszwo)
HADOOP-5736. Update the capacity scheduler documentation for features
like memory based scheduling, job initialization and removal of pre-emption.
(Sreekanth Ramakrishnan via yhemanth)
HADOOP-5714. Add a metric for NameNode getFileInfo operation. (Jakob Homan
via szetszwo)
HADOOP-4372. Improves the way history filenames are obtained and manipulated.
(Amar Kamat via ddas)
HADOOP-5897. Add name-node metrics to capture java heap usage.
(Suresh Srinivas via shv)
OPTIMIZATIONS
BUG FIXES
HADOOP-5691. Makes org.apache.hadoop.mapreduce.Reducer concrete class
instead of abstract. (Amareshwari Sriramadasu via sharad)
HADOOP-5646. Fixes a problem in TestQueueCapacities.
(Vinod Kumar Vavilapalli via ddas)
HADOOP-5655. TestMRServerPorts fails on java.net.BindException. (Devaraj
Das via hairong)
HADOOP-5654. TestReplicationPolicy.<init> fails on java.net.BindException.
(hairong)
HADOOP-5688. Fix HftpFileSystem checksum path construction. (Tsz Wo
(Nicholas) Sze via cdouglas)
HADOOP-4674. Fix fs help messages for -test, -text, -tail, -stat
and -touchz options. (Ravi Phulari via szetszwo)
HADOOP-5718. Remove the check for the default queue in capacity scheduler.
(Sreekanth Ramakrishnan via yhemanth)
HADOOP-5719. Remove jobs that failed initialization from the waiting queue
in the capacity scheduler. (Sreekanth Ramakrishnan via yhemanth)
HADOOP-4744. Attaching another fix to the jetty port issue. The TaskTracker
kills itself if it ever discovers that the port to which jetty is actually
bound is invalid (-1). (ddas)
HADOOP-5349. Fixes a problem in LocalDirAllocator to check for the return
path value that is returned for the case where the file we want to write
is of an unknown size. (Vinod Kumar Vavilapalli via ddas)
HADOOP-5636. Prevents a job from going to RUNNING state after it has been
KILLED (this used to happen when the SetupTask would come back with a
success after the job has been killed). (Amar Kamat via ddas)
HADOOP-5641. Fix a NullPointerException in capacity scheduler's memory
based scheduling code when jobs get retired. (yhemanth)
HADOOP-5828. Use absolute path for mapred.local.dir of JobTracker in
MiniMRCluster. (yhemanth)
HADOOP-4981. Fix capacity scheduler to schedule speculative tasks
correctly in the presence of High RAM jobs.
(Sreekanth Ramakrishnan via yhemanth)
HADOOP-5210. Solves a problem in the progress report of the reduce task.
(Ravi Gummadi via ddas)
HADOOP-5850. Fixes a problem to do with not being able to run jobs with
0 maps/reduces. (Vinod K V via ddas)
HADOOP-4626. Correct the API links in hdfs forrest doc so that they
point to the same version of hadoop. (szetszwo)
HADOOP-5883. Fixed tasktracker memory monitoring to account for
momentary spurts in memory usage due to java's fork() model.
(yhemanth)
HADOOP-5539. Fixes a problem to do with not preserving intermediate
output compression for merged data.
(Jothi Padmanabhan and Billy Pearson via ddas)
HADOOP-5932. Fixes a problem in capacity scheduler in computing
available memory on a tasktracker.
(Vinod Kumar Vavilapalli via yhemanth)
HADOOP-5908. Fixes a problem to do with ArithmeticException in the
JobTracker when there are jobs with 0 maps. (Amar Kamat via ddas)
HADOOP-5924. Fixes a corner case problem to do with job recovery with
empty history files. Also, after a JT restart, sends KillTaskAction to
tasks that report back but the corresponding job hasn't been initialized
yet. (Amar Kamat via ddas)
HADOOP-5882. Fixes a reducer progress update problem for new mapreduce
api. (Amareshwari Sriramadasu via sharad)
HADOOP-5746. Fixes a corner case problem in Streaming, where if an exception
happens in MROutputThread after the last call to the map/reduce method, the
exception goes undetected. (Amar Kamat via ddas)
HADOOP-5884. Fixes accounting in capacity scheduler so that high RAM jobs
take more slots. (Vinod Kumar Vavilapalli via yhemanth)
HADOOP-5937. Correct a safemode message in FSNamesystem. (Ravi Phulari
via szetszwo)
HADOOP-5869. Fix bug in assignment of setup / cleanup task that was
causing TestQueueCapacities to fail.
(Sreekanth Ramakrishnan via yhemanth)
HADOOP-5921. Fixes a problem in the JobTracker where it sometimes never used
to come up due to a system file creation on JobTracker's system-dir failing.
This problem would sometimes show up only when the FS for the system-dir
(usually HDFS) is started at nearly the same time as the JobTracker.
(Amar Kamat via ddas)
HADOOP-5920. Fixes a testcase failure for TestJobHistory.
(Amar Kamat via ddas)
HADOOP-6139. Fix the FsShell help messages for rm and rmr. (Jakob Homan
via szetszwo)
HADOOP-6145. Fix FsShell rm/rmr error messages when there is a FNFE.
(Jakob Homan via szetszwo)
HADOOP-6150. Users should be able to instantiate comparator using TFile
API. (Hong Tang via rangadi)
Release 0.20.0 - 2009-04-15
INCOMPATIBLE CHANGES
HADOOP-4210. Fix findbugs warnings for equals implementations of mapred ID
classes. Removed public, static ID::read and ID::forName; made ID an
abstract class. (Suresh Srinivas via cdouglas)
HADOOP-4253. Fix various warnings generated by findbugs.
Following deprecated methods in RawLocalFileSystem are removed:
public String getName()
public void lock(Path p, boolean shared)
public void release(Path p)
(Suresh Srinivas via johan)
HADOOP-4618. Move http server from FSNamesystem into NameNode.
FSNamesystem.getNameNodeInfoPort() is removed.
FSNamesystem.getDFSNameNodeMachine() and FSNamesystem.getDFSNameNodePort()
replaced by FSNamesystem.getDFSNameNodeAddress().
NameNode(bindAddress, conf) is removed.
(shv)
HADOOP-4567. GetFileBlockLocations returns the NetworkTopology
information of the machines where the blocks reside. (dhruba)
HADOOP-4435. The JobTracker WebUI displays the amount of heap memory
in use. (dhruba)
HADOOP-4628. Move Hive into a standalone subproject. (omalley)
HADOOP-4188. Removes task's dependency on concrete filesystems.
(Sharad Agarwal via ddas)
HADOOP-1650. Upgrade to Jetty 6. (cdouglas)
HADOOP-3986. Remove static Configuration from JobClient. (Amareshwari
Sriramadasu via cdouglas)
JobClient::setCommandLineConfig is removed
JobClient::getCommandLineConfig is removed
JobShell, TestJobShell classes are removed
HADOOP-4422. S3 file systems should not create bucket.
(David Phillips via tomwhite)
HADOOP-4035. Support memory based scheduling in capacity scheduler.
(Vinod Kumar Vavilapalli via yhemanth)
HADOOP-3497. Fix bug in overly restrictive file globbing with a
PathFilter. (tomwhite)
HADOOP-4445. Replace running task counts with running task
percentage in capacity scheduler UI. (Sreekanth Ramakrishnan via
yhemanth)
HADOOP-4631. Splits the configuration into three parts - one for core,
one for mapred and the last one for HDFS. (Sharad Agarwal via cdouglas)
HADOOP-3344. Fix libhdfs build to use autoconf and build the same
architecture (32 vs 64 bit) of the JVM running Ant. The libraries for
pipes, utils, and libhdfs are now all in c++/<os_osarch_jvmdatamodel>/lib.
(Giridharan Kesavan via nigel)
HADOOP-4874. Remove LZO codec because of licensing issues. (omalley)
HADOOP-4970. The full path name of a file is preserved inside Trash.
(Prasad Chakka via dhruba)
HADOOP-4103. NameNode keeps a count of missing blocks. It warns on
WebUI if there are such blocks. '-report' and '-metaSave' have extra
info to track such blocks. (Raghu Angadi)
HADOOP-4783. Change permissions on history files on the jobtracker
to be only group readable instead of world readable.
(Amareshwari Sriramadasu via yhemanth)
NEW FEATURES
HADOOP-4575. Add a proxy service for relaying HsftpFileSystem requests.
Includes client authentication via user certificates and config-based
access control. (Kan Zhang via cdouglas)
HADOOP-4661. Add DistCh, a new tool for distributed ch{mod,own,grp}.
(szetszwo)
HADOOP-4709. Add several new features and bug fixes to Chukwa.
Added Hadoop Infrastructure Care Center (UI for visualize data collected
by Chukwa)
Added FileAdaptor for streaming small file in one chunk
Added compression to archive and demux output
Added unit tests and validation for agent, collector, and demux map
reduce job
Added database loader for loading demux output (sequence file) to jdbc
connected database
Added algorithm to distribute collector load more evenly
(Jerome Boulon, Eric Yang, Andy Konwinski, Ariel Rabkin via cdouglas)
HADOOP-4179. Add Vaidya tool to analyze map/reduce job logs for performance
problems. (Suhas Gogate via omalley)
HADOOP-4029. Add NameNode storage information to the dfshealth page and
move DataNode information to a separated page. (Boris Shkolnik via
szetszwo)
HADOOP-4348. Add service-level authorization for Hadoop. (acmurthy)
HADOOP-4826. Introduce admin command saveNamespace. (shv)
HADOOP-3063 BloomMapFile - fail-fast version of MapFile for sparsely
populated key space (Andrzej Bialecki via stack)
HADOOP-1230. Add new map/reduce API and deprecate the old one. Generally,
the old code should work without problem. The new api is in
org.apache.hadoop.mapreduce and the old classes in org.apache.hadoop.mapred
are deprecated. Differences in the new API:
1. All of the methods take Context objects that allow us to add new
methods without breaking compatibility.
2. Mapper and Reducer now have a "run" method that is called once and
contains the control loop for the task, which lets applications
replace it.
3. Mapper and Reducer by default are Identity Mapper and Reducer.
4. The FileOutputFormats use part-r-00000 for the output of reduce 0 and
part-m-00000 for the output of map 0.
5. The reduce grouping comparator now uses the raw compare instead of
object compare.
6. The number of maps in FileInputFormat is controlled by min and max
split size rather than min size and the desired number of maps.
(omalley)
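The "run" method change (point 2 above) can be sketched as follows; the types here are simplified stand-ins for the real org.apache.hadoop.mapreduce classes, whose actual signatures are generic over key/value types:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class MapperSketch {
    // simplified stand-in for Mapper.Context: iteration plus output collection
    static class Context {
        private final Iterator<String> input;
        final List<String> output = new ArrayList<>();
        Context(Iterator<String> input) { this.input = input; }
        boolean nextKeyValue() { return input.hasNext(); }
        String getCurrentValue() { return input.next(); }
        void write(String v) { output.add(v); }
    }

    static class Mapper {
        // the default map() is the identity, as in the new API (point 3)
        protected void map(String value, Context ctx) { ctx.write(value); }

        // run() owns the control loop; applications may override it wholesale
        public void run(Context ctx) {
            while (ctx.nextKeyValue()) {
                map(ctx.getCurrentValue(), ctx);
            }
        }
    }

    static List<String> runIdentity(List<String> in) {
        Context ctx = new Context(in.iterator());
        new Mapper().run(ctx);
        return ctx.output;
    }

    public static void main(String[] args) {
        System.out.println(runIdentity(Arrays.asList("a", "b"))); // [a, b]
    }
}
```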
HADOOP-3305. Use Ivy to manage dependencies. (Giridharan Kesavan
and Steve Loughran via cutting)
IMPROVEMENTS
HADOOP-4749. Added a new counter REDUCE_INPUT_BYTES. (Yongqiang He via
zshao)
HADOOP-4234. Fix KFS "glue" layer to allow applications to interface
with multiple KFS metaservers. (Sriram Rao via lohit)
HADOOP-4245. Update to latest version of KFS "glue" library jar.
(Sriram Rao via lohit)
HADOOP-4244. Change test-patch.sh to check Eclipse classpath no matter
it is run by Hudson or not. (szetszwo)
HADOOP-3180. Add name of missing class to WritableName.getClass
IOException. (Pete Wyckoff via omalley)
HADOOP-4178. Make the capacity scheduler's default values configurable.
(Sreekanth Ramakrishnan via omalley)
HADOOP-4262. Generate better error message when client exception has null
message. (stevel via omalley)
HADOOP-4226. Refactor and document LineReader to make it more readily
understandable. (Yuri Pradkin via cdouglas)
HADOOP-4238. When listing jobs, if scheduling information isn't available
print NA instead of empty output. (Sreekanth Ramakrishnan via johan)
HADOOP-4284. Support filters that apply to all requests, or global filters,
to HttpServer. (Kan Zhang via cdouglas)
HADOOP-4276. Improve the hashing functions and deserialization of the
mapred ID classes. (omalley)
HADOOP-4485. Add a compile-native ant task, as a shorthand. (enis)
HADOOP-4454. Allow # comments in slaves file. (Rama Ramasamy via omalley)
HADOOP-3461. Remove hdfs.StringBytesWritable. (szetszwo)
HADOOP-4437. Use Halton sequence instead of java.util.Random in
PiEstimator. (szetszwo)
HADOOP-4572. Change INode and its sub-classes to package private.
(szetszwo)
HADOOP-4187. Does a runtime lookup for JobConf/JobConfigurable, and if
found, invokes the appropriate configure method. (Sharad Agarwal via ddas)
HADOOP-4453. Improve ssl configuration and handling in HsftpFileSystem,
particularly when used with DistCp. (Kan Zhang via cdouglas)
HADOOP-4583. Several code optimizations in HDFS. (Suresh Srinivas via
szetszwo)
HADOOP-3923. Remove org.apache.hadoop.mapred.StatusHttpServer. (szetszwo)
HADOOP-4622. Explicitly specify interpreter for non-native
pipes binaries. (Fredrik Hedberg via johan)
HADOOP-4505. Add a unit test to test faulty setup task and cleanup
task killing the job. (Amareshwari Sriramadasu via johan)
HADOOP-4608. Don't print a stack trace when the example driver gets an
unknown program to run. (Edward Yoon via omalley)
HADOOP-4645. Package HdfsProxy contrib project without the extra level
of directories. (Kan Zhang via omalley)
HADOOP-4126. Allow access to HDFS web UI on EC2 (tomwhite via omalley)
HADOOP-4612. Removes RunJar's dependency on JobClient.
(Sharad Agarwal via ddas)
HADOOP-4185. Adds setVerifyChecksum() method to FileSystem.
(Sharad Agarwal via ddas)
HADOOP-4523. Prevent too many tasks scheduled on a node from bringing
it down by monitoring for cumulative memory usage across tasks.
(Vinod Kumar Vavilapalli via yhemanth)
HADOOP-4640. Adds an input format that can split lzo compressed
text files. (johan)
HADOOP-4666. Launch reduces only after a few maps have run in the
Fair Scheduler. (Matei Zaharia via johan)
HADOOP-4339. Remove redundant calls from FileSystem/FsShell when
generating/processing ContentSummary. (David Phillips via cdouglas)
HADOOP-2774. Add counters tracking records spilled to disk in MapTask and
ReduceTask. (Ravi Gummadi via cdouglas)
HADOOP-4513. Initialize jobs asynchronously in the capacity scheduler.
(Sreekanth Ramakrishnan via yhemanth)
HADOOP-4649. Improve abstraction for spill indices. (cdouglas)
HADOOP-3770. Add gridmix2, an iteration on the gridmix benchmark. (Runping
Qi via cdouglas)
HADOOP-4708. Add support for dfsadmin commands in TestCLI. (Boris Shkolnik
via cdouglas)
HADOOP-4758. Add a splitter for metrics contexts to support more than one
type of collector. (cdouglas)
HADOOP-4722. Add tests for dfsadmin quota error messages. (Boris Shkolnik
via cdouglas)
HADOOP-4690. fuse-dfs - create source file/function + utils + config +
main source files. (pete wyckoff via mahadev)
HADOOP-3750. Fix and enforce module dependencies. (Sharad Agarwal via
tomwhite)
HADOOP-4747. Speed up FsShell::ls by removing redundant calls to the
filesystem. (David Phillips via cdouglas)
HADOOP-4305. Improves the blacklisting strategy, whereby, tasktrackers
that are blacklisted are not given tasks to run from other jobs, subject
to the following conditions (all must be met):
1) The TaskTracker has been blacklisted by at least 4 jobs (configurable)
2) The TaskTracker has been blacklisted 50% more times than
the average (configurable)
3) The cluster has less than 50% trackers blacklisted
Once in 24 hours, a TaskTracker blacklisted for all jobs is given a chance.
Restarting the TaskTracker moves it out of the blacklist.
(Amareshwari Sriramadasu via ddas)
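The three conditions above combine roughly as below; the thresholds 4, 50%, and 50% mirror the defaults described in the entry, but the method and parameter names are illustrative, not Hadoop's:

```java
public class BlacklistPolicy {
    // Returns whether a tracker should be skipped for all jobs, per the
    // three conditions listed in HADOOP-4305 (all must hold).
    static boolean isBlacklistedForCluster(int jobsBlacklisting,
                                           double avgBlacklistsPerTracker,
                                           int blacklistedTrackers,
                                           int totalTrackers) {
        boolean enoughJobs = jobsBlacklisting >= 4;                        // 1)
        boolean wellAboveAvg =
            jobsBlacklisting > 1.5 * avgBlacklistsPerTracker;              // 2)
        boolean clusterMostlyHealthy =
            blacklistedTrackers < 0.5 * totalTrackers;                     // 3)
        return enoughJobs && wellAboveAvg && clusterMostlyHealthy;
    }

    public static void main(String[] args) {
        // 6 jobs blacklisted this tracker, cluster average 3, 2 of 20 trackers
        System.out.println(isBlacklistedForCluster(6, 3.0, 2, 20)); // true
    }
}
```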
HADOOP-4688. Modify the MiniMRDFSSort unit test to spill multiple times,
exercising the map-side merge code. (cdouglas)
HADOOP-4737. Adds the KILLED notification when jobs get killed.
(Amareshwari Sriramadasu via ddas)
HADOOP-4728. Add a test exercising different namenode configurations.
(Boris Shkolnik via cdouglas)
HADOOP-4807. Adds JobClient commands to get the active/blacklisted tracker
names. Also adds commands to display running/completed task attempt IDs.
(ddas)
HADOOP-4699. Remove checksum validation from map output servlet. (cdouglas)
HADOOP-4838. Added a registry to automate metrics and mbeans management.
(Sanjay Radia via acmurthy)
HADOOP-3136. Fixed the default scheduler to assign multiple tasks to each
tasktracker per heartbeat, when feasible. To ensure locality isn't hurt
too badly, the scheduler will not assign more than one off-switch task per
heartbeat. The heartbeat interval is also halved since the task-tracker is
fixed to no longer send out heartbeats on each task completion. A
slow-start for scheduling reduces is introduced to ensure that reduces
aren't started till sufficient number of maps are done, else reduces of
jobs whose maps aren't scheduled might swamp the cluster.
Configuration changes to mapred-default.xml:
add mapred.reduce.slowstart.completed.maps
(acmurthy)
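The slow-start check introduced here can be sketched as a single predicate; mapred.reduce.slowstart.completed.maps is the real property name from the entry, while the method below is an illustrative reading of it, not the JobTracker's code:

```java
public class SlowStart {
    // Reduces may be scheduled once the configured fraction of maps
    // (mapred.reduce.slowstart.completed.maps) has completed.
    static boolean canScheduleReduces(int completedMaps, int totalMaps,
                                      double slowstartFraction) {
        return completedMaps >= Math.ceil(slowstartFraction * totalMaps);
    }

    public static void main(String[] args) {
        // with the fraction at 0.05, reduces wait for 5 of 100 maps
        System.out.println(canScheduleReduces(4, 100, 0.05) + " "
                         + canScheduleReduces(5, 100, 0.05)); // false true
    }
}
```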
HADOOP-4545. Add example and test case of secondary sort for the reduce.
(omalley)
HADOOP-4753. Refactor gridmix2 to reduce code duplication. (cdouglas)
HADOOP-4909. Fix Javadoc and make some of the API more consistent in their
use of the JobContext instead of Configuration. (omalley)
HADOOP-4920. Stop storing Forrest output in Subversion. (cutting)
HADOOP-4948. Add parameters java5.home and forrest.home to the ant commands
in test-patch.sh. (Giridharan Kesavan via szetszwo)
HADOOP-4830. Add end-to-end test cases for testing queue capacities.
(Vinod Kumar Vavilapalli via yhemanth)
HADOOP-4980. Improve code layout of capacity scheduler to make it
easier to fix some blocker bugs. (Vivek Ratan via yhemanth)
HADOOP-4916. Make user/location of Chukwa installation configurable by an
external properties file. (Eric Yang via cdouglas)
HADOOP-4950. Make the CompressorStream, DecompressorStream,
BlockCompressorStream, and BlockDecompressorStream public to facilitate
non-Hadoop codecs. (omalley)
HADOOP-4843. Collect job history and configuration in Chukwa. (Eric Yang
via cdouglas)
HADOOP-5030. Build Chukwa RPM to install into configured directory. (Eric
Yang via cdouglas)
HADOOP-4828. Updates documents to do with configuration (HADOOP-4631).
(Sharad Agarwal via ddas)
HADOOP-4939. Adds a test that would inject random failures for tasks in
large jobs and would also inject TaskTracker failures. (ddas)
HADOOP-4944. A configuration file can include other configuration
files. (Rama Ramasamy via dhruba)
HADOOP-4804. Provide Forrest documentation for the Fair Scheduler.
(Sreekanth Ramakrishnan via yhemanth)
HADOOP-5248. A testcase that checks for the existence of job directory
after the job completes. Fails if it exists. (ddas)
HADOOP-4664. Introduces multiple job initialization threads, where the
number of threads are configurable via mapred.jobinit.threads.
(Matei Zaharia and Jothi Padmanabhan via ddas)
HADOOP-4191. Adds a testcase for JobHistory. (Ravi Gummadi via ddas)
HADOOP-5466. Change documentation CSS style for headers and code. (Corinne
Chandel via szetszwo)
HADOOP-5275. Add ivy directory and files to built tar.
(Giridharan Kesavan via nigel)
HADOOP-5468. Add sub-menus to forrest documentation and make some minor
edits. (Corinne Chandel via szetszwo)
HADOOP-5437. Fix TestMiniMRDFSSort to properly test jvm-reuse. (omalley)
HADOOP-5521. Removes dependency of TestJobInProgress on RESTART_COUNT
JobHistory tag. (Ravi Gummadi via ddas)
OPTIMIZATIONS
HADOOP-3293. Fixes FileInputFormat to provide locations for splits
based on the rack/host that has the most bytes.
(Jothi Padmanabhan via ddas)
HADOOP-4683. Fixes Reduce shuffle scheduler to invoke
getMapCompletionEvents in a separate thread. (Jothi Padmanabhan
via ddas)
BUG FIXES
HADOOP-4204. Fix findbugs warnings related to unused variables, naive
Number subclass instantiation, Map iteration, and badly scoped inner
classes. (Suresh Srinivas via cdouglas)
HADOOP-4207. Update derby jar file to release 10.4.2 release.
(Prasad Chakka via dhruba)
HADOOP-4325. SocketInputStream.read() should return -1 in case EOF.
(Raghu Angadi)
HADOOP-4408. FsAction functions need not create new objects. (cdouglas)
HADOOP-4440. TestJobInProgressListener tests for jobs killed in queued
state (Amar Kamat via ddas)
HADOOP-4346. Implement blocking connect so that Hadoop is not affected
by selector problem with JDK default implementation. (Raghu Angadi)
HADOOP-4388. If there are invalid blocks in the transfer list, Datanode
should handle them and keep transferring the remaining blocks. (Suresh
Srinivas via szetszwo)
HADOOP-4587. Fix a typo in Mapper javadoc. (Koji Noguchi via szetszwo)
HADOOP-4530. In fsck, HttpServletResponse sendError fails with
IllegalStateException. (hairong)
HADOOP-4377. Fix a race condition in directory creation in
NativeS3FileSystem. (David Phillips via cdouglas)
HADOOP-4621. Fix javadoc warnings caused by duplicate jars. (Kan Zhang via
cdouglas)
HADOOP-4566. Deploy new hive code to support more types.
(Zheng Shao via dhruba)
HADOOP-4571. Add chukwa conf files to svn:ignore list. (Eric Yang via
szetszwo)
HADOOP-4589. Correct PiEstimator output messages and improve the code
readability. (szetszwo)
HADOOP-4650. Correct a mismatch between the default value of
local.cache.size in the config and the source. (Jeff Hammerbacher via
cdouglas)
HADOOP-4606. Fix cygpath error if the log directory does not exist.
(szetszwo via omalley)
HADOOP-4141. Fix bug in ScriptBasedMapping causing potential infinite
loop on misconfigured hadoop-site. (Aaron Kimball via tomwhite)
HADOOP-4691. Correct a link in the javadoc of IndexedSortable. (szetszwo)
HADOOP-4598. '-setrep' command skips under-replicated blocks. (hairong)
HADOOP-4429. Set defaults for user, group in UnixUserGroupInformation so
login fails more predictably when misconfigured. (Alex Loddengaard via
cdouglas)
HADOOP-4676. Fix broken URL in blacklisted tasktrackers page. (Amareshwari
Sriramadasu via cdouglas)
HADOOP-3422. Ganglia counter metrics are all reported with the metric
name "value", so the counter values cannot be seen. (Jason Attributor
and Brian Bockelman via stack)
HADOOP-4704. Fix javadoc typos "the the". (szetszwo)
HADOOP-4677. Fix semantics of FileSystem::getBlockLocations to return
meaningful values. (Hong Tang via cdouglas)
HADOOP-4669. Use correct operator when evaluating whether access time is
enabled (Dhruba Borthakur via cdouglas)
HADOOP-4732. Pass connection and read timeouts in the correct order when
setting up fetch in reduce. (Amareshwari Sriramadasu via cdouglas)
HADOOP-4558. Fix capacity reclamation in capacity scheduler.
(Amar Kamat via yhemanth)
HADOOP-4770. Fix rungridmix_2 script to work with RunJar. (cdouglas)
HADOOP-4738. When using git, the saveVersion script will use only the
commit hash for the version and not the message, which requires escaping.
(cdouglas)
HADOOP-4576. Show pending job count instead of task count in the UI per
queue in capacity scheduler. (Sreekanth Ramakrishnan via yhemanth)
HADOOP-4623. Maintain running tasks even if speculative execution is off.
(Amar Kamat via yhemanth)
HADOOP-4786. Fix broken compilation error in
TestTrackerBlacklistAcrossJobs. (yhemanth)
HADOOP-4785. Fixes the JobTracker heartbeat to not make two calls to
System.currentTimeMillis(). (Amareshwari Sriramadasu via ddas)
HADOOP-4792. Add generated Chukwa configuration files to version control
ignore lists. (cdouglas)
HADOOP-4796. Fix Chukwa test configuration, remove unused components. (Eric
Yang via cdouglas)
HADOOP-4708. Add binaries missed in the initial checkin for Chukwa. (Eric
Yang via cdouglas)
HADOOP-4805. Remove black list collector from Chukwa Agent HTTP Sender.
(Eric Yang via cdouglas)
HADOOP-4837. Move HADOOP_CONF_DIR configuration to chukwa-env.sh (Jerome
Boulon via cdouglas)
HADOOP-4825. Use ps instead of jps for querying process status in Chukwa.
(Eric Yang via cdouglas)
HADOOP-4844. Fixed javadoc for
org.apache.hadoop.fs.permission.AccessControlException to document that
it's deprecated in favour of
org.apache.hadoop.security.AccessControlException. (acmurthy)
HADOOP-4706. Close the underlying output stream in
IFileOutputStream::close. (Jothi Padmanabhan via cdouglas)
HADOOP-4855. Fixed command-specific help messages for refreshServiceAcl in
DFSAdmin and MRAdmin. (acmurthy)
HADOOP-4820. Remove unused method FSNamesystem::deleteInSafeMode. (Suresh
Srinivas via cdouglas)
HADOOP-4698. Lower io.sort.mb to 10 in the tests and raise the junit memory
limit to 512m from 256m. (Nigel Daley via cdouglas)
HADOOP-4860. Split TestFileTailingAdapters into three separate tests to
avoid contention. (Eric Yang via cdouglas)
HADOOP-3921. Fixed clover (code coverage) target to work with JDK 6.
(tomwhite via nigel)
HADOOP-4845. Modify the reduce input byte counter to record only the
compressed size and add a human-readable label. (Yongqiang He via cdouglas)
HADOOP-4458. Add a test creating symlinks in the working directory.
(Amareshwari Sriramadasu via cdouglas)
HADOOP-4879. Fix org.apache.hadoop.mapred.Counters to correctly define
Object.equals rather than depend on contentEquals api. (omalley via
acmurthy)
HADOOP-4791. Fix rpm build process for Chukwa. (Eric Yang via cdouglas)
HADOOP-4771. Correct initialization of the file count for directories
with quotas. (Ruyue Ma via shv)
HADOOP-4878. Fix eclipse plugin classpath file to point to ivy's resolved
lib directory and added the same to test-patch.sh. (Giridharan Kesavan via
acmurthy)
HADOOP-4774. Fix default values of some capacity scheduler configuration
items which would otherwise not work on a fresh checkout.
(Sreekanth Ramakrishnan via yhemanth)
HADOOP-4876. Fix capacity scheduler reclamation by updating count of
pending tasks correctly. (Sreekanth Ramakrishnan via yhemanth)
HADOOP-4849. Documentation for Service Level Authorization implemented in
HADOOP-4348. (acmurthy)
HADOOP-4827. Replace Consolidator with Aggregator macros in Chukwa (Eric
Yang via cdouglas)
HADOOP-4894. Correctly parse ps output in Chukwa jettyCollector.sh. (Ari
Rabkin via cdouglas)
HADOOP-4892. Close fds out of Chukwa ExecPlugin. (Ari Rabkin via cdouglas)
HADOOP-4889. Fix permissions in RPM packaging. (Eric Yang via cdouglas)
HADOOP-4869. Fixes the TT-JT heartbeat to have an explicit flag for
restart apart from the initialContact flag that there was earlier.
(Amareshwari Sriramadasu via ddas)
HADOOP-4716. Fixes ReduceTask.java to clear out the mapping between
hosts and MapOutputLocation upon a JT restart (Amar Kamat via ddas)
HADOOP-4880. Removes an unnecessary testcase from TestJobTrackerRestart.
(Amar Kamat via ddas)
HADOOP-4924. Fixes a race condition in TaskTracker re-init. (ddas)
HADOOP-4854. Read reclaim capacity interval from capacity scheduler
configuration. (Sreekanth Ramakrishnan via yhemanth)
HADOOP-4896. HDFS Fsck does not load HDFS configuration. (Raghu Angadi)
HADOOP-4956. Creates TaskStatus for failed tasks with an empty Counters
object instead of null. (ddas)
HADOOP-4979. Fix capacity scheduler to block cluster for failed high
RAM requirements across task types. (Vivek Ratan via yhemanth)
HADOOP-4949. Fix native compilation. (Chris Douglas via acmurthy)
HADOOP-4787. Fixes the testcase TestTrackerBlacklistAcrossJobs which was
earlier failing randomly. (Amareshwari Sriramadasu via ddas)
HADOOP-4914. Add description fields to Chukwa init.d scripts (Eric Yang via
cdouglas)
HADOOP-4884. Make tool tip date format match standard HICC format. (Eric
Yang via cdouglas)
HADOOP-4925. Make Chukwa sender properties configurable. (Ari Rabkin via
cdouglas)
HADOOP-4947. Make Chukwa command parsing more forgiving of whitespace. (Ari
Rabkin via cdouglas)
HADOOP-5026. Make chukwa/bin scripts executable in repository. (Andy
Konwinski via cdouglas)
HADOOP-4977. Fix a deadlock between the reclaimCapacity and assignTasks
in capacity scheduler. (Vivek Ratan via yhemanth)
HADOOP-4988. Fix reclaim capacity to work even when there are queues with
no capacity. (Vivek Ratan via yhemanth)
HADOOP-5065. Remove generic parameters from argument to
setIn/OutputFormatClass so that it works with SequenceIn/OutputFormat.
(cdouglas via omalley)
HADOOP-4818. Pass user config to instrumentation API. (Eric Yang via
cdouglas)
HADOOP-4993. Fix Chukwa agent configuration and startup to make it both
more modular and testable. (Ari Rabkin via cdouglas)
HADOOP-5048. Fix capacity scheduler to correctly cleanup jobs that are
killed after initialization, but before running.
(Sreekanth Ramakrishnan via yhemanth)
HADOOP-4671. Mark loop control variables shared between threads as
volatile. (cdouglas)
HADOOP-5079. HashFunction inadvertently destroys some randomness
(Jonathan Ellis via stack)
HADOOP-4999. A failure to write to FsEditsLog results in
IndexOutOfBounds exception. (Boris Shkolnik via rangadi)
HADOOP-5139. Catch IllegalArgumentException during metrics registration
in RPC. (Hairong Kuang via szetszwo)
HADOOP-5085. Copying a file to local with Crc throws an exception.
(hairong)
HADOOP-5211. Fix check for job completion in TestSetupAndCleanupFailure.
(enis)
HADOOP-5254. The Configuration class should be able to work with XML
parsers that do not support xmlinclude. (Steve Loughran via dhruba)
HADOOP-4692. Namenode in infinite loop for replicating/deleting corrupt
blocks. (hairong)
HADOOP-5255. Fix use of Math.abs to avoid overflow. (Jonathan Ellis via
cdouglas)
HADOOP-5269. Fixes a problem to do with tasktracker holding on to
FAILED_UNCLEAN or KILLED_UNCLEAN tasks forever. (Amareshwari Sriramadasu
via ddas)
HADOOP-5214. Fixes a ConcurrentModificationException while the Fairshare
Scheduler accesses the tasktrackers stored by the JobTracker.
(Rahul Kumar Singh via yhemanth)
HADOOP-5233. Addresses the three issues - Race condition in updating
status, NPE in TaskTracker task localization when the conf file is missing
(HADOOP-5234) and NPE in handling KillTaskAction of a cleanup task
(HADOOP-5235). (Amareshwari Sriramadasu via ddas)
HADOOP-5247. Introduces a broadcast of KillJobAction to all trackers when
a job finishes. This fixes a bunch of problems to do with NPE when a
completed job is not in memory and a tasktracker comes to the jobtracker
with a status report of a task belonging to that job. (Amar Kamat via ddas)
HADOOP-5282. Fixed job history logs for task attempts that are
failed by the JobTracker, say due to lost task trackers. (Amar
Kamat via yhemanth)
HADOOP-5241. Fixes a bug in disk-space resource estimation. Makes
the estimation formula linear where blowUp =
Total-Output/Total-Input. (Sharad Agarwal via ddas)
HADOOP-5142. Fix MapWritable#putAll to store key/value classes.
(Doğacan Güney via enis)
HADOOP-4744. Workaround for jetty6 returning -1 when getLocalPort
is invoked on the connector. The workaround patch retries a few
times before failing. (Jothi Padmanabhan via yhemanth)
HADOOP-5280. Adds a check to prevent a task state transition from
FAILED to any of UNASSIGNED, RUNNING, COMMIT_PENDING or
SUCCEEDED. (ddas)
HADOOP-5272. Fixes a problem to do with detecting whether an
attempt is the first attempt of a Task. This affects JobTracker
restart. (Amar Kamat via ddas)
HADOOP-5306. Fixes a problem to do with logging/parsing the http port of a
lost tracker. Affects JobTracker restart. (Amar Kamat via ddas)
HADOOP-5111. Fix Job::set* methods to work with generics. (cdouglas)
HADOOP-5274. Fix gridmix2 dependency on wordcount example. (cdouglas)
HADOOP-5145. Balancer sometimes runs out of memory after running
days or weeks. (hairong)
HADOOP-5338. Fix jobtracker restart to clear task completion
events cached by tasktrackers forcing them to fetch all events
afresh, thus avoiding missed task completion events on the
tasktrackers. (Amar Kamat via yhemanth)
HADOOP-4695. Change TestGlobalFilter so that it allows a web page to be
filtered more than once for a single access. (Kan Zhang via szetszwo)
HADOOP-5298. Change TestServletFilter so that it allows a web page to be
filtered more than once for a single access. (szetszwo)
HADOOP-5432. Disable ssl during unit tests in hdfsproxy, as it is unused
and causes failures. (cdouglas)
HADOOP-5416. Correct the shell command "fs -test" forrest doc description.
(Ravi Phulari via szetszwo)
HADOOP-5327. Fixed job tracker to remove files from system directory on
ACL check failures and also check ACLs on restart.
(Amar Kamat via yhemanth)
HADOOP-5395. Change the exception message when a job is submitted to an
invalid queue. (Rahul Kumar Singh via yhemanth)
HADOOP-5276. Fixes a problem to do with updating the start time of
a task when the tracker that ran the task is lost. (Amar Kamat via
ddas)
HADOOP-5278. Fixes a problem to do with logging the finish time of
a task during recovery (after a JobTracker restart). (Amar Kamat
via ddas)
HADOOP-5490. Fixes a synchronization problem in the
EagerTaskInitializationListener class. (Jothi Padmanabhan via
ddas)
HADOOP-5493. The shuffle copier threads return the codecs back to
the pool when the shuffle completes. (Jothi Padmanabhan via ddas)
HADOOP-5414. Fixes IO exception while executing hadoop fs -touchz
fileName by making sure that lease renewal thread exits before dfs
client exits. (hairong)
HADOOP-5103. FileInputFormat now reuses the clusterMap network
topology object and that brings down the log messages in the
JobClient to do with NetworkTopology.add significantly. (Jothi
Padmanabhan via ddas)
HADOOP-5483. Fixes a problem in the Directory Cleanup Thread due to which
TestMiniMRWithDFS sometimes used to fail. (ddas)
HADOOP-5281. Prevent sharing incompatible ZlibCompressor instances between
GzipCodec and DefaultCodec. (cdouglas)
HADOOP-5463. Balancer throws "Not a host:port pair" unless port is
specified in fs.default.name. (Stuart White via hairong)
HADOOP-5514. Fix JobTracker metrics and add metrics for waiting, failed
tasks. (cdouglas)
HADOOP-5516. Fix NullPointerException in TaskMemoryManagerThread
that comes when monitored processes disappear when the thread is
running. (Vinod Kumar Vavilapalli via yhemanth)
HADOOP-5382. Support combiners in the new context object API. (omalley)
HADOOP-5471. Fixes a problem to do with updating the log.index file in the
case where a cleanup task is run. (Amareshwari Sriramadasu via ddas)
HADOOP-5534. Fixed a deadlock in Fair scheduler's servlet.
(Rahul Kumar Singh via yhemanth)
HADOOP-5328. Fixes a problem in the renaming of job history files during
job recovery. (Amar Kamat via ddas)
HADOOP-5417. Don't ignore InterruptedExceptions that happen when calling
into rpc. (omalley)
HADOOP-5320. Add a close() in TestMapReduceLocal. (Jothi Padmanabhan
via szetszwo)
HADOOP-5520. Fix a typo in disk quota help message. (Ravi Phulari
via szetszwo)
HADOOP-5519. Remove claims from mapred-default.xml that prime numbers
of tasks are helpful. (Owen O'Malley via szetszwo)
HADOOP-5484. TestRecoveryManager fails with FileAlreadyExistsException.
(Amar Kamat via hairong)
HADOOP-5564. Limit the JVM heap size in the java command for initializing
JAVA_PLATFORM. (Suresh Srinivas via szetszwo)
HADOOP-5565. Add API for failing/finalized jobs to the JT metrics
instrumentation. (Jerome Boulon via cdouglas)
HADOOP-5390. Remove duplicate jars from tarball, src from binary tarball
added by hdfsproxy. (Zhiyong Zhang via cdouglas)
HADOOP-5066. Building binary tarball should not build docs/javadocs, copy
src, or run jdiff. (Giridharan Kesavan via cdouglas)
HADOOP-5459. Fix undetected CRC errors where intermediate output is closed
before it has been completely consumed. (cdouglas)
HADOOP-5571. Remove widening primitive conversion in TupleWritable mask
manipulation. (Jingkei Ly via cdouglas)
HADOOP-5588. Remove an unnecessary call to listStatus(..) in
FileSystem.globStatusInternal(..). (Hairong Kuang via szetszwo)
HADOOP-5473. Solves a race condition in killing a task - the state is KILLED
if there is a user request pending to kill the task and the TT reported
the state as SUCCESS. (Amareshwari Sriramadasu via ddas)
HADOOP-5576. Fix LocalRunner to work with the new context object API in
mapreduce. (Tom White via omalley)
HADOOP-4374. Installs a shutdown hook in the Task JVM so that log.index is
updated before the JVM exits. Also makes the update to log.index atomic.
(Ravi Gummadi via ddas)
HADOOP-5577. Add a verbose flag to mapreduce.Job.waitForCompletion to get
the running job's information printed to the user's stdout as it runs.
(omalley)
HADOOP-5607. Fix NPE in TestCapacityScheduler. (cdouglas)
HADOOP-5605. All the replicas incorrectly got marked as corrupt. (hairong)
HADOOP-5337. JobTracker, upon restart, now waits for the TaskTrackers to
join back before scheduling new tasks. This fixes race conditions associated
with greedy scheduling as was the case earlier. (Amar Kamat via ddas)
HADOOP-5227. Fix distcp so -update and -delete can be meaningfully
combined. (Tsz Wo (Nicholas), SZE via cdouglas)
HADOOP-5305. Increase number of files and print debug messages in
TestCopyFiles. (szetszwo)
HADOOP-5548. Add synchronization for JobTracker methods in RecoveryManager.
(Amareshwari Sriramadasu via sharad)
HADOOP-3810. NameNode seems unstable on a cluster with little space left.
(hairong)
HADOOP-5068. Fix NPE in TestCapacityScheduler. (Vinod Kumar Vavilapalli
via szetszwo)
HADOOP-5585. Clear FileSystem statistics between tasks when jvm-reuse
is enabled. (omalley)
HADOOP-5394. JobTracker might schedule 2 attempts of the same task
with the same attempt id across restarts. (Amar Kamat via sharad)
HADOOP-5645. After HADOOP-4920 we need a place to checkin
releasenotes.html. (nigel)
Release 0.19.2 - Unreleased
BUG FIXES
HADOOP-5154. Fixes a deadlock in the fairshare scheduler.
(Matei Zaharia via yhemanth)
HADOOP-5146. Fixes a race condition that causes LocalDirAllocator to miss
files. (Devaraj Das via yhemanth)
HADOOP-4638. Fixes job recovery to not crash the job tracker for problems
with a single job file. (Amar Kamat via yhemanth)
HADOOP-5384. Fix a problem that DataNodeCluster creates blocks with
generationStamp == 1. (szetszwo)
HADOOP-5376. Fixes the code handling lost tasktrackers to set the task state
to KILLED_UNCLEAN only for relevant type of tasks.
(Amareshwari Sriramadasu via yhemanth)
HADOOP-5285..
(ddas)
HADOOP-5392. Fixes a problem to do with JT crashing during recovery when
the job files are garbled. (Amar Kamat via ddas)
HADOOP-5332. Appending to files is not allowed (by default) unless
dfs.support.append is set to true. (dhruba)
HADOOP-5333. libhdfs supports appending to files. (dhruba)
HADOOP-3998. Fix dfsclient exception when JVM is shutdown. (dhruba)
HADOOP-5440. Fixes a problem to do with removing a taskId from the list
of taskIds that the TaskTracker's TaskMemoryManager manages.
(Amareshwari Sriramadasu via ddas)
HADOOP-5446. Restore TaskTracker metrics. (cdouglas)
HADOOP-5449. Fixes the history cleaner thread.
(Amareshwari Sriramadasu via ddas)
HADOOP-5479. NameNode should not send empty block replication request to
DataNode. (hairong)
HADOOP-5259. Job with output hdfs:/user/<username>/outputpath (no
authority) fails with Wrong FS. (Doug Cutting via hairong)
HADOOP-5522. Documents the setup/cleanup tasks in the mapred tutorial.
(Amareshwari Sriramadasu via ddas)
HADOOP-5549. ReplicationMonitor should schedule both replication and
deletion work in one iteration. (hairong)
HADOOP-5554. DataNodeCluster and CreateEditsLog should create blocks with
the same generation stamp value. (hairong via szetszwo)
HADOOP-5231. Clones the TaskStatus before passing it to the JobInProgress.
(Amareshwari Sriramadasu via ddas)
HADOOP-4719. Fix documentation of 'ls' format for FsShell. (Ravi Phulari
via cdouglas)
HADOOP-5374. Fixes a NPE problem in getTasksToSave method.
(Amareshwari Sriramadasu via ddas)
HADOOP-4780. Cache the size of directories in DistributedCache, avoiding
long delays in recalculating it. (He Yongqiang via cdouglas)
HADOOP-5551. Prevent directory destruction on file create.
(Brian Bockelman via shv)
HADOOP-5671. Fix FNF exceptions when copying from old versions of
HftpFileSystem. (Tsz Wo (Nicholas), SZE via cdouglas)
HADOOP-5213. Fix Null pointer exception caused when bzip2compression
was used and user closed a output stream without writing any data.
(Zheng Shao via dhruba)
HADOOP-5579. Set errno correctly in libhdfs for permission, quota, and FNF
conditions. (Brian Bockelman via cdouglas)
HADOOP-5816. Fixes a problem in the KeyFieldBasedComparator to do with
ArrayIndexOutOfBounds exception. (He Yongqiang via ddas)
HADOOP-5951. Add Apache license header to StorageInfo.java. (Suresh
Srinivas via szetszwo)
Release 0.19.1 - 2009-02-23
IMPROVEMENTS
HADOOP-4739. Fix spelling and grammar, improve phrasing of some sections in
mapred tutorial. (Vivek Ratan via cdouglas)
HADOOP-3894. DFSClient logging improvements. (Steve Loughran via shv)
HADOOP-5126. Remove empty file BlocksWithLocations.java (shv)
HADOOP-5127. Remove public methods in FSDirectory. (Jakob Homan via shv)
BUG FIXES
HADOOP-4697. Fix getBlockLocations in KosmosFileSystem to handle multiple
blocks correctly. (Sriram Rao via cdouglas)
HADOOP-4420. Add null checks for job, caused by invalid job IDs.
(Aaron Kimball via tomwhite)
HADOOP-4632. Fix TestJobHistoryVersion to use test.build.dir instead of the
current working directory for scratch space. (Amar Kamat via cdouglas)
HADOOP-4508. Fix FSDataOutputStream.getPos() for append. (dhruba via
szetszwo)
HADOOP-4727. Fix a group checking bug in fill_stat_structure(...) in
fuse-dfs. (Brian Bockelman via szetszwo)
HADOOP-4836. Correct typos in mapred related documentation. (Jordà Polo
via szetszwo)
HADOOP-4821. Usage descriptions in the Quotas guide documentation are
incorrect. (Boris Shkolnik via hairong)
HADOOP-4847. Moves the loading of OutputCommitter to the Task.
(Amareshwari Sriramadasu via ddas)
HADOOP-4966. Marks completed setup tasks for removal.
(Amareshwari Sriramadasu via ddas)
HADOOP-4982. TestFsck should run in Eclipse. (shv)
HADOOP-5008. TestReplication#testPendingReplicationRetry leaves an opened
fd unclosed. (hairong)
HADOOP-4906. Fix TaskTracker OOM by keeping a shallow copy of JobConf in
TaskTracker.TaskInProgress. (Sharad Agarwal via acmurthy)
HADOOP-4918. Fix bzip2 compression to work with Sequence Files.
(Zheng Shao via dhruba).
HADOOP-4965. TestFileAppend3 should close FileSystem. (shv)
HADOOP-4967. Fixes a race condition in the JvmManager to do with killing
tasks. (ddas)
HADOOP-5009. DataNode#shutdown sometimes leaves data block scanner
verification log unclosed. (hairong)
HADOOP-5086. Use the appropriate FileSystem for trash URIs. (cdouglas)
HADOOP-4955. Make DBOutputFormat use column names from setOutput().
(Kevin Peterson via enis)
HADOOP-4862. Minor : HADOOP-3678 did not remove all the cases of
spurious IOExceptions logged by DataNode. (Raghu Angadi)
HADOOP-5034. NameNode should send both replication and deletion requests
to DataNode in one reply to a heartbeat. (hairong)
HADOOP-4759. Removes temporary output directory for failed and killed
tasks by launching special CLEANUP tasks for the same.
(Amareshwari Sriramadasu via ddas)
HADOOP-5161. Accepted sockets do not get placed in
DataXceiverServer#childSockets. (hairong)
HADOOP-5193. Correct calculation of edits modification time. (shv)
HADOOP-4494. Allow libhdfs to append to files.
(Pete Wyckoff via dhruba)
HADOOP-5166. Fix JobTracker restart to work when ACLs are configured
for the JobTracker. (Amar Kamat via yhemanth).
HADOOP-5067. Fixes TaskInProgress.java to keep track of count of failed and
killed tasks correctly. (Amareshwari Sriramadasu via ddas)
HADOOP-4760. HDFS streams should not throw exceptions when closed twice.
(enis)
Release 0.19.0 - 2008-11-18
INCOMPATIBLE CHANGES
HADOOP-3595. Remove deprecated methods for mapred.combine.once
functionality, which were necessary to provide backwards-compatible
combiner semantics for 0.18. (cdouglas via omalley)
HADOOP-3667. Remove the following deprecated methods from JobConf:
addInputPath(Path)
getInputPaths()
getMapOutputCompressionType()
getOutputPath()
getSystemDir()
setInputPath(Path)
setMapOutputCompressionType(CompressionType style)
setOutputPath(Path)
(Amareshwari Sriramadasu via omalley)
HADOOP-3652. Remove deprecated class OutputFormatBase.
(Amareshwari Sriramadasu via cdouglas)
HADOOP-2885. Break the hadoop.dfs package into separate packages under
hadoop.hdfs that reflect whether they are client, server, protocol,
etc. DistributedFileSystem and DFSClient have moved and are now
considered package private. (Sanjay Radia via omalley)
HADOOP-2325. Require Java 6. (cutting)
HADOOP-372. Add support for multiple input paths with a different
InputFormat and Mapper for each path. (Chris Smith via tomwhite)
HADOOP-1700. Support appending to file in HDFS. (dhruba)
HADOOP-3792. Make FsShell -test consistent with unix semantics, returning
zero for true and non-zero for false. (Ben Slusky via cdouglas)
HADOOP-3664. Remove the deprecated method InputFormat.validateInput,
which is no longer needed. (tomwhite via omalley)
HADOOP-3549. Give more meaningful errno's in libhdfs. In particular,
EACCES is returned for permission problems. (Ben Slusky via omalley)
HADOOP-4036. ResourceStatus was added to TaskTrackerStatus by HADOOP-3759,
so increment the InterTrackerProtocol version. (Hemanth Yamijala via
omalley)
HADOOP-3150. Moves task promotion to tasks. Defines a new interface for
committing output files. Moves job setup to jobclient, and moves jobcleanup
to a separate task. (Amareshwari Sriramadasu via ddas)
HADOOP-3446. Keep map outputs in memory during the reduce. Remove
fs.inmemory.size.mb and replace with properties defining in memory map
output retention during the shuffle and reduce relative to maximum heap
usage. (cdouglas)
HADOOP-3245. Adds the feature for supporting JobTracker restart. Running
jobs can be recovered from the history file. The history file format has
been modified to support recovery. The task attempt ID now has the
JobTracker start time to distinguish attempts of the same TIP across
restarts. (Amar Ramesh Kamat via ddas)
HADOOP-4007. Remove DFSFileInfo - FileStatus is sufficient.
(Sanjay Radia via hairong)
HADOOP-3722. Fixed Hadoop Streaming and Hadoop Pipes to use the Tool
interface and GenericOptionsParser. (Enis Soztutar via acmurthy)
HADOOP-2816. Cluster summary at name node web reports the space
utilization as:
Configured Capacity: capacity of all the data directories - Reserved space
Present Capacity: Space available for dfs, i.e. remaining + used space
DFS Used%: DFS used space/Present Capacity
(Suresh Srinivas via hairong)
HADOOP-3938. Disk space quotas for HDFS. This is similar to namespace
quotas in 0.18. (rangadi)
HADOOP-4293. Make Configuration Writable and remove unreleased
WritableJobConf. Configuration.write is renamed to writeXml. (omalley)
HADOOP-4281. Change dfsadmin to report available disk space in a format
consistent with the web interface as defined in HADOOP-2816. (Suresh
Srinivas via cdouglas)
HADOOP-4430. Further change the cluster summary at name node web that was
changed in HADOOP-2816:
Non DFS Used - This indicates the disk space taken by non-DFS files from
the Configured Capacity
DFS Used % - DFS Used % of Configured Capacity
DFS Remaining % - Remaining % of Configured Capacity available for DFS use
DFS command line report reflects the same change. Config parameter
dfs.datanode.du.pct is no longer used and is removed from the
hadoop-default.xml. (Suresh Srinivas via hairong)
HADOOP-4116. Balancer should provide better resource management. (hairong)
HADOOP-4599. BlocksMap and BlockInfo made package private. (shv)
NEW FEATURES
HADOOP-3341. Allow streaming jobs to specify the field separator for map
and reduce input and output. The new configuration values are:
stream.map.input.field.separator
stream.map.output.field.separator
stream.reduce.input.field.separator
stream.reduce.output.field.separator
All of them default to "\t". (Zheng Shao via omalley)
HADOOP-3479. Defines the configuration file for the resource manager in
Hadoop. You can configure various parameters related to scheduling, such
as queues and queue properties here. The properties for a queue follow a
naming convention, such as hadoop.rm.queue.queue-name.property-name.
(Hemanth Yamijala via ddas)
HADOOP-3149. Adds a way in which map/reduce tasks can create multiple
outputs. (Alejandro Abdelnur via ddas)
HADOOP-3714. Add a new contrib, bash-tab-completion, which enables
bash tab completion for the bin/hadoop script. See the README file
in the contrib directory for the installation. (Chris Smith via enis)
HADOOP-3730. Adds a new JobConf constructor that disables loading
default configurations. (Alejandro Abdelnur via ddas)
HADOOP-3772. Add a new Hadoop Instrumentation api for the JobTracker and
the TaskTracker, refactor Hadoop Metrics as an implementation of the api.
(Ari Rabkin via acmurthy)
HADOOP-2302. Provides a comparator for numerical sorting of key fields.
(ddas)
HADOOP-153. Provides a way to skip bad records. (Sharad Agarwal via ddas)
HADOOP-657. Free disk space should be modelled and used by the scheduler
to make scheduling decisions. (Ari Rabkin via omalley)
HADOOP-3719. Initial checkin of Chukwa, which is a data collection and
analysis framework. (Jerome Boulon, Andy Konwinski, Ari Rabkin,
and Eric Yang)
HADOOP-3873. Add -filelimit and -sizelimit options to distcp to cap the
number of files/bytes copied in a particular run to support incremental
updates and mirroring. (Tsz Wo (Nicholas), SZE via cdouglas)
HADOOP-3585. FailMon package for hardware failure monitoring and
analysis of anomalies. (Ioannis Koltsidas via dhruba)
HADOOP-1480. Add counters to the C++ Pipes API. (acmurthy via omalley)
HADOOP-3854. Add support for pluggable servlet filters in the HttpServers.
(Tsz Wo (Nicholas) Sze via omalley)
HADOOP-3759. Provides ability to run memory intensive jobs without
affecting other running tasks on the nodes. (Hemanth Yamijala via ddas)
HADOOP-3746. Add a fair share scheduler. (Matei Zaharia via omalley)
HADOOP-3754. Add a thrift interface to access HDFS. (dhruba via omalley)
HADOOP-3828. Provides a way to write skipped records to DFS.
(Sharad Agarwal via ddas)
HADOOP-3948. Separate name-node edits and fsimage directories.
(Lohit Vijayarenu via shv)
HADOOP-3939. Add an option to DistCp to delete files at the destination
not present at the source. (Tsz Wo (Nicholas) Sze via cdouglas)
HADOOP-3601. Add a new contrib module for Hive, which is a sql-like
query processing tool that uses map/reduce. (Ashish Thusoo via omalley)
HADOOP-3866. Added sort and multi-job updates in the JobTracker web ui.
(Craig Weisenfluh via omalley)
HADOOP-3698. Add access control to control who is allowed to submit or
modify jobs in the JobTracker. (Hemanth Yamijala via omalley)
HADOOP-1869. Support access times for HDFS files. (dhruba)
HADOOP-3941. Extend FileSystem API to return file-checksums.
(szetszwo)
HADOOP-3581. Prevents memory intensive user tasks from taking down
nodes. (Vinod K V via ddas)
HADOOP-3970. Provides a way to recover counters written to JobHistory.
(Amar Kamat via ddas)
HADOOP-3702. Adds ChainMapper and ChainReducer classes allow composing
chains of Maps and Reduces in a single Map/Reduce job, something like
MAP+ / REDUCE MAP*. (Alejandro Abdelnur via ddas)
HADOOP-3445. Add capacity scheduler that provides guaranteed capacities to
queues as a percentage of the cluster. (Vivek Ratan via omalley)
HADOOP-3992. Add a synthetic load generation facility to the test
directory. (hairong via szetszwo)
HADOOP-3981. Implement a distributed file checksum algorithm in HDFS
and change DistCp to use file checksum for comparing src and dst files
(szetszwo)
HADOOP-3829. Narrown down skipped records based on user acceptable value.
(Sharad Agarwal via ddas)
HADOOP-3930. Add common interfaces for the pluggable schedulers and the
cli & gui clients. (Sreekanth Ramakrishnan via omalley)
HADOOP-4176. Implement getFileChecksum(Path) in HftpFileSystem. (szetszwo)
HADOOP-249. Reuse JVMs across Map-Reduce Tasks.
Configuration changes to hadoop-default.xml:
add mapred.job.reuse.jvm.num.tasks
(Devaraj Das via acmurthy)
HADOOP-4070. Provide a mechanism in Hive for registering UDFs from the
query language. (tomwhite)
HADOOP-2536. Implement a JDBC based database input and output formats to
allow Map-Reduce applications to work with databases. (Fredrik Hedberg and
Enis Soztutar via acmurthy)
HADOOP-3019. A new library to support total order partitions.
(cdouglas via omalley)
HADOOP-3924. Added a 'KILLED' job status. (Subramaniam Krishnan via
acmurthy)
IMPROVEMENTS
HADOOP-4205. hive: metastore and ql to use the refactored SerDe library.
(zshao)
HADOOP-4106. libhdfs: add time, permission and user attribute support
(part 2). (Pete Wyckoff through zshao)
HADOOP-4104. libhdfs: add time, permission and user attribute support.
(Pete Wyckoff through zshao)
HADOOP-3908. libhdfs: better error message if llibhdfs.so doesn't exist.
(Pete Wyckoff through zshao)
HADOOP-3732. Delay intialization of datanode block verification till
the verification thread is started. (rangadi)
HADOOP-1627. Various small improvements to 'dfsadmin -report' output.
(rangadi)
HADOOP-3577. Tools to inject blocks into name node and simulated
data nodes for testing. (Sanjay Radia via hairong)
HADOOP-2664. Add a lzop compatible codec, so that files compressed by lzop
may be processed by map/reduce. (cdouglas via omalley)
HADOOP-3655. Add additional ant properties to control junit. (Steve
Loughran via omalley)
HADOOP-3543. Update the copyright year to 2008. (cdouglas via omalley)
HADOOP-3587. Add a unit test for the contrib/data_join framework.
(cdouglas)
HADOOP-3402. Add terasort example program (omalley)
HADOOP-3660. Add replication factor for injecting blocks in simulated
datanodes. (Sanjay Radia via cdouglas)
HADOOP-3684. Add a cloning function to the contrib/data_join framework
permitting users to define a more efficient method for cloning values from
the reduce than serialization/deserialization. (Runping Qi via cdouglas)
HADOOP-3478. Improves the handling of map output fetching. Now the
randomization is by the hosts (and not the map outputs themselves).
(Jothi Padmanabhan via ddas)
HADOOP-3617. Removed redundant checks of accounting space in MapTask and
makes the spill thread persistent so as to avoid creating a new one for
each spill. (Chris Douglas via acmurthy)
HADOOP-3412. Factor the scheduler out of the JobTracker and make
it pluggable. (Tom White and Brice Arnould via omalley)
HADOOP-3756. Minor. Remove unused dfs.client.buffer.dir from
hadoop-default.xml. (rangadi)
HADOOP-3747. Adds counter suport for MultipleOutputs.
(Alejandro Abdelnur via ddas)
HADOOP-3169. LeaseChecker daemon should not be started in DFSClient
constructor. (TszWo (Nicholas), SZE via hairong)
HADOOP-3824. Move base functionality of StatusHttpServer to a core
package. (TszWo (Nicholas), SZE via cdouglas)
HADOOP-3646. Add a bzip2 compatible codec, so bzip compressed data
may be processed by map/reduce. (Abdul Qadeer via cdouglas)
HADOOP-3861. MapFile.Reader and Writer should implement Closeable.
(tomwhite via omalley)
HADOOP-3791. Introduce generics into ReflectionUtils. (Chris Smith via
cdouglas)
HADOOP-3694. Improve unit test performance by changing
MiniDFSCluster to listen only on 127.0.0.1. (cutting)
HADOOP-3620. Namenode should synchronously resolve a datanode's network
location when the datanode registers. (hairong)
HADOOP-3860. NNThroughputBenchmark is extended with rename and delete
benchmarks. (shv)
HADOOP-3892. Include unix group name in JobConf. (Matei Zaharia via johan)
HADOOP-3875. Change the time period between heartbeats to be relative to
the end of the heartbeat rpc, rather than the start. This causes better
behavior if the JobTracker is overloaded. (acmurthy via omalley)
HADOOP-3853. Move multiple input format (HADOOP-372) extension to
library package. (tomwhite via johan)
HADOOP-9. Use roulette scheduling for temporary space when the size
is not known. (Ari Rabkin via omalley)
HADOOP-3202. Use recursive delete rather than FileUtil.fullyDelete.
(Amareshwari Sriramadasu via omalley)
HADOOP-3368. Remove common-logging.properties from conf. (Steve Loughran
via omalley)
HADOOP-3851. Fix spelling mistake in FSNamesystemMetrics. (Steve Loughran
via omalley)
HADOOP-3780. Remove asynchronous resolution of network topology in the
JobTracker (Amar Kamat via omalley)
HADOOP-3852. Add ShellCommandExecutor.toString method to make nicer
error messages. (Steve Loughran via omalley)
HADOOP-3844. Include message of local exception in RPC client failures.
(Steve Loughran via omalley)
HADOOP-3935. Split out inner classes from DataNode.java. (johan)
HADOOP-3905. Create generic interfaces for edit log streams. (shv)
HADOOP-3062. Add metrics to DataNode and TaskTracker to record network
traffic for HDFS reads/writes and MR shuffling. (cdouglas)
HADOOP-3742. Remove HDFS from public java doc and add javadoc-dev for
generative javadoc for developers. (Sanjay Radia via omalley)
HADOOP-3944. Improve documentation for public TupleWritable class in
join package. (Chris Douglas via enis)
HADOOP-2330. Preallocate HDFS transaction log to improve performance.
(dhruba and hairong)
HADOOP-3965. Convert DataBlockScanner into a package private class. (shv)
HADOOP-3488. Prevent hadoop-daemon from rsync'ing log files (Stefan
Groshupf and Craig Macdonald via omalley)
HADOOP-3342. Change the kill task actions to require http post instead of
get to prevent accidental crawls from triggering it. (enis via omalley)
HADOOP-3937. Limit the job name in the job history filename to 50
characters. (Matei Zaharia via omalley)
HADOOP-3943. Remove unnecessary synchronization in
NetworkTopology.pseudoSortByDistance. (hairong via omalley)
HADOOP-3498. File globbing alternation should be able to span path
components. (tomwhite)
HADOOP-3361. Implement renames for NativeS3FileSystem.
(Albert Chern via tomwhite)
HADOOP-3605. Make EC2 scripts show an error message if AWS_ACCOUNT_ID is
unset. (Al Hoang via tomwhite)
HADOOP-4147. Remove unused class JobWithTaskContext from class
JobInProgress. (Amareshwari Sriramadasu via johan)
HADOOP-4151. Add a byte-comparable interface that both Text and
BytesWritable implement. (cdouglas via omalley)
HADOOP-4174. Move fs image/edit log methods from ClientProtocol to
NamenodeProtocol. (shv via szetszwo)
HADOOP-4181. Include a .gitignore and saveVersion.sh change to support
developing under git. (omalley)
HADOOP-4186. Factor LineReader out of LineRecordReader. (tomwhite via
omalley)
HADOOP-4184. Break the module dependencies between core, hdfs, and
mapred. (tomwhite via omalley)
HADOOP-4075. test-patch.sh now spits out ant commands that it runs.
(Ramya R via nigel)
HADOOP-4117. Improve configurability of Hadoop EC2 instances.
(tomwhite)
HADOOP-2411. Add support for larger CPU EC2 instance types.
(Chris K Wensel via tomwhite)
HADOOP-4083. Changed the configuration attribute queue.name to
mapred.job.queue.name. (Hemanth Yamijala via acmurthy)
HADOOP-4194. Added the JobConf and JobID to job-related methods in
JobTrackerInstrumentation for better metrics. (Mac Yang via acmurthy)
HADOOP-3975. Change test-patch script to report working the dir
modifications preventing the suite from being run. (Ramya R via cdouglas)
HADOOP-4124. Added a command-line switch to allow users to set job
priorities, also allow it to be manipulated via the web-ui. (Hemanth
Yamijala via acmurthy)
HADOOP-2165. Augmented JobHistory to include the URIs to the tasks'
userlogs. (Vinod Kumar Vavilapalli via acmurthy)
HADOOP-4062. Remove the synchronization on the output stream when a
connection is closed and also remove an undesirable exception when
a client is stoped while there is no pending RPC request. (hairong)
HADOOP-4227. Remove the deprecated class org.apache.hadoop.fs.ShellCommand.
(szetszwo)
HADOOP-4006. Clean up FSConstants and move some of the constants to
better places. (Sanjay Radia via rangadi)
HADOOP-4279. Trace the seeds of random sequences in append unit tests to
make itermitant failures reproducible. (szetszwo via cdouglas)
HADOOP-4209. Remove the change to the format of task attempt id by
incrementing the task attempt numbers by 1000 when the job restarts.
(Amar Kamat via omalley)
HADOOP-4301. Adds forrest doc for the skip bad records feature.
(Sharad Agarwal via ddas)
HADOOP-4354. Separate TestDatanodeDeath.testDatanodeDeath() into 4 tests.
(szetszwo)
HADOOP-3790. Add more unit tests for testing HDFS file append. (szetszwo)
HADOOP-4321. Include documentation for the capacity scheduler. (Hemanth
Yamijala via omalley)
HADOOP-4424. Change menu layout for Hadoop documentation (Boris Shkolnik
via cdouglas).
HADOOP-4438. Update forrest documentation to include missing FsShell
commands. (Suresh Srinivas via cdouglas)
HADOOP-4105. Add forrest documentation for libhdfs.
(Pete Wyckoff via cutting)
HADOOP-4510. Make getTaskOutputPath public. (Chris Wensel via omalley)
OPTIMIZATIONS
HADOOP-3556. Removed lock contention in MD5Hash by changing the
singleton MessageDigester by an instance per Thread using
ThreadLocal. (Iv?n de Prado via omalley)
HADOOP-3328. When client is writing data to DFS, only the last
datanode in the pipeline needs to verify the checksum. Saves around
30% CPU on intermediate datanodes. (rangadi)
HADOOP-3863. Use a thread-local string encoder rather than a static one
that is protected by a lock. (acmurthy via omalley)
HADOOP-3864. Prevent the JobTracker from locking up when a job is being
initialized. (acmurthy via omalley)
HADOOP-3816. Faster directory listing in KFS. (Sriram Rao via omalley)
HADOOP-2130. Pipes submit job should have both blocking and non-blocking
versions. (acmurthy via omalley)
HADOOP-3769. Make the SampleMapper and SampleReducer from
GenericMRLoadGenerator public, so they can be used in other contexts.
(Lingyun Yang via omalley)
HADOOP-3514. Inline the CRCs in intermediate files as opposed to reading
it from a different .crc file. (Jothi Padmanabhan via ddas)
HADOOP-3638. Caches the iFile index files in memory to reduce seeks
(Jothi Padmanabhan via ddas)
HADOOP-4225. FSEditLog.logOpenFile() should persist accessTime
rather than modificationTime. (shv)
HADOOP-4380. Made several new classes (Child, JVMId,
JobTrackerInstrumentation, QueueManager, ResourceEstimator,
TaskTrackerInstrumentation, and TaskTrackerMetricsInst) in
org.apache.hadoop.mapred package private instead of public. (omalley)
BUG FIXES
HADOOP-3563. Refactor the distributed upgrade code so that it is
easier to identify datanode and namenode related code. (dhruba)
HADOOP-3640. Fix the read method in the NativeS3InputStream. (tomwhite via
omalley)
HADOOP-3711. Fixes the Streaming input parsing to properly find the
separator. (Amareshwari Sriramadasu via ddas)
HADOOP-3725. Prevent TestMiniMRMapDebugScript from swallowing exceptions.
(Steve Loughran via cdouglas)
HADOOP-3726. Throw exceptions from TestCLI setup and teardown instead of
swallowing them. (Steve Loughran via cdouglas)
HADOOP-3721. Refactor CompositeRecordReader and related mapred.join classes
to make them clearer. (cdouglas)
HADOOP-3720. Re-read the config file when dfsadmin -refreshNodes is invoked
so dfs.hosts and dfs.hosts.exclude are observed. (lohit vijayarenu via
cdouglas)
HADOOP-3485. Allow writing to files over fuse.
(Pete Wyckoff via dhruba)
HADOOP-3723. The flags to the libhdfs.create call can be treated as
a bitmask. (Pete Wyckoff via dhruba)
HADOOP-3643. Filter out completed tasks when asking for running tasks in
the JobTracker web/ui. (Amar Kamat via omalley)
HADOOP-3777. Ensure that Lzo compressors/decompressors correctly handle the
case where native libraries aren't available. (Chris Douglas via acmurthy)
HADOOP-3728. Fix SleepJob so that it doesn't depend on temporary files,
this ensures we can now run more than one instance of SleepJob
simultaneously. (Chris Douglas via acmurthy)
HADOOP-3795. Fix saving image files on Namenode with different checkpoint
stamps. (Lohit Vijayarenu via mahadev)
HADOOP-3624. Improving createeditslog to create tree directory structure.
(Lohit Vijayarenu via mahadev)
HADOOP-3778. DFSInputStream.seek() did not retry in case of some errors.
(Luo Ning via rangadi)
HADOOP-3661. The handling of moving files deleted through fuse-dfs to
Trash made similar to the behaviour from dfs shell.
(Pete Wyckoff via dhruba)
HADOOP-3819. Unset LANG and LC_CTYPE in saveVersion.sh to make it
compatible with non-English locales. (Rong-En Fan via cdouglas)
HADOOP-3848. Cache calls to getSystemDir in the TaskTracker instead of
calling it for each task start. (acmurthy via omalley)
HADOOP-3131. Fix reduce progress reporting for compressed intermediate
data. (Matei Zaharia via acmurthy)
HADOOP-3796. fuse-dfs configuration is implemented as file system
mount options. (Pete Wyckoff via dhruba)
HADOOP-3836. Fix TestMultipleOutputs to correctly clean up. (Alejandro
Abdelnur via acmurthy)
HADOOP-3805. Improve fuse-dfs write performance.
(Pete Wyckoff via zshao)
HADOOP-3846. Fix unit test CreateEditsLog to generate paths correctly.
(Lohit Vjayarenu via cdouglas)
HADOOP-3904. Fix unit tests using the old dfs package name.
(TszWo (Nicholas), SZE via johan)
HADOOP-3319. Fix some HOD error messages to go stderr instead of
stdout. (Vinod Kumar Vavilapalli via omalley)
HADOOP-3907. Move INodeDirectoryWithQuota to its own .java file.
(Tsz Wo (Nicholas), SZE via hairong)
HADOOP-3919. Fix attribute name in hadoop-default for
mapred.jobtracker.instrumentation. (Ari Rabkin via omalley)
HADOOP-3903. Change the package name for the servlets to be hdfs instead of
dfs. (Tsz Wo (Nicholas) Sze via omalley)
HADOOP-3773. Change Pipes to set the default map output key and value
types correctly. (Koji Noguchi via omalley)
HADOOP-3952. Fix compilation error in TestDataJoin referencing dfs package.
(omalley)
HADOOP-3951. Fix package name for FSNamesystem logs and modify other
hard-coded Logs to use the class name. (cdouglas)
HADOOP-3889. Improve error reporting from HftpFileSystem, handling in
DistCp. (Tsz Wo (Nicholas), SZE via cdouglas)
HADOOP-3946. Fix TestMapRed after hadoop-3664. (tomwhite via omalley)
HADOOP-3949. Remove duplicate jars from Chukwa. (Jerome Boulon via omalley)
HADOOP-3933. DataNode sometimes sends up to io.byte.per.checksum bytes
more than required to client. (Ning Li via rangadi)
HADOOP-3962. Shell command "fs -count" should support paths with different
file systems. (Tsz Wo (Nicholas), SZE via mahadev)
HADOOP-3957. Fix javac warnings in DistCp and TestCopyFiles. (Tsz Wo
(Nicholas), SZE via cdouglas)
HADOOP-3958. Fix TestMapRed to check the success of test-job. (omalley via
acmurthy)
HADOOP-3985. Fix TestHDFSServerPorts to use random ports. (Hairong Kuang
via omalley)
HADOOP-3964. Fix javadoc warnings introduced by FailMon. (dhruba)
HADOOP-3785. Fix FileSystem cache to be case-insensitive for scheme and
authority. (Bill de hOra via cdouglas)
HADOOP-3506. Fix a rare NPE caused by error handling in S3. (Tom White via
cdouglas)
HADOOP-3705. Fix mapred.join parser to accept InputFormats named with
underscore and static, inner classes. (cdouglas)
HADOOP-4023. Fix javadoc warnings introduced when the HDFS javadoc was
made private. (omalley)
HADOOP-4030. Remove lzop from the default list of codecs. (Arun Murthy via
cdouglas)
HADOOP-3961. Fix task disk space requirement estimates for virtual
input jobs. Delays limiting task placement until after 10% of the maps
have finished. (Ari Rabkin via omalley)
HADOOP-2168. Fix problem with C++ record reader's progress not being
reported to framework. (acmurthy via omalley)
HADOOP-3966. Copy findbugs generated output files to PATCH_DIR while
running test-patch. (Ramya R via lohit)
HADOOP-4037. Fix the eclipse plugin for versions of kfs and log4j. (nigel
via omalley)
HADOOP-3950. Cause the Mini MR cluster to wait for task trackers to
register before continuing. (enis via omalley)
HADOOP-3910. Remove unused ClusterTestDFSNamespaceLogging and
ClusterTestDFS. (Tsz Wo (Nicholas), SZE via cdouglas)
HADOOP-3954. Disable record skipping by default. (Sharad Agarwal via
cdouglas)
HADOOP-4050. Fix TestFairScheduler to use absolute paths for the work
directory. (Matei Zaharia via omalley)
HADOOP-4069. Keep temporary test files from TestKosmosFileSystem under
test.build.data instead of /tmp. (lohit via omalley)
HADOOP-4078. Create test files for TestKosmosFileSystem in separate
directory under test.build.data. (lohit)
HADOOP-3968. Fix getFileBlockLocations calls to use FileStatus instead
of Path reflecting the new API. (Pete Wyckoff via lohit)
HADOOP-3963. libhdfs does not exit on its own, instead it returns error
to the caller and behaves as a true library. (Pete Wyckoff via dhruba)
HADOOP-4100. Removes the cleanupTask scheduling from the Scheduler
implementations and moves it to the JobTracker.
(Amareshwari Sriramadasu via ddas)
HADOOP-4097. Make hive work well with speculative execution turned on.
(Joydeep Sen Sarma via dhruba)
HADOOP-4113. Changes to libhdfs to not exit on its own, rather return
an error code to the caller. (Pete Wyckoff via dhruba)
HADOOP-4054. Remove duplicate lease removal during edit log loading.
(hairong)
HADOOP-4071. FSNameSystem.isReplicationInProgress should add an
underReplicated block to the neededReplication queue using method
"add" not "update". (hairong)
HADOOP-4154. Fix type warnings in WritableUtils. (szetszwo via omalley)
HADOOP-4133. Log files generated by Hive should reside in the
build directory. (Prasad Chakka via dhruba)
HADOOP-4094. Hive now has hive-default.xml and hive-site.xml similar
to core hadoop. (Prasad Chakka via dhruba)
HADOOP-4112. Handles cleanupTask in JobHistory
(Amareshwari Sriramadasu via ddas)
HADOOP-3831. Very slow reading clients sometimes failed while reading.
(rangadi)
HADOOP-4155. Use JobTracker's start time while initializing JobHistory's
JobTracker Unique String. (lohit)
HADOOP-4099. Fix null pointer when using HFTP from an 0.18 server.
(dhruba via omalley)
HADOOP-3570. Includes user specified libjar files in the client side
classpath path. (Sharad Agarwal via ddas)
HADOOP-4129. Changed memory limits of TaskTracker and Tasks to be in
KiloBytes rather than bytes. (Vinod Kumar Vavilapalli via acmurthy)
HADOOP-4139. Optimize Hive multi group-by.
(Namin Jain via dhruba)
HADOOP-3911. Add a check to fsck options to make sure -files is not
the first option to resolve conflicts with GenericOptionsParser
(lohit)
HADOOP-3623. Refactor LeaseManager. (szetszwo)
HADOOP-4125. Handles Reduce cleanup tip on the web ui.
(Amareshwari Sriramadasu via ddas)
HADOOP-4087. Hive Metastore API for php and python clients.
(Prasad Chakka via dhruba)
HADOOP-4197. Update DATA_TRANSFER_VERSION for HADOOP-3981. (szetszwo)
HADOOP-4138. Refactor the Hive SerDe library to better structure
the interfaces to the serializer and de-serializer.
(Zheng Shao via dhruba)
HADOOP-4195. Close compressor before returning to codec pool.
(acmurthy via omalley)
HADOOP-2403. Escapes some special characters before logging to
history files. (Amareshwari Sriramadasu via ddas)
HADOOP-4200. Fix a bug in the test-patch.sh script.
(Ramya R via nigel)
HADOOP-4084. Add explain plan capabilities to Hive Query Language.
(Ashish Thusoo via dhruba)
HADOOP-4121. Preserve cause for exception if the initialization of
HistoryViewer for JobHistory fails. (Amareshwari Sri Ramadasu via
acmurthy)
HADOOP-4213. Fixes NPE in TestLimitTasksPerJobTaskScheduler.
(Sreekanth Ramakrishnan via ddas)
HADOOP-4077. Setting access and modification time for a file
requires write permissions on the file. (dhruba)
HADOOP-3592. Fix a couple of possible file leaks in FileUtil
(Bill de hOra via rangadi)
HADOOP-4120. Hive interactive shell records the time taken by a
query. (Raghotham Murthy via dhruba)
HADOOP-4090. The hive scripts pick up hadoop from HADOOP_HOME
and then the path. (Raghotham Murthy via dhruba)
HADOOP-4242. Remove extra ";" in FSDirectory that blocks compilation
in some IDE's. (szetszwo via omalley)
HADOOP-4249. Fix eclipse path to include the hsqldb.jar. (szetszwo via
omalley)
HADOOP-4247. Move InputSampler into org.apache.hadoop.mapred.lib, so that
examples.jar doesn't depend on tools.jar. (omalley)
HADOOP-4269. Fix the deprecation of LineReader by extending the new class
into the old name and deprecating it. Also update the tests to test the
new class. (cdouglas via omalley)
HADOOP-4280. Fix conversions between seconds in C and milliseconds in
Java for access times for files. (Pete Wyckoff via rangadi)
HADOOP-4254. -setSpaceQuota command does not convert "TB" extenstion to
terabytes properly. Implementation now uses StringUtils for parsing this.
(Raghu Angadi)
HADOOP-4259. Findbugs should run over tools.jar also. (cdouglas via
omalley)
HADOOP-4275. Move public method isJobValidName from JobID to a private
method in JobTracker. (omalley)
HADOOP-4173. fix failures in TestProcfsBasedProcessTree and
TestTaskTrackerMemoryManager tests. ProcfsBasedProcessTree and
memory management in TaskTracker are disabled on Windows.
(Vinod K V via rangadi)
HADOOP-4189. Fixes the history blocksize & intertracker protocol version
issues introduced as part of HADOOP-3245. (Amar Kamat via ddas)
HADOOP-4190. Fixes the backward compatibility issue with Job History.
introduced by HADOOP-3245 and HADOOP-2403. (Amar Kamat via ddas)
HADOOP-4237. Fixes the TestStreamingBadRecords.testNarrowDown testcase.
(Sharad Agarwal via ddas)
HADOOP-4274. Capacity scheduler accidently modifies the underlying
data structures when browing the job lists. (Hemanth Yamijala via omalley)
HADOOP-4309. Fix eclipse-plugin compilation. (cdouglas)
HADOOP-4232. Fix race condition in JVM reuse when multiple slots become
free. (ddas via acmurthy)
HADOOP-4302. Fix a race condition in TestReduceFetch that can yield false
negatvies. (cdouglas)
HADOOP-3942. Update distcp documentation to include features introduced in
HADOOP-3873, HADOOP-3939. (Tsz Wo (Nicholas), SZE via cdouglas)
HADOOP-4319. fuse-dfs dfs_read function returns as many bytes as it is
told to read unlesss end-of-file is reached. (Pete Wyckoff via dhruba)
HADOOP-4246. Ensure we have the correct lower bound on the number of
retries for fetching map-outputs; also fixed the case where the reducer
automatically kills on too many unique map-outputs could not be fetched
for small jobs. (Amareshwari Sri Ramadasu via acmurthy)
HADOOP-4163. Report FSErrors from map output fetch threads instead of
merely logging them. (Sharad Agarwal via cdouglas)
HADOOP-4261. Adds a setup task for jobs. This is required so that we
don't setup jobs that haven't been inited yet (since init could lead
to job failure). Only after the init has successfully happened do we
launch the setupJob task. (Amareshwari Sriramadasu via ddas)
HADOOP-4256. Removes Completed and Failed Job tables from
jobqueue_details.jsp. (Sreekanth Ramakrishnan via ddas)
HADOOP-4267. Occasional exceptions during shutting down HSQLDB is logged
but not rethrown. (enis)
HADOOP-4018. The number of tasks for a single job cannot exceed a
pre-configured maximum value. (dhruba)
HADOOP-4288. Fixes a NPE problem in CapacityScheduler.
(Amar Kamat via ddas)
HADOOP-4014. Create hard links with 'fsutil hardlink' on Windows. (shv)
HADOOP-4393. Merged org.apache.hadoop.fs.permission.AccessControlException
and org.apache.hadoop.security.AccessControlIOException into a single
class hadoop.security.AccessControlException. (omalley via acmurthy)
HADOOP-4287. Fixes an issue to do with maintaining counts of running/pending
maps/reduces. (Sreekanth Ramakrishnan via ddas)
HADOOP-4361. Makes sure that jobs killed from command line are killed
fast (i.e., there is a slot to run the cleanup task soon).
(Amareshwari Sriramadasu via ddas) | https://apache.googlesource.com/hadoop/+/refs/tags/release-0.21.0-rc2/CHANGES.txt | CC-MAIN-2021-25 | refinedweb | 19,895 | 63.05 |
Modular Testing
November 19, 2013
Today’s exercise is my reminder to myself to do a better job of testing. I was porting some prime-number code to Python, one of several preliminary steps to writing a new essay. I am comfortable enough with Python (mostly), and there is nothing particularly tricky about the code, so I wrote the code, tested it quickly, and went on. Later, a different part of the code was failing, and I couldn’t find the problem. Of course, the problem was in the earlier code that I had quickly tested, and therefore hard to spot, since I was looking in the wrong place. The failing code is shown below.
def primes(n):
ps, sieve = [], [True] * (n + 1)
for p in range(2, n + 1):
if sieve[p]:
ps.append(p)
for i in range(p * p, n + 1, p):
sieve[i] = False
return ps
def isPrime(n):
if n % 2 == 0:
return n == 2
d = 3
while d * d <= n:
if n % d == 0:
return False
d = d + 2
return True
def inverse(x, m):
a, b, u = 0, m, 1
while x > 0:
q = b // x
x, a, b, u = b % x, u, x, a - q * u
if b == 1: return a % m
raise ArithmeticError("must be coprime")
def jacobi(a, p):
a = a % p; t = 1
while a != 0:
while a % 2 == 0:
a = a / 2
if p % 8 in [3, 5]:
t = -t
a, p = p, a
if a % 4 == 3 and p % 4 == 3:
t = -t
a = a % p
return t if p == 1 else 0) * pow(a, (t+1)/2, p)) % p
return x, p-x
Your task is to write a proper test suite, discover the bug in the code shown above, and fix it. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below. | http://programmingpraxis.com/2013/11/19/ | CC-MAIN-2014-15 | refinedweb | 325 | 65.69 |
More Signal - Less Noise
I recently began a discussion of the Silverlight Toolkit and on the way towards explaining the AutoCompleteBox I became distracted by creating a list of words to use as our datasource.
I've actually reworked that example slightly, to build the list using a worker thread (both to explore threading and to improve the UI), but I have broken through and actually managed to get to the point, which is adding an AutoCompleteBox to the page. While I was at it, I included (per one of the examples provided) a slider to set the minimum number of characters you must type before the box begins to show you matches.
What we are seeing here is the user typing in letters while the AutoCompleteBox offers choices from our data source (the words retrieved from Swann's Way) that match what has been typed so far. The slider lets us set how many letters must be typed before choices are offered. Increasing the minimum prefix length cuts down on the clutter but offers less help (though it can vastly improve performance if the data is not local).
(Complete source code here)
Let's start with coding the auto-complete box and then circle back to the changes I made to gathering the data.
The first step is to add the Toolkit library to your references
With that we can add the namespace to the top of Page.xaml (the last namespace in the UserControl):
<UserControl x:Class="AutoFill2.Page"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    Width="800" Height="601"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d"
    xmlns:controls="clr-namespace:Microsoft.Windows.Controls;assembly=Microsoft.Windows.Controls">
Finally, we'll add a grid within our outer grid to place our new controls. We want to add a prompt, an auto-complete box, a second prompt for the prefix length, the current value of the prefix length, two text blocks to indicate the range, and the slider itself.

Here's the Xaml:
<Grid x:Name="SearchGrid">
    <!-- NOTE: x:Name, Text, and row/column values below that were
         missing from this listing are illustrative reconstructions. -->
    <Grid.RowDefinitions>
        <RowDefinition Height="0.385*"/>
        <RowDefinition Height="41"/>
        <RowDefinition Height="11"/>
        <RowDefinition Height="15"/>
        <RowDefinition Height="25"/>
        <RowDefinition Height="59"/>
        <RowDefinition Height="*"/>
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="0.442*"/>
        <ColumnDefinition Width="0.558*"/>
    </Grid.ColumnDefinitions>
    <TextBlock x:Name="SearchPrompt" Text="Search:"
        Grid.Row="1" Grid.Column="0"/>
    <controls:AutoCompleteBox x:Name="myAutoComplete"
        Grid.Row="1" Grid.Column="1"/>
    <TextBlock x:Name="PrefixPrompt" Text="Minimum prefix length:"
        Grid.Row="3" Grid.Column="0"/>
    <TextBlock x:Name="negOne" Text="-1"
        HorizontalAlignment="Left" VerticalAlignment="Bottom"
        Grid.Row="4" Grid.Column="1"/>
    <TextBlock x:Name="eight" Text="8"
        Margin="0,0,5,0"
        HorizontalAlignment="Right" VerticalAlignment="Bottom"
        Grid.Row="4" Grid.Column="1"/>
    <TextBlock x:Name="CurrentValue" Text="2"
        HorizontalAlignment="Right" VerticalAlignment="Bottom"
        Margin="5,0,0,0" Width="20"
        Grid.Row="4" Grid.Column="0"/>
    <Slider x:Name="SetPrefixLength" Minimum="-1" Value="2" Maximum="8"
        SmallChange="1" LargeChange="2" Width="160"
        Grid.Row="5" Grid.Column="1"/>
    <Border Height="Auto" x:Name="Boundary"
        HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
        Width="Auto" Margin="0,0,5,0"
        Grid.Row="1" Grid.RowSpan="4" Grid.ColumnSpan="2"
        Canvas.ZIndex="-1"/>
</Grid>
The Xaml declares an inner grid (making it easier to divide up the space for our search box). You can tell by the funky values that this grid was created in Blend.
A couple of quick things to notice here: the trick to placing objects inside the inner grid is to make it the current container. You do that by double-clicking on it. It will be surrounded by a yellow rectangle, both in the Objects and Timeline window and on the artboard. This gives the inner grid the same ability to draw rows and columns from the margins that you had with the outer grid.
Note also that we use a border control, this time not to draw a border around the other controls, but to provide a background color.
<Border Height="Auto" x:Name="Border"
    HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
    Width="Auto" Margin="0,0,5,0"
    Grid.Row="1" Grid.RowSpan="4" Grid.ColumnSpan="2"
    Canvas.ZIndex="-1"/>
The Canvas.ZIndex="-1" ensures that the border will be drawn behind all the other controls.
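If you'd rather do this from code-behind than from Xaml, the same attached property can be set with a quick call. This is a sketch only; "Border" here is the x:Name of the border element:

```csharp
// Equivalent to Canvas.ZIndex="-1" in Xaml: push the border behind its siblings.
Canvas.SetZIndex(Border, -1);

// You can read the value back the same way, e.g. while debugging:
int z = Canvas.GetZIndex(Border);
```

Attached properties like ZIndex always come in this Set/Get static-method pair in code, even though in Xaml they look like ordinary attributes.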
The AutoCompleteBox
Don't let the AutoCompleteBox get lost in all this discussion of setup (Remember the AutoCompleteBox? This is a posting about the AutoCompleteBox [with apologies to Arlo Guthrie])…
<controls:AutoCompleteBox x:Name="myAutoComplete" />
The supporting code for the AutoCompleteBox is in the Page.xaml.cs file. There are two steps here to support our control: assigning its ItemsSource, and hooking up the slider's event handler.
myAutoComplete.ItemsSource = SortedWords;
That is really the key and essence of setting up the AutoCompleteBox. However, we also want to remember to set up the event handler for the slider:
SetPrefixLength.ValueChanged +=
new RoutedPropertyChangedEventHandler<double>( SetPrefixLength_ValueChanged );
void SetPrefixLength_ValueChanged(
object sender, RoutedPropertyChangedEventArgs<double> e )
{
myAutoComplete.MinimumPrefixLength =
(int) Math.Floor(SetPrefixLength.Value);
CurrentValue.Text =
myAutoComplete.MinimumPrefixLength.ToString();
}
To set the value in the AutoCompleteBox (an integer) we must cast the double we retrieve from the slider – making sure to truncate, not round. One way to do so is to use the Math.Floor function, which returns, as a double, the largest integer value that is less than or equal to the argument. Eh? An example helps: if SetPrefixLength.Value is equal to 7.3, 7.9 or 8.1 the value returned will be 7.0, 7.0 or 8.0 respectively.
We then cast that Floored double to an integer and assign it to the MinimumPrefixLength property of the AutoCompleteBox. The possible values in this example are –1 through 8. Note that a value of –1 turns off Autocompletion. Interestingly, the values of 0 and 1 have the same effect.
As promised, I reworked the code to obtain the data so that it is a bit better factored, and more importantly, the bulk of the work is done in a background thread, making for a more responsive UI. I won't review what I covered in the previous article, but I will show the changes.
The key is to initialize a private member variable of type BackgroundWorker, and to set the WorkerReportsProgress property in the constructor. You'll also need event handlers for DoWork, ProgressChanged, and RunWorkerCompleted.
(NB: you can also choose to handle cancellation)
private BackgroundWorker worker = new BackgroundWorker();
//...
public Page()
{
    InitializeComponent();
    worker.WorkerReportsProgress = true;
    worker.DoWork += new DoWorkEventHandler( worker_DoWork );
    worker.ProgressChanged +=
        new ProgressChangedEventHandler( worker_ProgressChanged );
    worker.RunWorkerCompleted +=
        new RunWorkerCompletedEventHandler( worker_RunWorkerCompleted );
}
We're going to kick off the thread when we determine that the user has chosen a file to open. We'll make sure the thread isn't already running and then call RunWorkerAsync (which will fire the DoWork event) and we'll pass in the FileInfo object for the thread's edification.
void DataButton_Click( object sender, RoutedEventArgs e )
{
    OpenFileDialog openFileDialog1 = new OpenFileDialog();
    openFileDialog1.Filter = "Text Files (.txt)|*.txt|All Files (*.*)|*.*";
    openFileDialog1.FilterIndex = 1;
    openFileDialog1.Multiselect = false;

    bool? userClickedOK = openFileDialog1.ShowDialog();
    if ( userClickedOK == true )
    {
        // *** NEW ***
        if ( worker.IsBusy != true )
            worker.RunWorkerAsync( openFileDialog1.File );
    }
}
This method is identical to the button click handler in the previous article, except that once the user identifies the file, we hand the FileInfo object to the worker thread and our job is done! We can now go eat lunch.
The DoWork method is called through the event delegate, passing in the sender (which you can safely cast to the BackgroundWorker) and a DoWorkEventArgs which contains, among other things, an Argument property, which in this case contains the FileInfo we passed in when we started the thread.
1: void worker_DoWork( object sender, DoWorkEventArgs e )
2: {
3: const long MAXBYTES = 200000;
4: BackgroundWorker workerRef = sender as BackgroundWorker;
5:
6: if ( workerRef != null )
7: {
8: System.IO.FileInfo file = e.Argument as System.IO.FileInfo;
9:
10: if ( file != null )
11: {
12: System.IO.Stream fileStream = file.OpenRead();
13: using ( System.IO.StreamReader reader =
14: new System.IO.StreamReader( fileStream ) )
15: {
16: string temp = string.Empty;
17: try
18: {
19: do
20: {
21: temp = reader.ReadLine();
22: sb.Append( temp );
23: } while ( temp != null && sb.Length < MAXBYTES );
24: }
25: catch {}
26: } // end using
27: fileStream.Close();
28: string pattern = "\\b";
29: string[] allWords =
30: System.Text.RegularExpressions.Regex.Split(
31: sb.ToString(), pattern );
32:
33: long total = allWords.Length / 100;
34: long soFar = 0;
35: int newPctg = 0;
36: int pctg = 0;
37:
38: foreach ( string word in allWords )
39: {
40: newPctg = (int) ( (++soFar) / total );
41: if ( newPctg != pctg )
42: {
43: pctg = newPctg;
44: workerRef.ReportProgress( pctg );
45: }
46:
47: if ( words.Contains( word ) == false )
48: {
49: if ( word.Length > 0 && !IsJunk( word ) )
50: {
51: words.Add( word );
52: } // end if not junk
53: } // end if unique
54: } // end for each word in all words
55: } // end if file is not null
56: } // end if workerRef is not null
57: } // end method DoWork
The method begins by casting the sender argument to the BackgroundWorker and making sure that the cast was successful (not null). It then casts e.Argument to the FileInfo object (as described above) and again makes sure the cast succeeded.
The next 20 lines are right out of the previous example; however, starting on line 33 we begin to compute how far we've come in our work.
A true reading of our progress would take into account three stages: reading the file, processing the words, and assigning the result to the AutoCompleteBox.
Since the first is very fast, and the third is instantaneous, and since this blog entry has gone on long enough, we'll constrain ourselves to reporting progress on the second. We know how many words we have, and we know how many words we've processed as we iterate through the foreach loop, so it is a simple matter to see when we've increased by a percentage. Each time we do, we call ReportProgress, passing in the new percentage figure.
foreach ( string word in allWords )
{
    newPctg = (int) ( (++soFar) / total );
    if ( newPctg != pctg )
    {
        pctg = newPctg;
        workerRef.ReportProgress( pctg );
    }
    // ... (uniqueness check as shown above)
}
This fires an event that is caught in our UI thread:
void worker_ProgressChanged( object sender, ProgressChangedEventArgs e )
{
    Message.Text = ( e.ProgressPercentage.ToString() + "%" );
}
The fact that this is caught in the main (UI) thread while the work is happening in the background worker thread means that the UI is free to update.
When the thread completes, it automagically calls the RunWorkerCompleted handler, giving you a chance to clean up and to do any other work that can only be done once the thread finishes (in our case, setting the ItemsSource for the AutoCompleteBox).
void worker_RunWorkerCompleted( object sender, RunWorkerCompletedEventArgs e )
{
    Message.Text = words.Count + " unique words added. ";
    Display();
    myAutoComplete.ItemsSource = SortedWords;
}
More on the AutoCompleteBox when I cover DataGrids and more on the Silverlight Toolkit very soon.
Thanks.
(If you like this article, you may want to consider subscribing. And it makes my boss happy).
=============
Special note: the animated gifs are an experiment. I think they convey a lot of information but they also make the page a lot bigger. If that is burdensome, let me know in the comments Thanks!
Very good article. I think the animation adds life to the article.
Thanks,
Rachida
Great post, Jesse. I want to bring up the suggestions list even if the AutoCompleteBox is empty. Any suggestions on how to implement this?
Syed Mehroz Alam
Jesse, you asked if this new format of screen shots (with animation) is helpful. Many people think a form with moving parts (animation) is just to jazz things up, but don't realize the value it can bring if used at the right time. For example in this case where you show how the Auto complete is working or how the counter works and so on, is much more intuitive than if it was "Static" image. I know it's much harder on you, but cases like this, can make a big difference.
I was looking at some Yoga information and I see One image before she started the move and one image when the move was completed. And then she was trying to "Describe" what's happens between. Well, can you guess how many different [wrong ways] you can do between and not even knowing it. So, animation can deliver very essential data that changes based on other factors like time or speed or heat and etc.
Great article Jesse!!!
I'll have to thank Tim who suggested the idea of using our screen capture software to make these gifs; I agree with you that used judiciously they can be more than just bling.
Jesse: Do you know of any tutorials demonstrating using the autocompletebox calling a web service?
I have problem setting the Background property of the AutoCompleteBox, anyone else with the same problem or a solution?
Regards,
Håkan
I want to do similar background loading of files selected in the OpenFileDialog.
But it does not work; I get a security error from file.OpenRead():
'((System.IO.FileStream)(fileStream)).Name' threw an exception of type 'System.Security.SecurityException'
It looks like the BackgroundWorker has no access to the selected files.
Am I missing something?
I'm sorry, my mistake. I used BitmapImage in the worker; this caused the SecurityException.
Jesse,
What is the safest method for implementing threads in Silverlight? I tried using BackgroundWorker, which updates/refreshes my datagrid at regular intervals from the database. The problem I faced was that the number of threads increases continuously and the performance of the application gets slower. Any other approach I can use?
Mohammad Sadiq
Using maps in a Microsoft BizTalk orchestration is pretty straightforward – create your source and destination schemas, map from one to the other, drop a Transform shape onto your orchestration, and configure the source and destination messages. But what if you need to apply different maps based on some property inside the message, or even who the message is coming from?
In our case, we process X12 837 healthcare claim files. In the X12 schemas, the same piece of information can be stored in different locations, and we are unable to enforce a standard, we have to take whatever our clients give us. So, the orchestration has to figure out which map to use to transform the 837 data to our internal schema. We do this using Dynamic Maps.
In this post I'll demonstrate how to take an input file containing a customer name and their home and work addresses, and transform it using one of two different maps – one that uses the home address and one that uses the work address. Yes, this probably isn't terribly realistic, but it's fine for a demo!
To begin, create an input schema to represent customer data, like this:
Promote the Map element. This is how we’ll tell the orchestration which map to use. You could also base the decision on a context property (maybe something to do with the party agreement), a part of the filename, etc.
Next, create an output schema like this:
Then create two maps, one using the home address:
…and one using the work address:
Now for the orchestration. Drag a mess of shapes out until you have something that looks like this:
Create two messages, CustomerData and OutputData, using the two schemas created earlier. Create two variables – mapName (System.String) and mapType (System.Type). Now we’ll start configuring all those expression shapes.
In the GetMapName expression, get the name of the map we’re going to use from the Map promoted property:
mapName = CustomerData.Map;
In Rule_1 of the Decide shape, look at mapName:
mapName == "Home"
In the SetMap expression in the left branch, add:
mapType = System.Type.GetType("DynamicMappingBlogPost1.HomeAddressMap, DynamicMappingBlogPost1, Version=1.0.0.0, Culture=neutral, PublicKeyToken=abcdefghijklmnop");
You’ll have to substitute the namespace and map name you used, and you’ll replace the PublicKeyToken after deploying the solution the first time.
The statement in the SetMap expression in the right branch will be almost identical, except for the name of the other map.
In the Message Assignment shape in the left branch, add the following code to do the transformation:
transform (OutputData) = mapType(CustomerData);
You’ll notice that in the right branch, I used an Expression shape rather than a Message Assignment. You can do this either way, but if you use the Expression shape you have to wrap the transform statement inside a construct statement:
construct OutputData { transform (OutputData) = mapType(CustomerData); }
Deploy the project, and in the Administration Console, right-click one of the maps and choose Properties. On the Assembly line, copy the PublicKeyToken, and paste it into each of the SetMap expressions where it currently has “abcdefghijklmnop”. Deploy the project one more time.
Configure the send and receive ports, and test with a sample instance of the input schema.
Change the Map node from “Work” to “Home” and verify that a different address is mapped each time.
You can download a complete version of this project at: Dynamic Mapping Example.
Technorati Tags: BizTalk
Print | posted @ Monday, December 30, 2013 8:00 PM
© Bill Osuch
How to get rid of the default namespace .NET adds to SOAP message ?
Discussion in 'ASP .Net Web Services' started by kaush.
Hello,
I wrote a very simple InputFormat and RecordReader to send binary data to mappers. Binary
data can contain anything (including \n, \t, \r), here is what next() may actually send:
public class MyRecordReader implements
RecordReader<BytesWritable, BytesWritable> {
...
public boolean next(BytesWritable key, BytesWritable ignore)
throws IOException {
...
byte[] result = new byte[8];
for (int i = 0; i < result.length; ++i)
result[i] = (byte)(i+1);
result[3] = (byte)'\n';
result[4] = (byte)'\n';
key.set(result, 0, result.length);
return true;
}
}
As you can see, I am using BytesWritable to send eight bytes: 01 02 03 0a 0a 06 07 08. I also use Hadoop-1722 typed bytes (by setting -D stream.map.input=typedbytes).
According to the documentation of typed bytes the mapper should receive the following byte
sequence:
00 00 00 08 01 02 03 0a 0a 06 07 08
However bytes are somehow modified and I get the following sequence instead:
00 00 00 08 01 02 03 09 0a 09 0a 06 07 08
0a = '\n'
09 = '\t'
It seems that Hadoop (streaming?) parsed the newline character as a separator and inserted '\t', which is the key/value separator for streaming, I assume.
Is there any workaround to send *exactly* the same byte sequence no matter what characters are in the sequence? Thanks in advance.
Best regards,
Youssef Hatem | http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-user/201310.mbox/%3C9C5671DA-FB7C-441B-B210-34FF8A614DFF@rwth-aachen.de%3E | CC-MAIN-2017-47 | refinedweb | 225 | 62.48 |
Functions are used in various places in deal.II, for example to describe boundary conditions, coefficients in equations, forcing terms, or exact solutions. Since closed form expressions for equations are often hard to pass along as function arguments, deal.II uses the Function base class to describe these objects. Essentially, the interface of this base class requires derived classes to implement the ability to return the value of a function at one or a list of particular locations, and possibly (if needed) of gradients or second derivatives of the function. With this, function objects can then be used by algorithms like VectorTools::interpolate, VectorTools::project_boundary_values, and other functions.
Some functions are needed again and again, and are therefore already provided in deal.II. This includes a function with a constant value; a function that is zero everywhere, or a vector-valued function for which only one vector component has a particular value and all other components are zero. Some more specialized functions are also defined in the Functions namespace.
For time dependent computations, boundary conditions and/or right hand side functions may also change with time. Since at a given time step one is usually only interested in the spatial dependence of a function, it would be awkward if one had to pass a value for the time variable to all methods that use function objects. For example, the VectorTools::interpolate_boundary_values function would have to take a time argument which it can use when it wants to query the value of the boundary function at a given time step. However, it would also have to do so if we are considering a stationary problem, for which there is nothing like a time variable.
To circumvent this problem, function objects are always considered spatial functions only. However, the Function class is derived from the FunctionTime base class that stores a value for a time variable, if so necessary. This way, one can define a function object that acts as a spatial function but can do so internally by referencing a particular time. In above example, one would set the time of the function object to the present time step before handing it off to the VectorTools::interpolate_boundary_values method.
The Function class is the most frequently used, but sometimes one needs a function whose values are tensors, rather than scalars. The TensorFunction template can do this for you. Apart from the return type, the interface is mostly the same as that of the Function class.
Lots of great points here. It really comes down to what problem you are solving, and in order to be able to identify when to use what, you'd need experience. Here's something that I came across not too long ago at work that I solved using a procedural approach.
The problem that I was solving was related to caching:
1. Get value from cache
2. If not available in cache, get it from database
3. Then store the value from the database to the cache
In Ruby, you can pass an anonymous function to another function. The function that receives this anonymous function may decide (or not) to invoke it. This is what the function roughly looks like:
def get_from_cache(key, &block)
  value = cache.get(key)
  return value if value

  value = block.call # call the anon function
  cache.set(key, value)
  return value
end
The function is simple enough. Attempt to get the value from the cache; if found, return it immediately. If the cache doesn't have the key, invoke the block, and use that block's return value as the value for the key.
We can 'chain' this up, creating more complex logic structures. Examples:
# Get value from database and cache it
value = get_from_cache("foo") do
  get_from_database("foo")
end

# Get value from HTTP and cache it
value = get_from_cache("foo") do
  http("")
end

# Or multiple caching layers
value = get_from_cache("foo") do
  get_from_another_cache("foo") do
    get_from_database("foo") do
      http("")
    end
  end
end
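A self-contained version of the same idea; note that the Hash-backed CACHE and the fetch_from_database lambda are stand-ins invented for this sketch (the original post's cache and database objects are not shown):

```ruby
# A plain Hash stands in for the real cache backend.
CACHE = {}

def get_from_cache(key, &block)
  value = CACHE[key]
  return value if value

  value = block.call       # cache miss: run the caller's block
  CACHE[key] = value       # remember the result for next time
  value
end

db_calls = 0
fetch_from_database = lambda do
  db_calls += 1            # count how often the "database" is hit
  "row-for-foo"
end

first  = get_from_cache("foo") { fetch_from_database.call }
second = get_from_cache("foo") { fetch_from_database.call }

puts first     # row-for-foo
puts second    # row-for-foo (served from the cache)
puts db_calls  # 1 (the block ran only once)
```

The second lookup never invokes the block, which is exactly the behavior that makes the chained examples above cheap after the first call.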
Download - Emirates Diving Association
DIVERS
Inspiring People to Care About our Oceans Since 1995
FOR THE ENVIRONMENT | MAGAZINE | JUNE 2012 | VOLUME 9 | ISSUE 2
CONTENTS
REGULARS
5 EDA DIRECTOR’S NOTE
26 FEATURE CREATURE
Acropora Downingi
83 upcoming events
EDA Event Schedule Updates
83 INTERESTING LINKS AND RESOURCES
NEWS
6 DMEX 2012 AT A GLANCE
7 EDA ENVIRONMENTAL WORKSHOP
8 MYTH-BUSTING PADI
Advanced Open Water Diver Course
8 PROJECT AWARE RALLIES
Support For Sharks At DMEX
9 THE NEW AWARE A YEAR ON
10 DISABLED DIVERS INSTRUCTORS COURSE
11 SHARKWATCH ARABIA DATABASE UPDATE:
The Sharks Are Back
12 My Dive
In The Dubai Aquarium and Underwater Zoo
12 LEARNING THE ROPES OF THE IDC
Al Boom Profiles New Instructors Joining the UAE’s
Diving Community
13 A NEW PROFESSION
13 WHAT MADE YOU CHOOSE SCUBA
DIVING FOR A CAREER
14 I COULDN’T RECOMMEND IT MORE
14 FISH SPOTTING
15 A SUCCESSFUL PADI IE
15 A WEEK WITHOUT WALLS
16 FAMILY FUN DAY
16 RESCUE REFRESHER
16 SHARK AWARENESS
16 GAP YEAR STUDENTS
17 NORTH OR SOUTH
19 CALLING ALL RESCUE DIVERS
19 BREAKING NEW GROUND
20 A PERSONAL APPROACH TO DIVING
With Easy Divers Emirates Center
21 NEW WEBSITE, NEWSLETTER AND AN
EXPEDITION
22 DIVERS TAKE THE PLUNGE
To Cleanup The Capital Ports
23 PADI SWIM SCHOOL ARRIVES IN THE UAE
23 KINDERGARTENS PLEDGE TO BE
FRIENDS OF THE SEA
24 BEACH AND UNDERWATER CLEANUP
A Collaborative Effort
25 FAILSAFE DIVING
With The New Poseidon Tech Rebreather
CORAL NEWS
28 CAN NOISY REEFS ATTRACT MORE FISH
AND CRUSTACEANS
29 IS THERE A FUTURE FOR CORAL REEFS IN
ACID OCEANS
30 RED SEA BUTTERFLYFISH RESPONDS TO
CHANGING CORAL COVER
REEF CHECK
31 3RD PUNTA SAYULITA SURF CONTEST
A Winner For Reef Check
31 HAITI ECO DIVERS LEARN TO DIVE
32 NEW TRAINERS CERTIFIED IN THE
BAHAMAS
33 REEF CHECK PARTICIPANTS
In Boston International Seafood Show
FOR THE ENVIRONMENT | MAGAZINE | JUNE 2012 | VOLUME 9 | ISSUE 2
September 2012. Send all articles, feedback or comments
to: magazine@emiratesdiving.com
EDA COVER
Photo by SIMONE CAPRODOSSI
Please recycle this magazine after you have read
JUNE 2012, DIVERS FOR THE ENVIRONMENT
3
33 kids get wet to learn
About Coral Reefs in Indonesia
34 Reef CHECK PARTNERS WITH ONE WORLD
ONE OCEAN CAMPAIGN
34 RECENT STUDY IN DR SHOWS BENEFITS
OF MPA MANAGEMENT IN LA CALETA
35 SCIGIRLS EPISODE WINS AN EMMY AWARD
35 REEF CHECK SPOTLIGHT:
Shark Conservation in Bahamas
36 REEF CHECK SPOTLIGHT:
Why Is Diver Monitoring So Important to Manage Reef
Fisheries
36 VOLCANO POSES UNIQUE THREAT
To Montserrat’s Coral Reefs
37 WORKING FOR BETTER REEFS AND A
BETTER FUTURE
In Amed, North Bali, Indonesia
FEATURES
38 NATURE WILL FIND A WAY
With BBC Oceans Cameraman and Photographer,
Hugh Miller
40 THE SHARK WHISPERER
42 TAKING A SECOND LOOK
Is There A Full-Face Mask In Your Future
44 WHO SAYS TECKIES HAVE TO WEAR
BLACK
45 THE PROBLEM OF PLASTIC
48 INTRODUCING THE MANTA TRUST
UW PHOTOGRAPHY
51 DIGITAL ONLINE 2012 RESULTS
The UAE’s Only Underwater Photography and Film
Competition
66 MACRO AND SUPER MACRO
PHOTOGRAPHY
DIVING DESTINATIONS
68 DIVING IN CYPRUS
A Taste Of The Mediterranean
74 THE SMALL ISLAND OF CYPRUS
And What Lies Beneath Its Surface
77 PHUKET – THAILAND
78 WHYTECLIFF MARINE PARK:
Canada’s First
HEALTH
80 DAN – DEEP THOUGHTS
The Make-Up of Nitrogen Narcosis
82 PREVENTION OF MALARIA FOR SCUBA
DIVERS
EDA DIRECTOR’S NOTE
THE ART OF DIVING
Ibrahim N. Al-Zu’bi
EDA Executive Director
I would like to welcome you all to the June
issue of “Divers for the Environment”. Half of
2012 has already gone and we have been really
busy in EDA. March 2012 saw the Dive Middle
East Exhibition (DMEX), the region’s only dive
show, cover 365sqm with 26 exhibitors which
is the biggest DMEX ever since the launch in
2007. A visitor survey conducted during the
show found that 83% of visitors rated it
good/excellent. We also had
the pleasure this year to host PADI Project
AWARE who rallied support for Sharks at
EDA’s booth in DMEX.
Again, I find myself lucky that I was not a
member of the jury panel for our annual
Digital Online Underwater Photography
Competition. As a matter of fact, I felt sorry for
the judges. This year’s was one of the toughest
to score with lots of underwater photography
gurus participating and sending EDA amazing
photos of the varied marine life from all the
places our members have dived. If I were
to describe in one word the 49 entries we
received this year, it would simply be ‘Fascinating’.
The Digital Online Award Ceremony at
DUCTAC in Mall of the Emirates made a clear
point that taking underwater photos is an ART.
A photograph always has a story behind it. I
want to congratulate all the participants for
enriching EDA’s photo library with amazing
photos – I am sure you will all agree with me
when you see the photos in this issue. I also
want to congratulate Mr. Warren Baverstock
for being the overall winner of the 2012
competition for the Professional Category,
Mr. Jonathan Clayton for winning the Amateur
category and Mr. Khaled Sultani for winning the
Video Category. Also many thanks to the jury,
the sponsors, the EDA team and EDA’s Events
Coordinator, Ally Landes for another successful
EDA event towards promoting diving not
only in the UAE but in the whole region.
You will also find in this issue exclusive news
and special offers to our members from
our dive centers and clubs in the UAE. The
diving industry is in for a busy 2012! We are
also glad to see that our members and dive
centers are leading environmental campaigns
in the UAE. Al Mahara Dive center and EDA
members joined efforts to clean up the Capital
Ports. We are also glad to see that dive centers
are sharing reviews on new equipment with
our members.
As you all know, EDA is an official Training Reef
Check Facility in the UAE; we have allocated
in this issue a lot of space for our Reef Check
News! With input given by Reef Check, and
with EDA being one of the main Reef Check
partners, we hope you will enjoy the updates
and research about the condition of the coral
reefs in our seas!
We know about horse and dog whisperers,
but in this issue we have a special feature about
the one and only Cristina Zenato, whose
fascinating video with sharks I am sure most
of you have seen. As Chantal Boccaccia, who wrote
the feature described her, “Cristina Zenato
is an enigma; a quiet symphony of fire and
passion wrapped in a little girl’s body.” She is
simply “The Shark Whisperer”.
I also want to take this opportunity to
thank our EDA members who continuously
share their insightful diving experiences and
underwater pictures with us. Your insights
and articles are imperative in recommending
when and where to go diving as well as what
to look out for on your trip. You will read in
the diving destinations in this issue, tips about
diving in Phuket – Thailand and Cyprus. It is
also so good to receive some diving stories
from Canada from our long time member and
friend Mr. Mark Anthony Viloria.
We hope your passion and enthusiasm
continues and you send us news about your
next diving adventures, and we look forward
to seeing your next batch of waterworld snaps!
I do hope you enjoy reading this issue of
“Divers for the Environment”. We have a busy
year full of activities and events waiting for you.
The EDA team is working tirelessly to have
another successful year and we’re looking
forward to seeing you all in all EDA events.
Happy reading and safe Eco Diving!
NEWS
DMEX 13 – 17 March 2012 at a glance
Maintaining its unique position as the only
international diving event in the Middle East, the
6th edition of the Dive Middle East Exhibition
(DMEX) catered to both the professional
diver and new enthusiasts offering a unique
platform to showcase the latest in diving
equipment, supplies, services and techniques,
complemented by live diving demonstrations.
The show hosted a series of presentations on
the latest dive gear, training programmes and
projects taking place around the region and
further afield in international waters.
2012 saw DMEX cover 365sqm with 26
exhibitors which is the biggest DMEX ever
since the launch in 2007!
A visitor survey conducted during the show
found that 83% of visitors rated the show
good/excellent.
If you are interested in exhibiting in DMEX
2013, March 5-9, please contact Barbara on:
BARBARA Herve
Exhibitions and Events Management
Dubai World Trade Centre
Tel: + 971.4.308645
Mob: + 971.551485247
Web:
NEWS
EDA ENVIRONMENTAL WORKSHOP
The workshop that EDA offers to companies is based on the short
film, ‘The Story of Stuff’, which has been watched online over 10 million
times since its premier in 2007
EDA conducted an environmental workshop for 20 members of Majid Al Futtaim staff
on the 29th of March 2012.
This workshop is an all hands on deck situation. Everybody’s best efforts
are needed. While organisation and governmental efforts matter, it is the
people that have the highest potential to make the biggest impact. They
are likely to be more open to changing behaviour patterns, and their
energy, creativity, and optimism can be unstoppable. Supporting people
in making changes early is one of the most effective and gratifying places
to focus our efforts.
EDA, along with ‘The Story of Stuff Project’, have developed a six
session workshop; each with its unique activity that seeks to ignite the
participants’ passion for life, help them understand the fundamental
problems facing humankind and the planet, raise awareness of the
changes needed, and empower them to enact and take action in their
own lives. This one day workshop is engaging, informative, and very
interactive. We hope to support them in developing environmentally
sustainable patterns of consumption that honour Earth and deepen
their spiritual lives.
This workshop is flexible and can be modified to suit the client’s needs.
EDA is currently developing several other environmental workshops
that cover several other subjects. We will be sending updates about
this soon.
If your company or organisation is interested in these workshops, please contact:
Reema Al Abbas
Tel: +971 4 393 9390
NEWS
Myth-busting the PADI
Advanced Open WATER Diver Course
If you’re a relatively new
diver you’re probably
thinking about where to
take your diving next,
and how to go about
developing your new skills.
As part of your open
water diver training, you
also learned the real truth
about common scuba
diving myths such as:
• You have to be an
Olympic-class swimmer
to dive
• Diving is only for
people who live or have
holidays in the tropics
Today we’re going to bust some myths about
the PADI Advanced Open Water Diver
program which is the next step in taking your
skills to the next level, and developing your
confidence.
Myth #2 The advanced open water diver
course is more challenging than the entry-level
program.
The PADI Open Water Diver course covers
a lot of material and can be intense. Your
instructor brought you from being a non-diver
to someone who can dive together with
a buddy. Now that you’re familiar with the
basics of diving, it’s time to start exploring and
developing your confidence and skills – and
that’s what the PADI Advanced Open Water
Diver course is all about.
The Advanced Open Water Diver program
is basically five Adventure Dives – think of
it as a way to sample different types
of diving. You can choose from more than
twenty adventure dives including: wreck diving,
underwater photography, enriched air nitrox,
night diving, underwater naturalist, boat, deep,
dry suit diving and many more.
In the Advanced Open Water Diver program,
classroom time is kept to a minimum. There’s
even an online option via PADI eLearning
where you can access the course whenever
it suits you. Either way, the main goal of the
program is to go diving. There aren’t any tests
and you can complete the program in as little
as one weekend, or take it one dive at a time.
Talk to your instructor
about upcoming adventure
dives in your area.
Myth #3: I learned how to dive in my open
water class.
Yes and no. The open water
program teaches you the
basics and how to dive
safely. While many people
are “naturals,” perfect
buoyancy and underwater
navigation aren’t easy for
everyone. In the PADI
Advanced Open Water
Diver course, you can fine-tune these skills with tips
and suggestions from your instructor. You can
also learn techniques that will help you get shots
you’ll be proud to share.
For more information contact your local PADI
Dive Centre or Resort. A list of your nearest
PADI Dive Centres can be found at padi.com.
Deep Dive
Night Dive
Project AWARE Rallies
Support for Sharks at DMEX
Feature: Jennifer Constant, Regional Coordinator, Project AWARE Foundation
Project AWARE Foundation were honoured
to work alongside Emirates Diving Association
during the Dive Middle East Show in March 2012.
We were delighted to meet our supporters
and 100% AWARE partners exhibiting at the
dive show: Al Mahara, Atlantis, Al Boom and
Pavilion, to name just a few, who worked hard to
help us spread the word that our ocean needs
protection and that we, as divers, are in a very
powerful position to directly and positively affect
real, long-term change especially in regards to
collecting marine debris data and supporting the
protection of endangered sharks.
DMEX offered the perfect opportunity for
us to interact with residents of Dubai as well
as tourists, talking to them about sharks and
encouraging them all to add their names to
our Give Sharks a Fighting Chance petition.
We secured a massive 850 signatures, which
helped hit the 100,000 landmark in April.
During the coming months we’ll take your voice
to leaders and decision makers as we target the
global power of the Convention on International
Trade in Endangered Species (CITES) to
protect threatened sharks. CITES is the largest,
most effective wildlife conservation agreement
in existence. With 175 member countries
CITES provides an international framework for
monitoring and controlling trade in species at
risk and penalizing violations. Your voice and the
success of the Give Sharks a Fighting Chance
petition allow Project AWARE’s global teams to
make profound arguments for change – including
diving-based economic benefits of living sharks
and eco-tourism.
In addition to rallying support for sharks, we
raised critical funds towards our Shark in Peril
and Marine Debris campaigns. We are grateful
for the generosity of the people who attended
DMEX 2012 and bought our badges and
necklaces collecting more than AED 4,150.
The overwhelming support and generous
contributions will go a long way in helping
us secure protection for the most vulnerable
shark species and protect our oceans from
harmful debris.
A big THANK YOU goes to EDA, all our
supporters in the Middle East and petition
signatories! We look forward to continuing
our work to clean up the oceans and save vital
yet endangered shark species from extinction!
NEWS
The New AWARE a Year On
Feature: Domino Albert, Project AWARE PR & Communications Coordinator
You could say Project AWARE is a year old this
June. Even though Project AWARE Foundation
has been around for many years – since 1992
as a registered non-profit – one year ago, on
World Oceans Day, 8th June 2011, Project
AWARE refocused, relaunched and renewed
its commitment to addressing the ocean
challenges ahead. Here’s a round up of what
Project AWARE and its dedicated volunteers
have been up to in the last year. Today we are:
1. 100,000 Shark Petition Signatures Stronger
In the last year, Project AWARE continued
to call on the diving community to express
outrage at the devastating results of the last
CITES meetings where 8 vulnerable shark
species were denied trade protections.
Project AWARE has mobilized more than
100,000 people who added their
names to the shark petition calling on
governments to protect sharks from
overexploitation – overfishing, finning
and bycatch. We’re setting our sights
on CITES 2013 in Thailand with plans
to secure listing for some of the shark
species most deserving of CITES
protections.
2. Ready to Tackle and Show the
Underwater Perspective of Marine Issues
It may seem like we’ve been talking
debris for decades but last year
we created “Dive Against Debris”,
a unique programme aimed at
collecting underwater debris data.
Something desperately needed but
that no other organisation in the
world is currently doing! Scuba divers
are uniquely positioned to tackle the
global marine debris issue, to take
action every day and prevent debris
from entering the ocean as well as
remove it once there. Divers in all
corners of the globe have embraced
the new programme and the data
from their day to day marine debris
actions is helping provide information
not only to AWARE leaders who are
trying to find ways to improve local
debris management and prevention
but to world leaders to tackle the
issue on a global scale.
3. Connecting the Dots
Part of our relaunch was forming
and strengthening partnerships
and alliances with experts in shark
conservation and marine debris fields
as well as targeting the countries and
policies that matter most. In these
new and ongoing partnerships we
work on solutions both close to
home and globally. Our policy work
is propelling the change we need for the
ocean. It’s a giant, intricate, complicated and
slow process but thanks to your support and
generosity we are making giant steps in closing
loopholes in shark finning regulations, keeping
marine debris issues at the top of ocean policy
agendas, and keeping the pressure on CITES
representatives in the run up to CITES 2013.
4. Building a Strong Movement of Passionate
Activists
In the last year, divers pulled off some of the
most inspiring, inventive actions yet. There were
motorcycle marathons, shark demonstrations,
people shaving their heads, all in the name of
ocean protection. Everyday divers from all
corners of the world are joining My Ocean,
the Project AWARE online community
network, to share their actions, inspire and
mobilize other divers to get involved. Project
AWARE has become the largest, most diverse
movement of divers on earth – 700,000 strong
and growing – who are sharing the same vision
for a healthy and abundant ocean planet.
Thanks to your support we are showing a
united front and pushing forward effective
policy measures that will ensure the survival
and health of the ocean and its inhabitants.
We are taking the momentum of the AWARE
movement and your actions to turn them
into large-scale change. Join us in celebrating
our one year anniversary, our shared passion
for the protection of the ocean planet and
the many conservation successes ahead at
projectaware.org.
NEWS
Disabled Divers Instructors Course
Feature: Claire Donnelly
NEWS
Sharkwatch Arabia Database Update:
The sharks are back!
Feature: David P. Robinson, Jonathan Ali Khan & Warren Baverstock
This image shows how to correctly determine the gender of a shark by confirming the presence of
claspers in males and absence of claspers in females.
In mid April I had the privilege to attend a
disabled divers instructors course arranged
through Disabled Divers International, a
training course to become a diving instructor
for young adults and above with disabilities.
And what a privilege it was!
Myself and five other PADI dive instructors/
dive masters took part in a two day course to
learn more about specific disabilities and how
we can adapt our scuba training to include
people with these disabilities. An experience
that we take for granted each time we put on
our gear and giant stride into the ocean.
Our tutor was Fraser Bathgate, an amazing
man who took time to teach us so much.
Fraser was the first paraplegic diving
professional within PADI. He started diving
after his accident at the age of 23, and now he
is a course director and PADI advisor. Fraser
opened our eyes to so much that we had not
considered as able bodied dive professionals.
For example, did you know that diving is the
only adventure sport where, if you are in a
wheelchair, you can still buy the equipment off
the peg? A major selling point and a dramatic
cost reduction compared to other sports.
So what new skills did we learn?
• How to put a wetsuit on a diver in 90
seconds flat. Extremely hard as I couldn’t
put my own suit on in that time!
• One of the biggest problems with divers
with disabilities is holding them back, as
they discover a world underwater where
freedom of movement is realised.
• When in the water with a disabled student,
engage with them at all times and not the
people around them, or they will close
down quickly.
• Don’t push or pull the student, this type
of behavior completely closes a diver with
disabilities down.
• Our communication skills had to jump
to a whole new level, as did our thinking
through solutions to problems.
• The rate of conversion from “try dive”
through to completing the course is 98%!
The missing 2% is due to the student not
being passed as fit in their medical.
• The only condition that rules out diving is
epilepsy.
Amazing eh!
The two-day training covered a day of
classroom learning where we learnt more
about the specific needs of different disabilities,
the challenges that each group faces daily and a
new set of diving standards (PADI approved)
that we need to follow.
We were taught the specific needs of a group
of disabilities – amputees, those with cerebral
palsy, muscular dystrophy, Down’s syndrome,
sight impairment and spinal injuries.
The second day was spent in the pool
practicing lifting techniques to help get the
diver out of the water protecting them from
possible incidents. The lifting skills we learned
are based on technique rather than pure
strength as demonstrated by Fraser, who
without any power in his legs demonstrated
the three lifting skills from the water with just
the use of his arms. We also experienced a
dive blind (with the aid of a blacked out
mask), from equipment assembly through to
disassembly after completing a pool dive – an
experience all of us will remember. Finally we
practiced our newly acquired skills with the
help of a young Jumeirah guest who had hurt
his knee on holiday and was in a wheelchair.
We helped him take his first scuba experience
(and definitely not his last, judging by the smile
on his face; his mask kept leaking due to a
constant ear-to-ear grin!), and helped him lap
the pool 4 times blowing bubbles. Just an
amazing training course.
The training took place with the Pavilion diving
centre, based at the Jumeirah Beach Hotel,
Dubai. The Pavilion is the only DDI affiliated
diving school in the UAE. At the Pavilion there
is a growing group of DDI trained diving
instructors and dive masters (30 trained to
date) extremely keen to help people with
disabilities from the local community to
experience the world underwater. So if you
know of someone who is keen to try but has
always been told “it’s impossible”, come and
try. We are here to help and to show you that
“anything is possible”; we want to see that
first breath underwater and for you to feel the
freedom of movement of flying through space.
Contact the Pavilion by email at
divecentre@jumeirah.com or call 04 406 8828
and ask for Shay or Elena.
Since the last update the whale sharks have
started to return to the area. Sharks are
now being reported in numbers both inside
the Arabian Gulf and on the East Coast/
Musandam. The weekend of April 17th saw
five individual sightings, three
of which were in Fujairah
and two in the Musandam.
From researching whale
shark occurrence since 2004,
it has become clear that the
whale sharks are seasonal
to the region, appearing
in April and disappearing
in November. Sharks are
spotted occasionally in the
winter but encounters are
few and far between.
Apart from the return of the
sharks, April also saw our first
2012 Musandam field survey
conducted. Armed with a
satellite tag, we did three dive surveys but
were not lucky enough to have a whale shark
encounter. Although we didn’t see any sharks
on the day, we did collect some important
plankton samples and environmental
information. Many thanks go out to Nomad
Ocean Adventures for supporting the survey
and for their continuous support to the
project.
This season, in a show of appreciation, we will
also be giving away Sharkwatch Arabia window
stickers to dive centres and individuals who
have been supportive to the project over the
last couple of years.
Watch out for the new SWA supporter
sticker, which will be given to dive
centres and individuals who have
shown support for the project.
Please remember to send in your sightings if
you encounter a shark, even if you don’t have
a photo. If you do try to take an image, please
make sure you photograph or video the flank
of the animal behind the gills for spot ID
analysis. Preferably both sides if possible, but
if that’s not possible then one side is fine. If
you are diving with buddies (which we hope
you are), try to make sure one of you takes a
good look underneath the shark to see what
sex it is.
We would like to take this opportunity
to thank the following individuals for their
support and for sending in sightings to
Sharkwatch Arabia: Christophe Chellerpermal,
Nomad Ocean Adventures, Rima Jabado,
Divers Down, Sheesa Beach, Kirsty Kavanagh
and Michael Etter.
This image shows the area of the shark used for spot pattern ID. Every whale shark has a
unique spot pattern on both the left and right side. If you encounter a shark, try to get both
sides but, if that’s not possible, one side is fine.
If you encounter a whale shark in this region,
please visit and
report your sighting.
NEWS
My Dive
in the Dubai Aquarium and Underwater Zoo
Feature: Caitlin Tolliday, age 10
Learning the Ropes of the IDC
Al Boom profiles new
instructors joining
the UAE’s diving community
Feature: Sam Thomas (PADI Master Instructor)
Al Boom takes great pleasure in
welcoming Bechir Chehab, Ranjith Punja, Daria
Atrash, Houssam Mneimneh, Tom Crabbe and
Necholy Mindajao into the prestigious world
of scuba instruction. Congratulations to you
all on becoming PADI Open Water Scuba
Instructors! These six candidates attended
the PADI Instructor Development Course
with Al Boom Diving Club during March and
April 2012, and successfully completed the
Instructor Examination on April 18 and April
19 resulting in a 100% pass rate. The Instructor
Examination was conducted at Jebel Ali Golf
Resort and Spa, as well as Jumeirah Open
Beach, by PADI office staff. Instructor
Examinations happen twice a year in Dubai.
My name is Caitlin Tolliday, I am 10 years old.
I recently qualified as a PADI Scuba Diver and
when I was asked to do a shark dive in the
Dubai Mall Main Aquarium tank, I jumped at
the opportunity.
On Thursday after school, I was on my way
to have the best experience of my life! At the
Aquarium I was met by my guide Ryan; he was
very nice, kind and funny! Ryan took me to
watch a movie about diving with sharks. The
movie told me how to behave when diving
with the sharks, and what to expect, so I was
put at ease. After the movie had finished we
went to assemble our gear, ready to dive. They
had all the equipment I needed, including a
wet suit and flippers (or should I say ‘fins’)!
Soon after we were ready, I started to get a bit
nervous, but Ryan made me feel very relaxed!
We made our way down the stairs to the
diving platform. We did our final buddy checks,
and dangled our feet into the Aquarium water.
The beautiful, clear, sapphire blue water felt cold
at the start, but I wasn’t going to let that spoil
my experience, and to be truthful, I got used
to it very soon and didn’t feel cold again! As
we were preparing to enter the Aquarium, a
huge Sand Tiger Shark swam past where we
were, almost touching my leg! Wow, this was
going to be amazing.
We proceeded to enter the tank and started
down the blue rope, we stopped to equalize
and look around, before we carried on. We
soon came to a rock just above the tunnel
that goes through the Aquarium Tank, I was
really surprised how small the people looked
walking through the tunnel! I had a fish eye
view, and it felt funny that I was finally seeing
what the fish in the Aquarium see every day!
My breath was sending bubbles that floated up
like little jelly fish to the surface. There were
really big rays and fish swimming through the
bubbles, they must love the feel of them. The
rocks and corals were all around me, as were
many different kinds of fish, Rays and Sharks,
fish and eels, it was so beautiful!
After swimming at around 5m for a while,
we finally descended to the bottom of the
tank, which is about 10m according to my
depth gauge. There were just as many fish at the
bottom of the tank. I took lots of photographs,
especially of the Guitar and Zebra sharks lazily
sleeping on the bottom of the tank.
We stayed at the bottom taking pictures for
a while, I was surrounded by magnificent fish
and sharks…I thought about how lucky I was
to experience so many extraordinary marine
animals, and all in one place! My favourite fish
was called ‘Bob’ and he is a Giant Grouper
from Australia.
Unfortunately my air was running low so we
had to return to the human world at the
surface. So I said goodbye to my marine friends,
and really hope I can dive again in the Aquarium
tank to see them all again some day very soon!
I would really recommend this experience to
anyone, it was truly an experience of a lifetime!
All these candidates decided to continue their
education up through the PADI System of
diver education, realising that there is always
something new to learn when it comes to
diving. They started from PADI Open Water
Diver and continued through to earn the
Advanced Open Water Diver, Emergency First
Response, Rescue Diver and Divemaster
certifications.
Mr Bechir Chehab has an interesting case of not
beginning his diving career with PADI. He has
the NAUI (National Association of Underwater
Instructors) equivalent certifications for the
PADI Open Water, Advanced Open Water
and Rescue Diver courses; demonstrating that
even a non-PADI certified diver can become a
PADI Open Water Scuba Instructor! His new
certification allows him to conduct courses
and programs independently ranging from
Discover Scuba Diving all the way through
to PADI Divemaster. From the onset of the
course, Bechir maintained his sense of humour,
dedication and always kept a positive attitude
when tackling the dreaded topic of physics!
Conversely, Miss Daria Atrash completed all
of her PADI courses from Open Water Diver
to Divemaster, nurturing her diving career with
Al Boom Diving Club. Her mission now is to
continue onto the Master Scuba Diver Trainer
rating, which she is currently working on with
our PADI Course Director Mohamed Helmy.
Daria displayed a fierce determination and a
great willingness to learn while keeping IDC
staff on their toes with what seemed like a
never ending supply of questions!
While Al Boom Diving Club would love to
highlight all of the newly certified instructors,
on a more personal note as one of the IDC
Staff who personally witnessed the hard work,
dedication, commitment to ‘back to school’
studying, I am both proud of – and pleased
to welcome these six outstanding new
instructors to the UAE’s diving community.
Congratulations guys!
A NEW PROFESSION
Feature: Daria Atrash
If someone had told me last May that in a year’s
time I would become a diving instructor, I would
have laughed. A year ago, I knew about scuba
diving as much as about cybernetics. It exists, but
what it is exactly – is a very difficult question.
Everything changed when I moved to Dubai
from Moscow. My boyfriend, also from Russia,
suggested I try scuba diving. He was already
an assistant instructor at the time. I had a lot
of free time and no friends so I thought, why
not? It’s better than staying at home. I went to
Al Boom Diving and signed up for my Open
Water course. By the way, when I first came
to the UAE, my English was quite poor and
my first instructor, Sam Thomas, wondered if I
understood anything he was saying because he
heard only one answer from all his questions!
“Yes”. But he told me this much later, after we
became friends.
So after my Open Water course, I signed up
straight away for my Advanced, then Rescue,
EFR…everything went so fast. From August
2011, I became an Al Boom Diving trainee, so
I spent a lot of time in the dive center. My day
began with a beach dive at 8 a.m. and finished
at 6 p.m. after a usual pool session. Every day
from morning till evening, I was in the water –
it was difficult, but it was so useful, I watched
how instructors worked, how they talked to
students and how they handled problems
underwater. I learnt so much from those 6
months. Then I started my Divemaster course.
Now I felt like a professional. I was surprised
that I liked to help people. For example, on
the boat when we went to Fujairah, I ran my
first refresher session; you cannot imagine
how nervous I was! Thank god my English had
improved by that time.
Every time I was in Fujairah or the Musandam,
a new world opened up for me. I was like a
child pointing my finger at all the colorful fish
or beautiful corals, but I suppose to be fair, I was
already a professional guiding my own students.
Diving gave me a chance to learn something
new myself, to open new borders of my
personality. I meet new people everyday, some
of them are good friends today.
If you ask me what the best time of my diving
career was, I would answer without delay:
the IDC. Why? Because I found a new family.
For several weeks we studied non stop, through
our weekends, sometimes till 12 p.m. The dive
centre was already closed and we sat outside
on benches scrutinizing our papers under
the dim light of lanterns. You can’t imagine how
much fun we had. I want to thank my instructor
Sam Thomas and my Course Director
Mohammed Helmy over and over again for all
their patience and hard work. All my knowledge
is what they passed on to me. During our IE,
we had become such close friends that if one of us
failed in an area of the IE, we went through the
emotions together, and when we hit success,
we all felt the pride. We all passed our exams! It
was not easy, but we did it.
In a year, my life totally changed. I moved to
another country, took on a new profession,
met a lot of wonderful people and hopefully
finally found myself.
WHAT MADE YOU CHOOSE
SCUBA DIVING FOR A CAREER?
Feature: Sam Thomas (IDC Staff Instructor and EFR Instructor Trainer)
At a recent gathering I was asked a question
that does seem to be coming up quite
frequently when it comes to my incredible job.
You’ll be surprised how many people ask ‘Have
you ever been bitten by a shark?’ Or ‘Have you
ever been down to 500 metres?’ To answer
these ridiculous questions, I simply say, ‘Yes’!
Anyway, the question was: ‘What made you
choose scuba diving for a career?’ And my
answer was plainly, ‘Not sure’. Let’s keep things
simple. But, of course, I know full well exactly
why I chose a career where I get paid to do
what I love doing! I’m pretty sure the majority
of people can’t say that.
Well, to be honest, I never had it in my mind
to become a full time PADI instructor. I never
even imagined I would try diving until I came
to the nice, warm, tropical waters that Dubai
has to offer – apart from now – freezing!
NEWS
Let’s start at the beginning of my career, because,
after all, that’s where we all begin. Of course,
after working my way up through the Open
Water Diver and Advanced Open Water Diver
courses, I’m really starting to enjoy my new
hobby and trying cool things like diving at night.
At the start of summer 2009, I thought
about how during every single summer I turn
into the most useless and lazy slouch ever…
which of course, has changed now – more
or less. Time for a change! I called the owner
of one of the local dive operators, who just
happens to be a friend, and came to an
arrangement that I could work for him during
that summer whilst getting the Divemaster
course in return. Woohoo – my first ‘official’
job! Well all I can tell you is that I got first class
training and an astonishing tan by the end of it.
I think the Divemaster course is the ‘NO
U-TURNS’ point for most people. Nearly
everyone wants to continue their education
onto the instructor level courses by attending
an Instructor Development Course held at
some dive centres. For me, regrettably I’m like
a child, when I want something it has to be
right this very second. So at the end of my
Divemaster course I decided I couldn’t wait
for the next instructor examination, and off I
flew to Thailand – awesome! Parties, women
(I think) and diving! I met up with my Course
Director and 14 days later – drum roll please
– received my Open Water Scuba Instructor
certification. I have to tell you, the first beer
after that tastes good!
Time for a vacation I think – a proper one
this time. All high and mighty with my new
credential I went off to places like the Maldives
and Mauritius, eagerly waiting for the operator
to ask to see my diving license…
Now I’m back in the UAE, teaching my favorite
hobby in the world, and still learning new
things on a daily basis. I get to guide people
who are new to diving, and can watch them
experience the same range of emotions you
felt yourself oh-so long ago.
Diving has taken me all over the world and
into places no one has ever ventured before
– just amazing really. Where I’m working now
at Al Boom Diving, I have the opportunity
to progress through all the instructor levels
and assist with our Instructor Development
Courses. I think if someone asks me again
‘What made you choose scuba diving for a
career?’ I’ll actually share my story this time.
I COULDN’T
RECOMMEND IT MORE
Feature: Tom Crabbe
Just recently, I have become an Instructor at Al
Boom Diving. The role of Instructor has been
a goal of mine for the past year since I started
diving again, now making it my fourteenth year
of diving with hundreds of dives under my belt.
Now that it is a reality, it has certainly proved
worthwhile, not least for the added respect
from fellow divers. That is not to say that
Divemasters aren’t respected; they just fall into
an obscure level of training that not all divers
are familiar with. When you talk to someone
outside of diving and mention that you can
teach them how to dive because you are an
instructor, your years of experience in diving
are more apparent thanks to your title.
One of the most important aspects in my mind
which makes an instructor better at their job is
the confidence and fluidity in which they teach.
Staying calm and collected is also a big plus,
seeing as the majority of problems in diving are
stress-related. Having worked closely with many
instructors at Al Boom for the last year has
helped instil this attitude in me. The best practice
is slow and steady: it definitely wins the race.
Even so, one of the first courses I have been
teaching, the PADI Seal Team program, has
made me a little anxious, especially given my
recent certification upgrade and the
circumstances of the course. Teaching adults
is one thing, while teaching a group of children
is very daunting, especially as for many of them
it’s their first experience in diving. Fortunately I
planned everything meticulously in advance
and had the assistance of fellow instructors,
Daniel and Randy, working with me. After the
introductory session and the first few skills,
I can safely say that we had a near-perfect
lesson and the kids had a great time. With
more dives planned for them in the coming
weeks, it should end up being a great course.
I hope to make every course that I run just as
memorable for both my students and myself
so that I can end up being a role model for
future divers.
The aspiration to become better has not
ceased with me having reached Instructor. I
always intended to go into professional diving
and build on my knowledge and skills as well as
getting together a good set of equipment for
every occasion. Right now this means getting
started with specialties and eventually having
Master Scuba Diver Trainer as my title. I keep
adjusting these goals raising the bar higher for
myself and so can see myself doing this for
many years to come. As a lot of people told me
when I first got into the industry, the lifestyle
is good and the pay is reasonable, but not
extravagant, and so far everything stands true.
The life of an Instructor is incredibly enjoyable
if you make it interesting and fun, so for
those looking to go into instructing I couldn’t
recommend it more.
Al Boom Diving: 04 342 2993
Fish Spotting
Feature: Andrew Roughton
I was recently sat on a long-haul flight scrolling
through the movie channel when I stumbled
across a comedy film called The Big Year.
The synopsis described the film as the story
of an annual, North American bird watching
competition, which sounded a little lame.
However, with Steve Martin, Jack Black, and
Owen Wilson starring, I assumed that I would
be in for a few laughs. And thankfully I was
right. The Big Year is a gentle comedy with a
pleasant plot, likeable characters, and some
truly stunning North American scenery.
However, for me, the main success of the film
is its celebration of wildlife and the wilderness,
which led me to draw parallels between birdwatching
(or “birding” as the film insists is the
correct terminology) and recreational diving.
Firstly, birding, just like pleasure diving, is
an excuse to leave the city and enjoy pure,
unadulterated nature. Secondly, isn’t spotting
birds just like spotting fish? Isn’t the main topic
of post-dive conversation the different species
you’ve spotted? And isn’t the conversation
always more exciting the rarer the fish you’ve
spotted? And thirdly, the bonds built between
the birders in The Big Year are just like the
bonds built between divers on recreational
dives. It’s about unifying people in a celebration
of nature in exactly the same way. Ok, birding
is arguably much geekier. Binoculars, anoraks,
and flasks of tea could never be considered
as cool as BCDs, Masks, and Fins. Nonetheless,
the similarities between the fundamental joys
of birding and diving are undeniable.
Now this doesn’t mean that I’ll be trading
in my BCD for a pair of Binoculars any
time soon, but it does reiterate the joys of
diving for me. Just as Steve Martin quits his
high powered job to undertake a “Big Year,”
Jack Black walks miles to spot a Pink Footed
Goose, and Owen Wilson misses New Year’s
celebrations to spot a Snowy Owl, I will
continue to hammer my credit card, forget the
dives in two meter visibility, and revel in the
joys of spotting Lionfish, Yellowtail Barracuda,
and Picasso Triggerfish.
A successful PADI IE
The Wall of Fame keeps on growing at the
Atlantis Dive Centre.
I am very proud to add two new photographs
to the Instructors Wall of Fame; namely Talal
and Ice. In April, both successfully completed
their PADI Instructor Examinations. You guys
did an incredible job, the whole team at the
Atlantis Dive Centre are very proud of you.
With a 100% success rate in the PADI IE, the
team are looking forward to the two remaining
IDC/IEs for 2012.
A WEEK WITHOUT WALLS
April 22nd-26th saw the Atlantis Dive Centre
open its doors to the GEMS World Academy,
who joined us as part of their ‘Week Without
Walls’ (WWW) program. The courses on offer
for the week ranged from PADI Open Water
and Advanced Open Water to specialty courses.
The week however was not just about diving.
As the Atlantis Dive Centre is 100% AWARE,
each student also took part in a Project
AWARE course in Coral Conservation and as
it was Shark month, some very lucky students
completed the AWARE Shark Conservation
Specialty course and enjoyed diving in the
Shark Lagoon at Atlantis.
The week was action packed and loads of fun
for both students and instructors. By the end
of the week, a total of 103 certifications were
completed but that was not the end…
Friday was underwater cleanup day, and as
this was the weekend (technically after
‘Week Without Walls’ was completed), it was
not mandatory for students to take part.
I was blown away by the turnout for the
cleanup. The conditions were perfect; water
temperatures were great and the visibility
superb. The students collected a huge amount
of debris and then spent the time after the
dive counting the debris in order to submit to
Project AWARE.
We look forward to continuing with all those
students in the coming months, and next year’s
‘WWW’ will be even more amazing.
Some feedback from the students:
“From theory to wreck dives to shark tanks,
we always had a great time. Not only did we
have fun but we also learned. Everyone got
along really great and the dives were awesome.
The wrecks together with the fish we saw were
fascinating. I’m really glad I went scuba diving for
Week Without Walls.”
Nina Bernhardt
“I am absolutely in love with sharks now. Before the
trip I was a bit worried about diving with sharks, but
after all the training, buoyancy skill practices, I felt
a lot more confident and soooo happy that I did
the shark dive. Thank you for an amazing week.”
Anna Pocs
“I really liked Week Without Walls this year,
because there was a lot of interaction between
the people in the course. We did a lot of fun
activities over the week and used every day to
its maximum. I especially liked the underwater
photography specialty course because you got to
record all the interesting things you get to see
underwater. I will definitely continue extending
my diving adventures next year at WWW.”
Marvin Arnold
14 DIVERS FOR THE ENVIRONMENT, JUNE 2012
JUNE 2012, DIVERS FOR THE ENVIRONMENT 15
NEWS
FAMILY fun day
Saturday, the 9th of June, from 3pm to 5pm, is
‘Family Fun Day’.
SHARK AWARENESS
Sharks are not only incredible to see underwater, but also, I think, the highlight of most people’s dives.
Sharks are also crucial to the marine ecosystem; without them, our oceans are in even more
trouble.
This is not one of those issues that doesn’t affect us; it’s happening in our waters at an alarming
rate. Those of you who have dived the Musandam waters for many years will know
what I am talking about. Go back to your logbooks from 10 years ago and see how many sharks
you saw in one dive, and then go to your logbook from last weekend…say no more.
For anyone aged 8 to 88… The Atlantis Dive
Centre is hosting a fun family day where we
invite all non-divers to come and experience
what it’s like to breathe underwater. So please
tell your non-diving friends and come along
with them and jump in our dive pools.
Rescue Refresher
Life in Dubai goes quickly, very quickly. I bet it
seems like just the other day that you did your
PADI Rescue and EFR course! Or was it…
On Saturday, June 16th, the Atlantis Dive Centre is
offering you a chance, for FREE (don’t tell the
boss), to come to the dive centre and update
your rescue skills, and for those whose EFR
has expired (it is only valid for 2 years), we will
also be running an EFR Update for you in the
afternoon. The EFR update will cost AED 300.
Even if it hasn’t been 2 years since your
EFR or Rescue, come up anyway and refresh
your skills. Learn how to rescue divers on
rebreathers and Tec gear, you never know
when you might be able to make a difference
to someone’s life!
There are many issues that need to be addressed globally, but we can each do our bit today. If
you have not yet signed the shark petition, please give a ‘Shout out to the Sharks’ by logging onto.
At the Atlantis Dive Centre, we are running the AWARE Shark Conservation Diver Course. Not
only is the course a huge amount of fun, but it will also contribute towards the conservation of
sharks by building awareness of the issues and inspiring us all to speak up for them. The course
culminates with a thrilling dive in the Hotel’s shark lagoon.
If you would like to know more about what you can do for the sharks, turtles and whale sharks,
join in on our monthly ‘Dive Against Debris’ and the ‘AWARE Shark Conservation Diver Course’.
Please contact Jason at: Jason@atlantisdivecentre.com
GAP YEAR STUDENTS
So you’re leaving school this summer! Finally
no more school!!!
Well, if you’re one of the many students who have
decided not to go to University straight away and
are thinking about taking a Gap Year, then read on.
This summer, the Atlantis Dive Centre is
running a PADI Instructor Development
Course for Gap Year students giving you the
opportunity to become a PADI Professional,
travel the world, work in some exotic locations
and get paid at the same time!!!
Live the dream
Throughout the summer you will not only be
learning to become a PADI Professional, but
also gaining valuable experience in the dive industry.
You will learn what it takes to become an
excellent PADI Instructor. We will be working
with you to create your diving CV and assisting
you with job placements.
To take part in this program, you must be 18
years of age and a currently certified diver.
To get more information about this incredible
opportunity, please contact:
Jason Sockett | PADI Course Director
Jason@atlantisdivecentre.com
NEWS
North or South
FEATURE Paul Sant, owner – Divers Down
When we look at buying retail items such as a
microwave, we look at a few models and buy
the one that suits our needs – maybe with an
analogue timer or a pizza setting. Either way it
is still a microwave.
The same can be said about diving the Gulf
of Oman: each location offers great diving.
There are differences in dive sites and dive
operations, but basically you are diving the
same waters.
For the last 12 years, I have dived the UAE East
Coast southern sites between Dibba Rock and
Ras Qidfa Wall on a daily basis, and on a few
occasions I have joined one of our dive centre’s
two-night Northern Musandam trips.
When asked “Where is it best to dive?” by our
guests, I will in general say Fujairah and offer
my main reasons:
• Practicality
• Marine life
• Dive site choice
• Logistics
Practically it makes sense. You drive over from
Dubai or Abu Dhabi and go diving. If your
family or friends are non-divers, they can relax
on the beach or at the pool.
You can bring your lunch and as many bags
as you like and they are kept safe in the shop
while you are diving. Or you can buy lunch at
one of the hotel’s 3 food outlets.
If you don’t have your own gear, or if you forgot
something, there is no problem, everything is
available right here, and you don’t have to miss
out on a dive.
A big plus is that there are NO border
checkpoints so everyone can come here
without the risk of being stopped!
Marine life is the same as further north, but
the dive sites are smaller, which means that
with local knowledge of the dive sites and
their critters, our dive staff can easily find what
you wish to see.
A great example was when two staff members
from a dive centre on the west coast recently
came to us for some diving with the “request”
to see seahorses and sharks! In one dive, both
requests were met and much more as a bonus.
With more than 930 different species of fish,
the underwater world at these sites is every
bit as astonishing as further north.
Dive sites are numerous: 14 that we dive
regularly, from Dibba to Ras Qidfa, including
wrecks, coral reefs and walls. We don’t have
to stick to a certain itinerary, so we are able to
plan the dive sites according to the levels and
wishes of our guests.
The sites are mostly close to shore and are
great for the open water level divers as well
as experienced divers, offering depths from
6-32m with minimal currents, and no down
currents.
World-class Discover Scuba sites are on our
doorstep for those exploring the underwater
world for the first time. Recently, two guests
on their first ever dive saw a whale shark, black
tip reef sharks and green turtles! On my first
ever dive, I saw a park bench at Swanage pier.
Logistically it is a no brainer: less than two
hours away and situated in a 5 star hotel, you
can drive over for the first two dives and be
back in Dubai for 4pm after a great day of
diving.
Or you can sleep in, come for the 3pm dive
and night dives, stay over and dive the next
day. With summer nearly upon us, would you
rather sit on a boat for the whole day, or come
back to air conditioning and a chilled pool
between dives?
The Musandam is a fantastic dive destination,
and has a lot to offer. Yet, it is by no means
accessible to all divers, which is why we at
Divers Down only offer the weekend dhow
charter. It is more comfortable than a 2
hour boat journey each way for two dives,
and inexperienced divers can even do their
advanced course during the trip; thus enabling
them to dive the more advanced sites.
It is definitely a worthwhile trip, made all the
better by the fantastic dhows with AC cabins,
en-suite bathrooms and great food.
The main reason I love the Musandam is not
just the diving – it is the place itself, the high
mountains and natural beauty. You do not
experience that on a day trip; you will not see
the bright stars at night or splash in the water
for a dawn dive followed by a hearty breakfast.
So that is what I tell the guests looking for
advice on where to dive. We recommend that
divers who are coming on holiday dive
both destinations, mainly because the two-night
trip is special.
If you would like to know more about our
local dive trips or a Dhow weekend, you can
find the information on our home page.
NEWS
Calling all rescue divers!
So I have your attention, but are you a rescue
diver? Meaning: are you current in your
primary and secondary care protocols? Have
you updated your rescue diver skills in the last
6 months? If not, this should interest you…
Divers Down has been running rescue and
first aid workshops every 3 months and as
part of that workshop you get the chance to
renew your EFR (Emergency First Response)
certification at a very affordable rate. (EFR is
only valid for 24 months).
So what do we do on our refresher day?
The day has a serious subject, but Paul Sant
will still make it fun and add his unique twist
to many of the scenarios and skill applications.
As a First Aid at Work (HSE) instructor and
ex-commando medic, Paul will ensure you
are brought up-to-date with the latest ILCOR
standards.
You will be shown all the latest protocols and
have the opportunity to practice the skills
prior to a few life-like scenarios (involving a
lot of ketchup)!
Once the first aid is completed, it is time to get
wet. We use the swim area in front of the hotel
to practice skills that may have been forgotten
or missed. Paul demonstrates techniques for
various skills and gives you advice on how to
deal with the various equipment types out
there (such as twin sets, harness systems and
integrated weights).
You will PRACTICE:
• Missing diver
• Search patterns
• Lifting unconscious divers from the bottom
• Mouth to mouth
• Pocket mask use
• Diver tow whilst providing rescue breathing
• Extraction onto a boat and shore
At the end, we run a scenario where you will
have the chance to become a scene manager
and an assistant, allowing you to bring all your
refreshed skills together.
The day starts at 09.00hrs and finishes at
17.30hrs. The workshop is only AED 250
(excluding equipment) plus an additional AED
150 if you require your EFR certification to be
renewed.
The next session will take place on the 30th of
June 2012 – contact info@diversdown-uae to
secure your space.
Breaking New Ground
FEATURE NEIL MURPHY, OPERATIONS MANAGER AT SHEESA BEACH
In our efforts to keep our divers coming
back by varying our diving areas, we have
over the last year gradually started pushing
further north into the Musandam. In April, this
culminated in us putting together the Salamah/
Fanaku (Quoin Islands) trip. These islands are
in the Straits of Hormuz and lie just over
100km north of Dibba in Oman.
There are 3 islands located in this remote
area namely, Salamah, Fanaku and Didemar (an
Omani military base is located on Didemar
and cannot be dived). It is a long haul by
speedboat and the time taken to get there
is roughly 2 hours and 30 minutes depending
on the number of people on board and how
much equipment is being carried. However, it
is well worth it! The divers also need to be
experienced in drift diving as well as surface
marker buoy deployment and it is certainly
not for the faint hearted.
The visibility was fantastic and the marine life
incredibly diverse; we had encounters
with at least 6 or 7 leopard sharks, rays galore,
turtles, great barracudas, massive snappers, the
healthiest coral I have seen in the 2 years I
have been here, and a kaleidoscope of colour.
The topography on Fanaku is impressive: you
weave your way between huge coral heads
that come up to the surface from between
5-10m underwater. Snorkeling between dives
during our surface interval time, we saw black
tip reef sharks (and they were not that small
either), huge schools of batfish, and turtles.
It is by far the best diving the Oman/UAE area
has on offer and we are familiar with the sites
now and know how to navigate them. If you
are looking for adventurous diving and have
the relevant experience for the area, then this
definitely is the new frontier for diving for you!
We had the privilege of seeing two leopard
sharks mating, a ray we still do not know the
name of and the biggest school of bat fish I
have ever seen. During the winter months we
do two dives on the islands and a third on
the way back in the Leema area. However, as
temperatures soar in the summer, we only do
two-tank dives in consideration of our clients’
safety and the long drive back to either Abu
Dhabi, Al Ain or Dubai.
Sheesa Beach Travel & Tourism was traditionally
a dhow cruise company only, and since the
inception of the dive centre we have seen
tremendous growth in the company. We are
therefore pleased to announce that our
brand new dhow will be ready for action on
the 25th of October this year, and she will allow
us to push into new areas for our diving liveaboard
safari trips. The dhow will be tailored
to divers’ expectations and will encompass all
the comforts and facilities that divers require.
She will also be the only liveaboard to
offer on-board Nitrox fills for those certified
to dive Nitrox, and will run itineraries from 1-7
nights. This dhow will increase our fleet
size to 8 dhows.
NEWS
A Personal Approach to Diving
with EASY DIVERS EMIRATES Diving Center
feature Steve Tribble
We at Easy Divers Emirates are excited to
have recently opened our latest PADI dive
center in Dubai.
After phenomenal success with 4 co-owned
dive centers in Sharm El Sheikh, the coast
of the UAE was an obvious choice for
opening a new center in the region. The
waters here are perfect for year round diving.
What better way is there to beat the summer
heat! Though it may be 45 or more degrees
outside, the 28 degree waters off the east
coast are a refreshing way to cool off. Water
near the beaches in Dubai does get a bit
warm during the summer, and the waters can
get a bit murky due to local construction, but
a quick boat ride away, nearby cities such as
Sharjah offer great conditions for diving, with
numerous shipwrecks for both recreational
and technical divers and a great training
ground for new divers or those who wish to
further develop their skills.
A short drive away on the East Coast, fantastic
dives are to be found. We are running weekly day trips
to Fujairah and Musandam where waters of
the Indian Ocean bring in a variety of sea life
including spectacular coral reefs, an abundance
of fish, rays, eels and even the seasonal whale
shark. Much to our amazement we even
spotted a Mola Mola, also known as a sunfish
or moonfish.
For the adventurous ones, we run overnight
camping and diving trips (during the cooler
months) with a delicious bbq under the stars
on the shore of the Musandam Mountains.
Live-aboard trips can be arranged anytime
from both the east and west coast of the UAE.
UAE Diving
Many do not realize the vast number of
wonderful sites available to divers and
snorkelers in the UAE. Keeping with our
personal touch and passion for diving,
we frequent many pristine sites and are
continuously exploring new dive sites. We are
fortunate to have on our team Vyacheslav,
who has been diving and leading tours in the
UAE for the past 12 years. From wreck sites
to the best time and place to be for great
underwater sightings, he has never let us down.
Our dives are based on both experience of
the area and the desires of our clients. We
customize our trips based on interest and
maximum enjoyment of our clients.
Training
Our location in The Lakes Club offers a
first class experience for dive training or
just freshening up your diving skills. All of our
students have access to the club facilities
which include not just the pool where we
train from, but also the Jacuzzi, the playground
with waterslide for the kids and a restaurant.
Between lessons, students can relax and enjoy
the facilities offered making a day of training
relaxing and fun. We offer a comfortable and
fully equipped classroom.
Students have a choice of doing their dives on
the east or west coast from shore or by boat.
Our class schedules are based on our clients’
needs. Training can be arranged for groups or
individuals but all classes are given a personal
and private approach. Your instructor will
be available based on your needs. Confined
water dives are conducted in a temperature-controlled
pool anytime from 6am until 10pm.
Our staff is PADI certified with many years
of experience in diving and teaching, plus
backgrounds in recreational and technical
diving.
Other Services
From our PADI centers in Sharm El Sheikh, we
can provide excursions to the best of the Red
Sea. New divers or those advancing their skills
can choose to begin in the UAE and complete
their training in one of our facilities in Sharm.
The latest addition to the Easy Divers Group,
opening in June is located in the luxurious
Sharm Grand Plaza. This location offers one of
the best house reefs in the area! The Red Sea
provides some of the best diving in the world,
from beautiful corals to a huge array of life.
While teaching from Sharm, our Operations
Manager, Olga, has sighted many unusual and
rare species including the reclusive whale shark
and giant mantas. We also have affiliations
with centers in many parts of Asia including
Thailand, Malaysia and Indonesia where you
can dive rarely dived, pristine waters and visit
the majestic Komodo dragon between dives.
We can also provide private trips where you
are welcome to hire a boat for a couple of
hours or days for private excursions. From
dive boats to luxury cruisers for diving, fishing,
or private parties, we can provide boats from
33 to 70 feet with all of the amenities and
services of a 5 star operation.
If required, transportation can be provided.
Transport for regular dives is usually provided
in one of our vehicles. For those wishing
VIP transportation or to pick guests up for
a company or private outing, Limousine
pickup and drop off can be provided by an H2
Hummer or Chrysler/Lincoln Limo.
Our Philosophy
Our objective is to ensure that all divers,
from beginners to professionals, enjoy their
underwater adventure. We are known for
personal attention to all our guests.
Rather than making a dive trip feel impersonal
and hectic, we keep the size of our groups
small and employ multiple dive boats to
ensure that the trip is comfortable and the
experience is fun and relaxed.
Our day trips allow for 2 to 3 dives in a day
and rest assured you will not go hungry. A full
Arabic meal is provided onboard.
We provide a first class experience to all
of our clients for diving, snorkeling or just a
pleasant cruise in the sea.
Some of the PADI Courses We Offer:
Discover Scuba Diving
Bubble Maker & Seal Team (8 years old and up)
Open Water Diver
Advanced Open Water
Rescue Diver and Emergency First Response
Master Scuba Diver
DiveMaster
Padi Specialty Certifications
Our Services:
Scuba training for beginners and experienced
divers
Scuba and Snorkeling trips to Fujairah (Dibba),
Musandam, Dubai, Sharjah and the Egyptian
Red Sea
Boat rentals for pleasure trips and fishing
Scuba equipment rental, repair and service
And More…
Easy Divers Emirates
The Lakes Club
Emirates Hills
Dubai, United Arab Emirates
Phone: +971 04 447 2247
NEWS
NEW WEBSITE, NEWSLETTER AND AN EXPEDITION
FEATURE CHRIS CHELLAPERMAL, NOMAD OCEAN ADVENTURES
We are busy as ever on the Musandam coast.
We have finally managed to make time to
organize some exciting things. First of all,
in June we are launching a new website for
Nomad; it will have blogs and video blogs,
but unlike other dive centers’ sites, ours
puts social media first. I think it is a fresh
take on dive centers for the region and I think
many will appreciate its features.
We are also launching a newsletter at the same
time; it will come out once a quarter to
begin with, and it will not only feature news
about Nomad but will also have articles
submitted by other divers and sea lovers.
In terms of diving, we are now offering some
fun specialties for photographers, like the self-reliant
diver specialty. If you’re an experienced
diver and a photographer, this was designed
with you in mind. If you ever thought of being
able to get some space and some quality time
away from the crowd, this is it!
Throughout the summer we will have
northern trips every Friday for 3 dives, so
don’t miss out and come and explore the
north of the Musandam, it is definitely worth
it. The place is lush with life and corals are by
far the best the region has to offer with still
many unknown dive spots!
We are also organizing a trip to the Azores,
Portugal; if you are looking for a getaway trip
during Ramadan to beat the heat, why not join
our group trip!
We are heading out between the 19th and the
26th of July to Portugal, to the amazing islands
of the Azores. The dive sites over there are
stunning and the program will have manta and
shark dives scheduled! The visibility is incredible
with deep lush blue and the fauna is rich!
We will be using Nerus Dive Center, which
has won many awards in Portugal
for its professionalism. They are offering us
a super price of 700 Euros per person for
7 nights’ accommodation in villas that can
accommodate up to 4 people. There will be
5 very busy half days inclusive of 5 local dives,
one manta dive, one shark dive and one night/
deep dive on a wreck from WW2. The price
does not include flight, meals or equipment
rental. Meals in Portugal are quite cheap and
delicious. We will take with us a maximum
of 12 divers. The trip is booking up quickly
so if you want to book now, please contact
Nomad. Bookings close end of June!
Contact Chris at chris@discovernomad.com
for more information.
Or call +968 2683 6069
NEWS
Divers Take the Plunge TO Clean Up the CAPITAL Ports
As part of International Earth Day, over 150
volunteer divers and 50 land based volunteers
from all over the emirates joined together and
participated in the two day Abu Dhabi Ports
Clean Up. The underwater clean up drive is
part of Abu Dhabi Terminals’ effort to keep
the capital ports and oceans clean of marine
debris and to conserve the delicate aquatic
environment.
The event was supported by Environment
Agency of Abu Dhabi, Center of Waste
Management of Abu Dhabi, CNIA, Department
of Economic Development, Takatof, RAK
Police, Emirates Diving Association, ADMA,
GASCO Diving Club, Borouge, ADGAS, UAE
Armed Forces, Al Mahara Diving Center,
Emirates Volunteer Association and Lavajet.
Divers took the plunge into the five port
areas: Mina Zayed, New Free Port,
Municipal Port, Fishermen’s Port and Mussafah
Port, and safely and methodically brought
up an estimated 15 tons of marine debris
including construction materials, old tires,
plastics, glass, iron pipes and even a ship funnel.
Commercial divers from ADMA also pitched
in with surface-supplied feeds to carefully rig
up the large pieces of marine debris, which
were brought up by a commercial crane.
The marine debris was then collected by the
Center of Waste Management of Abu Dhabi
and some of the items were sorted into the
recycling units present at the cleanup.
Mr. Abdullah Al Muharrami, Deputy CEO of
Abu Dhabi Terminals, enthused: “The initiative
has been very successful and we are very
excited at the level of participation as some
volunteers came from far away emirates such
as Ras Al Khaimah and Al Fujairah to take
part.”
Abu Dhabi Terminals plans to launch this
as an annual event to continue its initiative
of protecting and preserving “Abu Dhabi’s
key assets” and “encourage companies and
individuals alike to work together to protect
them.”
These clean up dives highlight the involvement
of the community from the private and public
sectors and the unified collaboration to help
safeguard and conserve the underwater
environment for this and future generations
to enjoy.
NEWS
PADI Swim School arrives in the UAE
feature Cassie Christman, Starfish Aquatic Institute Swim Instructor Trainer
and Lifeguard Trainer
Al Mahara Diving Center is pleased to
announce the arrival of the PADI Swim School.
PADI has partnered with the US based Starfish
Aquatic Institute© to create a swim school.
The PADI Swim School curriculum is designed
for students ages 6-months to adult. Students
participate in learning activities that allow
them to explore the water in a creative and
comfortable environment. Correct swimming
techniques are taught from the very beginning!
The PADI Swim School curriculum is made up
of several courses that are taught by trained
and certified swimming instructors who work
under the direct supervision of our course
director. The swim school comprises three
programs. The StarBabies and StarTots course
introduces core competencies by providing
instruction to the parent or caregiver about
how to help develop aquatic readiness. The
purpose of this course
is to develop in very
young children a high
comfort level in the
water while at the same
time training parents
in water safety and
drowning prevention.
The Swim School
course is designed
for students from
5 years old up to
adult. The course is
designed to improve
comfort and skill in
the water, regardless
of past swimming
experience. The classes
are organized according to age and skill level.
Our instructors are experienced and qualified
to assess each student to determine the
appropriate level for the student to be placed in.
NEWS
Kindergartens pledge TO be Friends of the Sea
feature Kathleen Russell, EDA Abu Dhabi COMMITTEE COORDINATOR
Another way to celebrate Earth Day this year
was to take a visit to the local kindergarten
class and talk about all the cool marine life
we can see as divers in the local UAE waters.
I had brought a friend with me, “Mr. Sharky,”
who was the king of the sea and wanted
to spread the message to the 4 and 5
year olds. Our presentation included colorful
images of the beautiful sea and its inhabitants,
the negative impacts such as pollution, and a
question and answer session on how we can
positively impact the marine environment.
The school-age audience also made a pledge
to spread the message about protecting
apex predators like sharks, as well as
committing themselves to be eco-warriors
and becoming “friends of the sea.” They all
agreed not to use plastic bags anymore and
to tell their parents to use a reusable shopping
bag like the Carrefour eco friendly-reusable
bags when they go shopping.
Throughout the Canadian International
School, students celebrated Earth Day by
reducing the amount of class waste during
lunch and participating in a poster contest to
highlight Earth Day and its natural resources.
We were proud of the school’s initiative to
build awareness about the impacts of reducing
waste and the students’ actions to protect
and conserve the environment by reducing,
reusing and recycling.
The final course is the Starfish Stroke School.
This course is designed to refine freestyle
technique and to teach the backstroke,
butterfly, breaststroke, and more!
Students progress at their own pace in a small
group setting. The Starfish curriculum of the PADI
Swim School specializes in integrating water
safety into the program and communicating
important safety concepts to students and
parents. Experiential activities and a holistic
approach to swim instruction make for a
positive, fun, and successful learning experience
for students. Al Mahara Diving Center is proud
to be the first authorized training center for
the PADI Swim School in this region.
If you would like more information about the PADI Swim
School or details about the program, please email Ms Cassie
at: swim@divemahara.com.
NEWS
Beach AND Underwater Cleanup – A Collaborative Effort
NEWS
Johnson Controls (JCI), a global leader in
automotive experience, building efficiency
and power solutions, through its Dubai
manufacturing facility, partnered with the
Filipino Scuba Divers Club UAE (FSDC) in a
beach and underwater cleanup program in
early May. This was held as part of the Blue
Sky Involve initiative, Johnson Controls’ Global
Social Responsibility Program.
On the morning of May 4th, Cleanup Day, 42
Johnson Controls employees and their families
and FSDC divers got together to collect the
rubbish that littered the beautiful Jumeirah
beach and its underwater environment. Magdy
Mekky, Johnson Controls Vice President
and Managing Director – Middle East, led
the Johnson Controls team with example,
motivating them to exceed expectations.
Johnson Controls’ “Blue Sky Involve” is an
employee-driven volunteer program which
encourages employees to form volunteer
groups and contribute to the local community
by supporting environmental stewardship and
leadership development projects. Earlier this
year, personnel from Johnson Controls Dubai
manufacturing facility approached FSDC to
propose a CSR activity in line with the “Blue
Sky Involve” initiative. FSDC responded with a
plan for a Beach and Underwater Cleanup that
would engage Johnson Controls employees
in improving the environment and spreading
awareness about the marine environment. The
plan was supported by the Emirates Diving
Association in its mission to conserve, protect
and restore the UAE’s marine resources.
Subsequent to approval from Johnson Controls’
US-based headquarters and a detailed analysis of
safety concerns, the teams started working on
the project immediately, and the first step was
a permit from Dubai municipality. The open
beach at Jumeirah was chosen as the venue.
The beach cleanup was manned by 27
volunteers. Close to 4,000 cigarette butts were
collected and disposed in an environmentally
responsible manner. The underwater cleanup
with 15 divers resulted in the collection of
several hundred kilograms of plastic bags and
other trash. Plastic bags, which take thousands of years to degrade, constitute the single largest threat to the underwater environment, resulting in the death of fish and turtles due to choking and ingestion. The information
recorded was summarized and sent to the
Ocean Conservancy through EDA to be used
in educating the public, business, industry and
government officials about problems arising
from marine debris.
For this Cleanup Day, participants who
gathered the most debris were
given special gifts courtesy of Johnson
Controls. Mr. Mekky likewise recognized the
contributions of FSDC, led by Tina Vitug
(Chairman) and of EDA, represented by
Reema Al Abbas (Project Manager).
Poseidon Tech brings the diver all the benefits of rebreather diving: getting closer to marine life, much more time underwater, and silent, bubble-free operation, along with this new and enhanced level of Poseidon’s patented safety technology. Poseidon Tech is designed and built for one purpose: the less a diver has to think about the equipment, the better their dive will be.
Poseidon Tech will be available for sale from November 2012. Pricing
will be announced at that time.
More detailed information is available at:
If you have further questions, please contact:
Marcus Benér | Marketing Executive, Poseidon Diving Systems
Tel: +46 708 776 688
Poseidon Diving Systems AB
Poseidon was founded by divers, for divers.
When Ingvar Elfström launched the world’s first series manufactured
single hose regulator in 1958 it became an immediate sensation.
The company currently has over 2,000 sales agents worldwide. Its
headquarters and manufacturing are located in Gothenburg, Sweden.
24 DIVERS FOR THE ENVIRONMENT, JUNE 2012
NEWS
FEATURE CREATURE
ACROPORA DOWNINGI
FEATURE IUCN RED LIST 2011 BY IUCN, PHOTOGRAPHY PHILIPPE LECOMTE
Local Species in the IUCN Red List 2011

Red List Category & Criteria: LEAST CONCERN

Scientific Name: Acropora downingi

Justification: This species has a relatively restricted distribution and is common. It is particularly susceptible to disease, crown-of-thorns starfish predation and extensive reduction of coral reef habitat due to a combination of threats. However, its distribution is in areas where reefs have not suffered as serious declines as in other regions. Specific population trends are unknown, but population reduction can be inferred from declines in habitat quality based on the combined estimates of both destroyed reefs and reefs at the critical stage of degradation within its range. Its threat susceptibility increases the likelihood of it being lost within one generation from reefs at a critical stage. The estimated habitat degradation and loss of 19% over three generation lengths (30 years) is the best inference of population reduction and does not meet the threshold for any threatened category; the species is therefore listed as Least Concern. It will be important to reassess this species in 10 years’ time because of predicted threats from climate change and ocean acidification.

Geographic Range: This species occurs in the Red Sea and the Gulf of Aden, the north-west Indian Ocean and the Arabian/Iranian Gulf. The northern Red Sea from Rabigh to the Sinai Peninsula escaped most of the bleaching and mortality of the last couple of decades. Destroyed and critical reefs are only 6% of the total.

Native: Bahrain; Djibouti; Egypt; Eritrea; Iran, Islamic Republic of; Iraq; Israel; Jordan; Kuwait; Oman; Qatar; Saudi Arabia; Somalia; Sudan; United Arab Emirates; Yemen

Population Trend: Decreasing. This is a common species. There is no species-specific population information available. However, there is evidence that overall coral reef habitat has declined, and this is used as a proxy for population decline for this species. This species is particularly susceptible to bleaching, disease, and other threats, and therefore population decline is based on both the percentage of destroyed reefs and critical reefs that are likely to be destroyed within 20 years.

Habitat and Ecology: This species occurs in shallow, tropical reef environments, on shallow margins of fringing reefs and submerged reef patches. It is found from 1-10m.

Major Threat(s): Members of this genus have a low resistance and low tolerance to bleaching and disease, and are slow to recover. Acanthaster planci, the crown-of-thorns starfish, has been observed preferentially preying upon corals of the genus Acropora. The numbers of diseases and coral species affected, as well as the distribution of diseases, have all increased dramatically within the last decade. Coral disease epizootics have resulted in significant losses of coral cover and were implicated in the dramatic decline of acroporids in the Florida Keys. In the Indo-Pacific, disease is also on the rise, with outbreaks recently reported from the Great Barrier Reef, Marshall Islands and the northwestern Hawaiian Islands. Increased coral disease levels on the GBR were correlated with increased ocean temperatures.

Conservation Actions: All corals are listed on CITES Appendix II. Parts of the species’ range fall within Marine Protected Areas.

Source: Aeby, G., Lovell, E., Richards, Z., Delbeek, J.C., Reboton, C. & Bass, D. 2008. Acropora downingi.
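The listing logic in the Justification can be sketched in a few lines. The 30/50/80% reduction cutoffs below follow the standard IUCN criterion A thresholds, but this toy function is purely illustrative and is not an official assessment tool (real assessments weigh several criteria together):

```python
# Illustrative sketch: compare an estimated population reduction over
# three generations against IUCN Red List criterion A thresholds.
THRESHOLDS = [(80, "Critically Endangered"), (50, "Endangered"), (30, "Vulnerable")]

def red_list_category(reduction_percent):
    """Return the category implied by a % reduction over three generations."""
    for cutoff, category in THRESHOLDS:
        if reduction_percent >= cutoff:
            return category
    return "Least Concern"  # below every threatened threshold

print(red_list_category(19))  # Acropora downingi's estimated 19% loss -> Least Concern
```

The estimated 19% loss falls below the lowest (30%) threshold, which is why the species lands in Least Concern despite a decreasing trend.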
CORAL NEWS
CAN NOISY REEFS ATTRACT MORE FISH AND CRUSTACEANS?
FEATURE Julius Piercy, University of Essex and Dr Stephen Simpson, University of Bristol
IS THERE A FUTURE FOR CORAL REEFS IN ACID OCEANS?
feature Dr David Suggett
Senior Lecturer in marine & freshwater biogeochemistry, Assistant Director of the Coral Reef
Research Unit, University of Essex
The sounds of coral reefs can be recorded in the field using an underwater microphone known as a hydrophone. Apart from the hydrophone, the rest of the recording equipment is
far from waterproof. Recording off Hoga Island (2007), by Dr Stephen Simpson.
A study on sound recordings of reef noise
from different habitats has revealed that the
highest quality reefs are also the noisiest,
potentially attracting more larval recruits using
sound to orient towards reefs.
Nearly all fish and decapod crustaceans
associated with reefs spend their larval stage
in the open ocean after being broadcast from
the reefs as eggs or hatchlings. They soon
develop strong swimming abilities which allow
them to counter the effect of sea currents and
choose the direction in which to swim and
eventually return to the reef.
The precise reason why the larval stage is
spent in the open ocean is still under debate,
but generally it is agreed that this strategy
ensures that the larvae are far from the
many reef associated predators during this
vulnerable stage.
However, this strategy can only be beneficial if
some of the larvae are able to return to the
reef – not an easy task in the vast expanse of
the ocean. Over recent years it has become
clear that larvae use their sensory abilities to
home in on a reef and two senses in particular
have emerged as the most likely candidates.
Experiments have shown that larvae can be
attracted to the odour and the sound of a
reef, both of which have the potential to be
detected over distances up to 20 kilometres.
Despite the importance of this phenomenon
in determining population dynamics across
reefs, there is still very little known about the
sensory cues produced at the reefs, how they
propagate through the environment and the
actual sensory abilities of the larvae.
The sound of a reef
Like cities, reefs concentrate a lot of life in a
small area and this, again like cities, makes them
very noisy places.
Each reef also has its own signature sound and
our recent work using recordings of reefs of
similar size in the Philippines has found that
the reefs within three different well-managed Marine Protected Areas (MPAs), protected for the previous 10 years, had significantly higher sound levels at the source (average sound intensity of 133.1 ± 2.2 dB re 1 µPa) compared to three overfished, macroalgae- and urchin-dominated reefs (average sound intensity of 122.0 ± 1.2 dB re 1 µPa).
The clear difference between recordings
from different habitats may empower the
fish and crustacean larvae not only to detect
the location of the reef but to discriminate
between good and bad reefs.
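Because decibels are logarithmic, the roughly 11 dB gap between healthy and overfished reefs reported above corresponds to a large difference in acoustic intensity. A quick back-of-envelope conversion (illustrative arithmetic only, not from the study):

```python
# Rough sketch: convert the article's source levels (dB re 1 µPa) into an
# intensity ratio. Every 10 dB is a tenfold change in acoustic intensity.
healthy_db = 133.1     # mean for well-managed MPA reefs
overfished_db = 122.0  # mean for overfished reefs

intensity_ratio = 10 ** ((healthy_db - overfished_db) / 10)
print(f"Healthy reefs are ~{intensity_ratio:.0f}x more intense acoustically")
```

An 11 dB gap works out to roughly a thirteen-fold difference in intensity, which helps explain why larvae might plausibly discriminate between the two habitat types.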
This finding is important for the way we
manage Marine Protected Areas (MPAs),
underlining how the acoustic signature of
the reef will also need to be considered if
we want to improve the efficacy of an MPA.
It also opens up the possibility of surveying
and monitoring reef quality rapidly and cost
effectively in the future.
Our future work on Hoga Island aims to identify
if the difference in sound levels with habitat
quality can be detected on smaller spatial scales
to refine reef quality assessment surveys.
This will form part of a larger project
which aims to develop a detailed map of
the soundscape around Hoga Island up to
5km away from the reefs, combined with
behavioural experiments on fish larvae to
determine how they respond to different
reef sounds and over what distance they can
detect reef noise.
Original Publishers – Biodiversity Science
Ocean acidification microcosms incubating corals at the University of Essex

… dissolving into the oceans to form a very weak acid. Ocean pH has already decreased from ~8.2 at the start of the Industrial Revolution to a present day value of ~8.1; however, models predict this will further fall to ~7.6 by the year 2100.

OA will substantially limit the ability of fish to use their sense of smell to detect predators and locate the best sites for larval development.

Replicating conditions
In order to predict how OA will impact coral reefs, researchers have performed experiments in which key organisms are incubated under conditions that replicate elevated CO2 … over relatively short timescales (weeks), and they typically only examine changes of CO2 …

Some coral species can still successfully compete under ocean acidification conditions

… CO2 seeps creating reef sites with naturally elevated CO2/lower pH – for example the cool water CO2 seeps that fringe the D’Entrecastraux Islands, Papua New Guinea. Observations here have shown that hard coral cover is the same as for neighbouring sites at ambient CO2, but diversity is lower at the high-CO2 sites. … harvesting, is thus an obvious priority to give reefs their best chance against our rapidly changing climate.
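Since pH is the negative base-10 logarithm of hydrogen ion concentration, the projected drop from ~8.2 to ~7.6 translates into a change in acidity with one line of arithmetic (a rough sketch, not from the article):

```python
# Back-of-envelope sketch: a pH drop of 0.6 units means hydrogen ion
# concentration rises by a factor of 10**0.6, i.e. roughly four-fold.
ph_preindustrial = 8.2
ph_2100 = 7.6  # modelled value for the year 2100

acidity_increase = 10 ** (ph_preindustrial - ph_2100)
print(f"~{acidity_increase:.1f}x more hydrogen ions by 2100")
```

A seemingly small 0.6-unit shift therefore means ocean water roughly four times more acidic than pre-industrial conditions.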
Original Publishers – Biodiversity Science
RED SEA BUTTERFLYFISH RESPONDS TO CHANGING CORAL COVER
feature Philipp Gassner, Dennis Sprenger and Nils Anthes
Institute of Evolution and Ecology, Faculty of Sciences, University of Tuebingen
3RD PUNTA SAYULITA SURF CONTEST
A WINNER FOR REEF CHECK
REEF CHECK
Chaetodon austriacus by Dennis Sprenger
A new study into changing Red Sea coral
and its effects on the butterflyfish shows a
significant variation in behaviour. The research
found increased feeding rates, aggressive
encounters and territory sizes where there
was lower coral cover, which could be an
informative bio-indicator.
Red Sea coral reefs exhibit substantial
ecological, economic, and cultural functions.
The stability of coral reef ecosystems, however,
has been challenged in the last decades by
anthropogenic impacts through tourism,
nitrification, elevated atmospheric CO2 input,
and globally rising water temperatures. These
threats have generated rising awareness that
substantial management efforts are required
to maintain coral reef ecosystems worldwide.
While knowledge about anthropogenic
impacts on coral communities such as the
coverage of living coral and other substrate
is rife, indirect impacts via coral growth on
species at higher trophic levels within the
community remain much less understood.
Corallivorous butterflyfish (Chaetodontidae)
directly rely on the availability of live coral food
and may thus be strongly affected by changes
in coral reef condition. Their abundance is
known to tightly correlate with the spatial
distribution of specific coral species.
The blacktail butterflyfish
This study supplements current knowledge
on the effects of changes in coral cover on
butterflyfish using the Blacktail Butterflyfish
Chaetodon austriacus as a study system. The
species is highly abundant throughout its
range, strictly corallivorous, and shows diurnal
activity, pronounced site fidelity, and strong
territoriality.
We specifically investigated the link between
small scale field-variation in live coral coverage
and three target variables: feeding activity,
territory size, and intra-specific aggression.
Field observations were conducted at the
fringing reefs at Mangrove Bay (Sharm Fugani,
Egypt). Data were collected at 0.3 to 5m depth
while snorkelling along the reef-flat, reef-crest,
and reef-slope. Territories in deeper water were
not taken into account since depth is assumed
not to alter the behaviour of C. austriacus.
Corallivorous butterflyfish directly rely on the
availability of live coral food and may thus be
strongly affected by changes in coral reef conditions.
Analogous to other studies, the behaviour of
a single focal individual within each pair was
recorded, assuming that the behaviour of one
individual is representative of both. Each focal
was recorded for 30 minutes while maintaining
a minimum and apparently non-disturbing
distance of 2m. Feeding rate was recorded as
the total number of feeding bites per individual
on living coral. Aggressive encounters were
defined as rapid and directional movement
towards conspecifics. The total number of
aggressive encounters per individual during
30 minutes was used to quantify the level of
agonistic aggression. Territory size of each pair
was assessed based on hand-drawn territory
boundaries, defined as the polygon joining the
outermost locational observations within a
one-hour period as localised using prominent
features of the reef landscape. The fish typically
patrolled their almost circular territories
whilst foraging, with pairs moving along their
territory border and completing several
‘territory circuits’.
Proportional coral cover was quantified
using the Quadrat Grid Transect method.
For each recorded focal fish, the two by two
metre grid was placed at a single spot within
the territory that appeared representative
for the overall occurrence of the three
differentiated substrate categories. At each
of 121 grid intersections, the reef surface was
then categorized in living coral versus dead
coral (bleached and/or covered by algae) and
other biogenic substrate. This enabled the
proportion of live coral cover to be calculated.
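The tally described above amounts to a simple proportion over the 121 grid intersections. A minimal sketch (the category labels here are illustrative, not the authors’ exact field codes):

```python
# Sketch of the Quadrat Grid Transect tally: each of the 121 grid
# intersections is assigned one substrate category, and live coral cover
# is the proportion of "live" points.
from collections import Counter

def live_coral_cover(points):
    """points: list of category strings, one per grid intersection."""
    counts = Counter(points)
    return counts["live"] / len(points)

# Example: 55 live coral, 40 dead coral, 26 other biogenic substrate
sample = ["live"] * 55 + ["dead"] * 40 + ["other"] * 26
print(f"{live_coral_cover(sample):.1%}")  # 45.5%
```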
Data were normally distributed, and regression analysis was used to define the relationship between coral cover and behavioural response variables.
Results and discussion
Field observations revealed a negative
relationship between live coral cover and
feeding rate (Fig a). Moreover, as predicted,
both territory size (Fig b) and the number of
agonistic encounters (Fig c) decreased when
living coral cover increased.
Our study documented feeding rates and
aggressive encounters in unmanipulated
environments, where fish had time to adapt
their behaviour to the given set of conditions.
The observed intensified competition for space
is likely to be affected by the need to enlarge
territory size. We presume that low coral cover
drove fish to cross the determined territory
boundaries more often to compensate for the
decreased food availability within their own
territories. Since all observed territories were
directly adjacent, this behaviour resulted in a
greater territory overlap and thus in more
aggressive interactions.
Indicator species
Combined, our findings show that feeding
rate, territorial behaviour, and territory size
of C. austriacus substantially vary with live
coral cover. Our study thus exemplifies the
indirect impact of variation in coral cover on
higher trophic levels in coral reef communities.
Moreover, linking the behaviour of C. austriacus
to coral cover, the reported findings validate
the earlier proposition that this species serves as an informative indicator species
for monitoring schemes in Red Sea coral reef
ecosystems. Specifically, longitudinal studies
that find increasing feeding rates, territory
sizes, and agonistic interactions in C. austriacus
would strongly indicate gradual degradation of
the coral reef community.
ACKNOWLEDGEMENTS
We wish to thank David Righton for suggestions
on an earlier version of this manuscript. M
Herberich, R Ratzbor, and C Zell supported
coral cover surveys. We are further grateful to
Ducks Dive Centre at Mangrove Bay Resort
for logistic support.
HAITI ECODIVERS LEARN TO DIVE
feature Nikole Ordway
Reef Check EcoDiver Course Director, Ft Lauderdale, Florida
NEW TRAINERS CERTIFIED IN THE BAHAMAS

REEF CHECK PARTICIPATES IN BOSTON INTERNATIONAL SEAFOOD SHOW

REEF CHECK PARTNERS WITH ONE WORLD ONE OCEAN CAMPAIGN

RECENT STUDY IN DR SHOWS BENEFITS OF MPA MANAGEMENT IN LA CALETA

FORCE (Future of Reefs in a Changing Environment) recently released a preliminary report on their 2011 survey of reefs in the Dominican Republic. Their study showed that reefs in the Dominican Republic may improve if regulations are set similar to La Caleta, an area protected from fishing and anchoring, and co-managed by Reef Check Dominican Republic since 2007.

The FORCE project uses an ecosystem approach that links the health of the ecosystem with the livelihoods of dependent communities, and identifies the governance structures needed to implement sustainable development. The overall aim of FORCE is to provide coral reef managers with a toolbox of sustainable management practices that minimize the loss of coral reef health and biodiversity.

Reef communities were surveyed at 10-15m depth in 15 locations during June 2011. The highest mean coral cover per site was found at La Caleta (43%) while the lowest coral cover was observed at Sosua (10%). La Caleta also had the highest number (46) and density (8.2 individuals per m²) of coral recruits.

Over 4 kilometers of reef were surveyed for fish. Cayo Arena had the highest fish abundance, while Sosua had the lowest. However, mean fish species richness was highest in La Caleta (average of 29 species per transect), with the lowest again in Sosua (9 species per transect). Fish surveys showed that commercially important species (groupers, snappers and grunts) were in lower abundances when compared to non-target fish species (parrotfish and butterflyfish). These results are consistent with the current protection level of this area, which is non-existent.

Overall, fish communities were healthiest in protected areas such as La Caleta or remote areas such as Cayo Arena. La Caleta also had the healthiest bottom communities, with high coral cover and high sponge diversity. The only lobster and conch counted in the Dominican Republic were also within La Caleta Reserve.
SCIGIRLS EPISODE WINS AN EMMY AWARD

REEF CHECK SPOTLIGHT:
SHARK CONSERVATION IN THE BAHAMAS
feature Krista Sherman
GEF FSP Coordinator, Bahamas National Trust

Sharks are an extremely diverse group of marine animals that can be found in various habitats worldwide. Sharks belong to the class Chondrichthyes, subclass Elasmobranchii, which contains 12 orders, three of them extinct, with some 1,100 described species. Chondrichthyes are cartilaginous fish characterized by the presence of five or more gill slits, paired fins, a true jaw and nostrils. There are approximately 500 shark species ranging in size from the 27cm pygmy shark Euprotomicrus bispinatus to the 21m whale shark Rhincodon typus. Collectively, sharks have played an instrumental role in marine ecosystems for over 400 million years, as evidenced by fossil records of the Devonian and possibly lower Silurian. However, because of their K-selected life history strategy (i.e. slow maturation and reproduction, producing few viable offspring) and increasing anthropogenic pressures, they are extremely vulnerable and susceptible to overexploitation. Some estimates report that global shark populations have declined by as much as 80% within the last 20 years. Additionally, the International Union for Conservation of Nature (IUCN) Shark Specialist Group (SSG) lists 30% of shark and ray species as threatened or near threatened with extinction.

Increased awareness about the impact of global shark fisheries, habitat destruction and the combined effects these will have on the marine environment and economy has improved collaborations between scientists, conservationists and government officials. The Bahamas National Trust (BNT), established in 1959, is mandated with conserving both natural and historic resources in The Bahamas and is the only non-governmental organization in the world mandated to manage a country’s entire national park system. BNT’s vision is to create a comprehensive system of national parks and protected areas, with every Bahamian embracing environmental stewardship. This vision has driven and continues to drive the organization to establish new parks, engage in community outreach and promote conservation, education and research in The Bahamas.
Bahamian shark populations are relatively
healthy when compared to other parts of
the world, which is due in part to the 1990s
longline commercial fishing ban. However,
to ensure that shark populations within
The Bahamas remain healthy, in 2010 BNT
partnered with the PEW Environment Group
to launch a national “Protect the Sharks of The
Bahamas” campaign to ban the commercial
sale and trading of sharks and shark products
within the country’s exclusive economic zone.
The campaign launched in May 2010 with
participants including government officials,
representatives from NGOs, scientists, dive
tour operators, conservationists, media
and other key stakeholders. The benefits
of maintaining diverse and abundant shark
populations to sustain healthy ecosystems
and the associated economic benefits through
dive-related tourism (valued at approximately
$78 million per annum to the Bahamian
economy) were highlighted. BNT partnered
with PEW and local NGOs to raise public
awareness on the global status of sharks
through education and outreach programmes.
A series of presentations, public meetings,
community walk-throughs and outreach
through social network forums and the media
occurred during 2010-2011. More than 5,600
Bahamians signed handwritten petitions asking
the government to “prohibit commercial
fishing and selling of any shark or shark
related products within the Commonwealth
of The Bahamas”. In July 2011, the Bahamian
Government created an amendment to
the Fisheries Resources (Jurisdiction and
Conservation) Act (Chapter 244) to prohibit
commercial shark fishing along with the sale,
importation and export of shark products
within 630,000 km² (243,244 mi²) of its waters.
This marked another huge accomplishment
for The Bahamas, which now protects over 40
known shark species. Shelley Cant, BNT shark
campaign manager stated, “This new legislation
has established The Bahamas as the regional
leader for shark conservation”.
Decades of scientific research on sharks in The Bahamas have been used to assess
their diversity and abundance and address
deficiencies pertaining to their life history
characteristics, diet, behaviour and distribution.
Continued advancements in research
combined with local capacity building, fisheries
regulation enforcement and improved public
awareness will lead to better conservation
management. An ecosystem based approach
will undoubtedly be most effective to sustain
the diversity and function of sharks within
marine ecosystems.
REEF CHECK SPOTLIGHT: WHY IS DIVER MONITORING SO IMPORTANT TO MANAGE REEF FISHERIES?
FEATURE Dr. Jan Freiwald, Reef Check California Director
Many species of fish gather together in one
area to spawn and reproduce. Smart fishers
can target these areas and times and reap a
high catch rate. Unfortunately, this can lead to
rapid over-exploitation of these fisheries due
to the large number of mature (reproductive) fish removed from the population before they have a chance to reproduce. In addition, if only fish-catch data are used by managers, a decline in population size can be hidden from them for some time. Therefore it is important for
fisheries managers to have access to what is
called “fisheries independent” data such as
the monitoring results carried out by Reef
Check divers. Reef Check data is fisheries independent because the actual numbers of fish on rocky reefs are counted – in comparison to “fisheries dependent” data such as total catch.
A recent study by Brad Erisman et al., (2011)
documented this problem in two southern
California fisheries – the barred sandbass
(Paralabrax nebulifer) and the kelp bass (P. clathratus). Both these species aggregate during spawning, and the commercial fisheries were closed in 1953 because of concerns about potential overfishing. But the annual catch from recreational fishing remained stable or increased over a 30-year period through the 1990s, apparently indicating no problems. In fact, the actual population sizes of these two species declined dramatically, by about 80%, during this period, based on diver surveys of actual numbers of fish on reefs.

The fact that the catch remained the same for such a long time period is due to the fish being targeted at high-density aggregations and therefore being caught in high numbers even if overall population density is declining. Since the majority of the annual catch of these species is landed during these spawning aggregations, it creates the impression of a sustainable fishery. This effect is termed hyperstability, meaning that the fishery seems to be stable while in reality the populations are declining. The data based on the fishing effort and annual catch did not reflect the true signal of population decline. The authors state that fisheries dependent data “created the illusion that harvest levels for both species were sustainable and stock abundances were stable”. Based on this information, resource managers maintained the same catch levels and did not adjust their management strategy because the true decline of the populations was hidden from their ‘view’.

This study demonstrates the importance of fisheries independent data collection to gain insights into the population dynamics of exploited species. Without diver surveys or other independent measures of population density or biomass, the decline of these two species in southern California would not have been detected. Reef Check is monitoring both of these species in southern California and is working with fishers in Baja to develop sustainable fisheries for other aggregating species, such as groupers found along the Baja peninsula. Unfortunately, many open-water fisheries are very difficult to directly monitor. The lack of fisheries independent data is one reason why 85% of the world’s fisheries are considered overfished or have collapsed.

VOLCANO POSES UNIQUE THREAT TO MONTSERRAT’S CORAL REEFS
FEATURE James Hewlett, Reef Check Montserrat Coordinator and EcoDiver Course Director

On January 6, 2012, a group from Finger Lakes Community College (FLCC) in New York arrived in the Caribbean nation of Montserrat to continue their work on a reef research project as part of the ongoing Research Integrating Molecular and Environmental …
WORKING FOR BETTER REEFS AND A BETTER FUTURE
IN AMED, NORTH BALI, INDONESIA
feature Jennifer Willis, Reef Check Indonesia
FEATURES
NATURE WILL FIND A WAY
FEATURE Warren R. Baverstock & David P. Robinson
‘Nimr’ the first daughter of Zebedee is now nearly four years old and as big as her mum
Zebra shark pups after hatching
Feeding the pups at Burj Al Arab aquarium’s quarantine
facility
The Burj Al Arab aquarium staff preparing the zebra shark eggs for incubation

In 2007, our female zebra shark, Stegostoma fasciatum, ‘Zebedee’, started to lay eggs in the Al Mahara aquarium located in the Burj Al Arab. Zebedee was introduced to the aquarium in 2001 as a juvenile and has since had no contact with a male of the same species. It is not unusual for female sharks to lay eggs in aquariums, even when there is no male present to fertilise them, but they are normally discarded by aquarium staff as infertile. What is unusual in this scenario is that some of the eggs that Zebedee laid developed embryos, even though there was no male to fertilise them!

Since 2007 Zebedee has produced eggs on an almost annual basis and from these eggs zebra shark pups have been hatched. This reproductive process is called parthenogenesis and Zebedee is the first shark of her species to be confirmed reproducing via this method. It is also the first time that successive parthenogenesis has been seen to occur in any shark species.

Parthenogenesis, which comes from the Greek parthenos, meaning ‘virgin’, and genesis, meaning ‘birth’, takes place when the female’s egg cells double their genome and then split in two. The process involves egg cells taking on the role of the male sperm, effectively fertilizing the other egg as they merge back together to produce an embryo with two sets of chromosomes from the mother.

A zebra shark egg in the wild attaches itself to corals and substrate by sticky fibrous threads

Zebedee lays on average 40 eggs per cycle over a period of two to three months. We have so far hatched 21 pups since 2007, eight of which are still alive. Zebra shark pups are notoriously hard to rear; only a few facilities around the world have successfully managed to raise them to adulthood. All of the pups are female, as there is no paternal genetic contribution made during parthenogenesis. For the first couple of years we had a low success rate with rearing the pups. As the years have progressed, we have increased our knowledge regarding zebra shark nutrition and husbandry and, in 2011, we had a 100% success rate with our latest batch.

Our oldest pup ‘Nimr’ is now nearly four years old and is swimming around the Al Mahara with her mum. What makes Zebedee important is that until now, other examples of parthenogenesis in sharks have been ‘one-off’ occurrences and the majority of pups have died. Zebedee is producing offspring that are surviving on an annual basis, suggesting that parthenogenesis is indeed a viable method of reproduction for sharks. Parthenogenesis has been genetically confirmed in an aquarium setting for the bonnethead shark (Sphyrna tiburo), white spotted bamboo shark (Chiloscyllium plagiosum) and blacktip shark (Carcharhinus limbatus). From the increasing number of shark families and species seen to be able to reproduce via parthenogenesis, we can speculate that it is probable that most shark species, if not all, possess this capability.

Proving to the scientific community that parthenogenesis had occurred turned out to be a challenge. There were several possible causes for the reproduction that we first had to disprove before we could confirm parthenogenesis had occurred. We knew that Zebedee could have possibly stored sperm and there was also the possibility of hybridization with the male blacktip reef shark (Carcharhinus melanopterus) that she was housed with.

We worked closely with Dr Kamal
Khazanehdari, the head of molecular biology
and genetics, at the Central Veterinary
Research Laboratory in Dubai. We confirmed
that parthenogenesis took place through the
DNA analysis of some of Zebedee’s offspring.
All of the pups that were tested displayed
elevated homozygosity relative to Zebedee
and had no apparent paternal genetic
contribution, which ruled out both sperm
storage and hybridization.
Whether Zebedee’s offspring will be able to
produce pups of their own is yet to be seen;
these sharks are not clones as they differ
genetically from each other and from the
mother. We know from the post mortem
examinations of deceased pups that they have
perfectly formed and normal reproductive
systems and so we are very excited about
the next stage of research which will be to
pair them with males to see if they reproduce.
There is absolutely no reason, genetically,
developmentally or otherwise why these pups
will not be able to reproduce.
Parthenogenesis is nothing to be concerned
about and has never been recorded in wild
populations of sharks; what is interesting is the
discovery that they can do it!
The Al Mahara aquarium in the Burj Al Arab is home to ‘Zebedee’ and ‘Nimr’
As long as female sharks have access to male sharks, it is doubtful
it will ever occur in wild populations, although,
to our knowledge, nobody has actively looked
for it. Males are certainly not disposable by any
means as they keep the genetic diversity of a
population healthy. If all the male sharks were
removed, it is highly unlikely any population
would remain healthy. Parthenogenesis is
however a handy ability for female sharks
to possess and may go some way to explain
their evolutionary success and remarkable
adaptability.
In November 2011, the findings of our research
were published in the Journal of Fish Biology
and from that, coverage was generated around
the world, including National Geographic and
the BBC.
For further information about the zebra shark
story, the research paper is available online
directly from the Journal of Fish Biology or you
can contact us at: baaaquarium@jumeirah.com.
Zebedee and Nimr can be seen by visiting the
Al Mahara restaurant in the Burj Al Arab.
38 DIVERS FOR THE ENVIRONMENT, JUNE 2012
FEATURES
The Shark Whisperer
FEATURE Chantal Boccaccio PHOTOGRAPHY EDDY RAPHAEL
“From birth, man carries the weight of gravity on his shoulders. He is bolted to earth. But man has only to sink beneath
the surface and he is free.”
Jacques Cousteau
“Watching her with the sharks, it almost seems like
certain sharks enjoy the sensation and nuzzle into her lap
for attention.”
In these incredible photographs, a diver recalls the moment Cristina brought a Caribbean Reef shark under control. He said: “My first time to witness Cristina feeding the sharks was
amazing. I expected an adrenaline rush, but the dive was so peaceful and calm. It was totally relaxing to watch the sharks swim slow circles around us in hopes of being fed by Cristina.
I was in awe and could not keep the smile off my face. She’s been working with sharks for more than 15 years. She’s incredibly comfortable around them and that calmness seems to
translate to the sharks as well.”
Cristina Zenato is an enigma; a quiet symphony
of fire and passion wrapped in a little girl’s
body. A world-renowned diver, mentored by
diving legend Ben Rose, Zenato defies any
sort of traditional labeling, as she’s undeniably
one of a kind, as well as a tireless champion
for shark awareness.
This pint sized Italian, part ballerina, part fish
out of water, has the ability to coax – what
some might call – man’s most feared predator,
literally, into the palm of her hand. But don’t
call them Predators to her face, because to
Cristina Zenato, they’re simply “family.”
Zenato induces a “tonic” state in the shark, in
effect hypnotizing it, by rubbing the ampullae
of Lorenzini – the name given to hundreds
of jelly-filled pores around the animal’s
nose and mouth. The pores usually act as
electroreceptors for the shark to detect
nearby prey, but when gently rubbed they
bring on a natural paralysis, which can last for
up to 15 minutes. To the observer, this looks
like the shark has fallen asleep right in her lap.
Zenato’s ability to work with sharks in this
manner has enabled her to study up close, in
the wild, a mysterious world very few will ever
encounter.
As a precaution, however, Zenato wears
a chain mail suit. Sharks have rows of razor
sharp teeth and a powerful bite. The chain
mail is designed to keep those teeth from
penetrating the skin if the shark bites down
on a diver.
Hers is certainly not a traditional work week.
With over 17 years’ experience and a daily log
of shark diving activities, rescues and behavioral
study, Cristina Zenato is the First Lady of shark
behavior, DNA sampling and migratory patterns, as
well as a leader in shark conservation. Cristina
teaches shark awareness and trains shark
professionals all over the world.
A passionate advocate for marine life, her
genuine nature betrays a love affair with the
ocean, and its inhabitants, that most of us only
speak of; few of us dare to “put our money
where our mouths are”; few dare to brave
that mostly unknown world that Zenato
inhabits on a daily basis.
“Sharks are an endangered species,” Zenato
explains, “but they are a very important
part of our eco-system, and they are so
misunderstood.”
“The only time sharks make it into the news,”
Zenato maintains, “is when someone has been
injured. The only time you see them on TV –
is during SHARK WEEK. I don’t want my story
to be told like that. For me, my story with
the sharks, and what we do together, is the
opposite. There is a peacefulness. I sense that
they trust me, and they know that I trust them.”
Zenato speaks with a soft accent that’s hard
to place, as so many places have left a hand
upon her heart. Born in the African Congo,
she recalls: “My tremendous passion for the sea
surfaced at a young age, and then I followed my
love for the ocean. I journeyed to the Bahamas,
where I found my calling…” Zenato smiles at
the memory, because it was there she met Ben
Rose, who changed her life…
The legendary Ben Rose was a pioneer in
marine identification and discovered the
underwater cave and cavern system located in
the Lucayan National Park. Ben’s Cave is world
renowned and named after the man who
discovered this natural treasure.
It was Rose who taught her how to feed and
handle sharks, and from there her passion to
study shark behavior was inflamed.
Now from the Bimini Shark Lab, South Africa,
North Carolina, Florida and Mexico, Cristina
reports for newsletters about sharks, cave
diving and training, having observed first hand
the behaviors of Great Whites, Tigers, Lemons,
Reefs, and Bulls.
It was Ben Rose who first trained her in the
techniques of tonic immobility; from there, she
expanded the practice to remove hooks from
sharks’ mouths, to remove parasites, and to
work her Awareness Campaign against shark
finning and capture, for shark protection, as
well as human education.
In 2000, Zenato used her own time and money
to train in Florida to become a Full Cave
Diving Instructor. She’s the recipient of the
Platinum Pro Award 5000 from Scuba Schools
International and a member of the Women Divers
Hall of Fame.
For 17 years, she’s worked.
All of these feats would be enough for most,
but not for Zenato, as they pale in comparison
to her passion for studying sharks and
instructing the public about shark awareness.
The sharks at her home in the Bahamas
instinctively recognize her gentle spirit, and
warm to her touch. Visitors at the Shark
Dive at UNEXSO are encouraged to feel the
sharks’ skin while in their calm state, allowing
them to dissolve any misconceptions or
preconceptions they may have had about shark
life. She teaches interested divers to feed the
local Caribbean Reef sharks by hand, hoping
to bring people closer to understanding the
secret world of these amazing creatures.
Zenato’s astounding ability to lull the ocean’s predators
into a trance-like state, allows her to literally hold what
some consider the world’s deadliest animals in the palm
of her hand.
Her techniques have allowed her to globally share
behavioral data, tend to injured sharks, extract DNA
and engage in rescues that might otherwise prove too
precarious.
Freelance journalist Dia Osborn concurs.
Osborn offers “what fascinated me most was
what happened in my brain while I watched.
I swear I could feel it rewiring. Some deep and
unquestioned prejudice against sharks took a
hit here, big time.” For Cristina Zenato, this is
Shoot, Score! As it is her life mission to dispel
the myths ingrained in our culture about sharks
while portraying them in a new light. “Sharks
are perhaps the most feared, maligned and
misunderstood species on the planet,” Zenato
maintains, “they are also a crucial component
of our ocean’s ecosystem and many of their
kind are now critically endangered.” Her raison
d’être is to instill public awareness of the plight
and danger of extinction these elegant and
amazing marine creatures face.
A sense of who Cristina Zenato is can only truly
be felt underwater. There, she is more at home
than she is on land. There, this enigma, this pint
sized ballerina of the sea, is able to realize her
life-long aspiration: She dreamed of swimming
with sharks. Now she dances with them.
Currently, the team behind the award-winning
PBS series CUISINE CULTURE is making a
documentary about this incredible woman.
The filmmakers have collectively worked on
series for A&E, National Geographic Channel,
ShowTime, Tru TV and many other networks.
SHARK WHISPERER is a film whose goal is
to spread public awareness about the plight
of sharks, and the amazing work of Cristina
Zenato. They are currently raising funds for
the documentary through Kickstarter crowd
funding. Interested parties can go to this link
to learn more about the project, and help by
passing this on to anyone who you think can
assist in the fundraising. There are excellent
rewards for their Financial Angels – including
the opportunity to swim with Cristina and
her sharks in the Bahamas! Any help is greatly
appreciated! Together, we can make a difference!
LINK TO THE PROJECT AND ZENATO’S
AMAZING VIDEO:
shark-whisperer
Taking a Second Look: Is There a Full-Face Mask in Your Future?
FEATURE Robert N. Rossier
Nothing in diving is more commonly recognized
than the mask. Despite the differences seen
throughout the spectrum of masks, they all
work pretty much the same way. And when
you find one that fits your face and suits your
needs and style of diving, it can be a difficult
thing to part with. But as diving needs and
styles change, many find the advantages of a
full-face mask are worth a second look.
Advantages
Full-face masks offer a variety of potential
advantages, the most important of which
perhaps is the compatibility with a plethora of
highly effective hardwired and wireless underwater
communications equipment. Although voice
communication in diving isn’t limited to full-face
masks, many count it as an advantage.
The full-face option eliminates TMJ
(temporomandibular joint) syndrome and
sore jaws that come from clenching a
regulator mouthpiece between your teeth.
While most divers seem to adapt well to
breathing through the mouth, others may find
that normal breathing through the nose with
a full-face mask is a much more comfortable
proposition.
When one dives in cold water, the full-face
mask provides additional comfort. In
conjunction with a wet or dry hood, the
full-face mask keeps the cold water off the
face and can dramatically improve overall
thermal protection. This could translate into
a reduction in air consumption and extended
bottom time. However, I have found no clear
data to support the claim, and some sources
associate higher air consumption with full-face
masks – particularly those that operate
at positive pressure.
Full-face masks also afford a much higher
level of protection when divers operate in
contaminated, polluted or otherwise suspect
waters. Finally, a full-face mask allows an
unconscious diver to keep breathing. Some
divers who perceive a higher risk of oxygen
toxicity for their particular dive operation or
profile (including nitrox and mixed breathing
gases) favor a full face mask for just that
reason, but the same logic applies to other
forms of wreck, cave and technical diving.
The Downside
Full-face masks also have a downside. First, you
can expect to pay 10 times as much, or more,
for a full-face mask as you would for a standard
dive mask. Factor out the cost of a regulator
second stage (many full-face masks come with
integral regulators), and the apparent price
differential becomes more palatable. Still, it’s
expensive, and not likely to appeal to those
who dive infrequently. And since full-face masks
are much heavier (they have extra weight
built in to offset the increased buoyancy) and
bulkier than standard dive masks, they’re more
cumbersome when traveling.
But the real downside is that full-face masks
require a breadth of skill and knowledge
beyond that required for a standard mask.
Unless you’re willing to spend the time and
money to become proficient, you could soon
be in over your head – in more ways than one.
Training Issues
The full-face mask is a breed apart from
standard dive masks. Just a cursory look at the
details of construction will make it obvious that
such a mask requires additional training. Simply
putting on the full-face mask is different, with a
“spider” consisting of four or more independent
straps forming a system designed to keep the
mask secure and ensure a proper seal.
Some full-face masks are designed for easy
donning and doffing, but others can represent
a significant challenge. Depending on the type
of full-face mask used, even experienced full-face
divers can benefit from a second pair of
hands when they suit up.
Even the basic procedures for full-face mask
diving, such as clearing the mask and entering
the water, represent a departure from the
standard skills learned in basic scuba training.
Clearing a full-face mask presents a greater
challenge: this is due in part to the greater
volume and the internal configuration of the
mask. Clearing the ears can also differ with a
full-face mask, owing in part to the fact that an
oronasal pocket typically separates the mouth
and nose from the eye space within the mask.
While some full-face masks incorporate nose
pockets similar to standard masks, others use
“nose blocks” that sit against the ends of the
nostrils to allow the diver to clear his ears.
There may be no soft area to allow you to
pinch your nose. Another feature of many full-face
masks, a surface-breathing valve (SBV),
allows the diver to breathe on the surface
without consuming the precious compressed
breathing gas supply; better add that one to
your “must check” list before submerging.
Without a doubt, the biggest differences
in training come when we progress to the
emergency training portion of the program.
Obviously, the standard air-share strategies
used by divers with standard dive masks no
longer apply or need serious modification
when they wear a full-face mask. Some full-face
masks can be fitted with a redundant regulator
to minimize the risk of failure, and a bailout
bottle is typically used to cope with out-of-air
situations. Sharing air in the traditional manner
typically means ripping off the full-face mask,
taking an octopus and then donning a standard
mask. All of these require a higher standard of skill,
expertise and training.
While the full-face mask is a boon to cold-water
diving, the prospect of facing an
emergency in cold water adds another risk
factor. Sudden exposure of the face to cold
water can cause serious and perhaps even
debilitating discomfort. To counter this effect,
many instructors insist that their full-face
students acclimatize their faces to the cold
water before initiating a cold-water dive. Only
anecdotal evidence suggests the efficacy of
such procedures, especially with longer time
periods between the acclimatization and
exposure to frigid water, and I have found no
scientific evidence to support the claim.
Yet another potential disadvantage of the
full-face mask is the buoyancy factor. Full-face
masks typically offer a greater displacement
than standard masks. In addition to requiring
more weight to offset the increased buoyancy,
some divers find that the neck strain and
jaw fatigue caused by the increased mask
buoyancy is uncomfortable. The degree to
which this occurs depends on both the style
of mask, and the orientation of the diver in the
water (horizontal or vertical).
Tour de Force
Just as standard dive masks present a broad
spectrum of features, benefits, sizes and styles,
so do full-face masks. At one end of the
spectrum is the Cressi full-face mask, featuring
a molded rubber mask with two eyepieces
and an integral breathing tube designed to
mate with a conventional regulator, making this
entry-level full-face mask an affordable option.
Another variant of the full-face mask for
recreational divers is the Interspiro Divator, a
design derived from the world of firefighting.
Known also as an AGA (ah-ga) mask, it offers
a broad, curved faceplate and side-mounted
regulator that combine to provide enhanced
visibility. Unlike any standard dive mask, a diver
can operate the AGA mask in a positive-pressure
mode (a plus for contaminated
waters) as well as the normal mode.
One of the newer entries into the recreational
diving market is the Ocean Reef’s Neptune
II full-face mask. With a design based on
military Nuclear / Biological / Chemical (NBC)
protection masks, the Neptune II incorporates
a unique face seal designed to accommodate
a wide variety of facial shapes and sizes.
Ease of donning and doffing is a hallmark of
the Neptune II, which also has a standard
communication system. The Neptune can be
purchased with the standard regulator, or, to
keep the price within reason, can be fitted
with any number of manufacturers’ regulators.
The Kirby Morgan Dive Systems (formerly Diving
Systems International) EXO-26 is a standard
of the industry for commercial operations.
This top-of-the-line mask incorporates a
unique suspension system that offers custom
fit and comfort, an adjustable-flow regulator,
communications ports and oral-nasal skirt. Kirby
Morgan has a full line of commercial full-face
masks to suit most any need.
A variety of manufacturers also offer various
models of full-face mask with numerous
features and functions. Scubapro’s full-face mask
– similar in appearance to the EXO – sports a
redundant regulator port, molded nose pocket
and a variety of accessory plugs. Widolf also
offers a line of rugged full-face masks designed
for commercial and technical divers.
A Full-Face Future
As you move down the depth meter to the
realm of more advanced diving, keep in mind
the pros and cons of diving with a full-face mask.
Even if your old mask is a comfortable and
reliable friend, it may be worth taking a second
look at the full-face option. Who knows? There
could be a full-face mask in your future.
The Case for Voice Communication
It’s difficult to overestimate the importance
of good communication on a dive, and
adding effective voice communication to
the mix of available communication modes
certainly reduces some risk factors. Having
good underwater voice communications
cuts through the often murky and confusing
world of hand signals, allowing divers to
communicate even when they can’t see one
another. Moreover, a diver wearing a full-face
mask with communications capability can
more readily summon a buddy, or perhaps
even personnel on the surface, to assist with a
developing problem.
In some instructional settings, the use of voice
communication can increase the efficiency
of the learning situation. According to noted
educator Sandra F. Rief of the Center for
Applied Research in Education, in West Nyack,
N.Y., students retain 10 percent of what they
read, 20 percent of what they hear, and 30
percent of what they see. However, they
retain 50 percent of that which is both seen
and heard. Although a diver’s ability to sort
out problems on his own (i.e. without voice
communications) is a key safety skill, some
types of underwater instruction – such as
marine biology and species identification – can
greatly benefit from the use of underwater
voice communication. Other types of diving
that require close coordination of dive
team members may also benefit greatly
from the application of underwater voice
communication.
Who says teckies HAVE to wear BLACK?
FEATURE TRACEY WARREN
The problem with plastic
Feature Leanne King
What Lies Under Ferdi Rizkiyanto – 2011
My husband came home one day last Autumn
and said “guess what? We are both doing our
rebreather training.” “WHAT! You have done
what? Booked us on another dive training
course?” Ummm, yep, he had. Both the basic
and advanced rebreather course at Atlantis
Dive Centre in Dubai.
OMG, all I could think about was those guys
with those massive yellow boxes, all wearing
black and talking Klingon.
I was handed my homework and more
knowledge reviews. The structure of the
manual and accompanying DVD was easy to
follow, once you mastered the acronyms and
Klingon language (sorry technical diver speak).
I arrived with my husband at the dive centre
to find lots of black and shiny things spread all
over the table. There were three 6ft guys (one
being my husband) all wearing dive T-shirts
with logos, “dive deeper” as well as other dive
testosterone logos and little me.
I was wearing pink and had my nails done
especially for the course. And you guessed it:
PINK! Who says teckies have to wear black!
The guys looked on and I could see them
rolling their eyes! Well, as the day went on,
we got more and more into the technical side
of the rebreathers. By the way, they weren’t
the big yellow boxes, but the very small and
light Poseidon MKVI units. Still, mainly BLACK.
Come on people, women dive too and keep
the little black number for going out!
The theory lesson went on and I had
downloaded the manual on, yes, my pink iPad,
and also my husband’s black armour-plated
teckie iPad! Jason Sockett was great. We went
through things very clearly and after a short
while I too could understand and speak Klingon.
Yes, I understood rebreather teckie talk. BOV,
CCR eCCR, Bailout Gas and much more.
We began to assemble the units: lots of
wires, hoses, cylinders and more bits than an
IKEA flatpack. I followed Jason’s instructions,
constantly referring to the downloaded manual.
So long as you can read the instructions from
an IKEA flatpack you can follow instructions on
how to put together a rebreather. A sip of black
coffee from a pink flask and I was finished.
The unit was surprisingly light and compact.
In fact I think it fits the smaller person better
than the 6’+ guys. Off to the pool to start our
training. Lara Croft had nothing on me: I had
a bailout cylinder and a rebreather, and I could see
fear in the eyes of the guys. OK, into the pool
for some skills training. Remember: “if in doubt,
bail out.”
Then into the deep part of the pool for some
buoyancy work. Jason demonstrated a perfect
hover…reaching up to the surface of the pool
and not breaking the surface with his finger…
very cool. OK, my turn, this should be easy…
NOT! It was like being an Open Water student
all over again. I have been diving and a PADI
professional for…well…let’s just say a few
years. On a rebreather you can hover and
breathe normally. Very strange at first.
Watching the guys was so funny and because
there are no bubbles you can hear everyone
laughing and talking to each other. Yes, you can
speak to each other.
After mastering the pool skills we went into
the ocean. The guys all wearing, yes you guessed
it, black and me in pink with highlights of black.
The units were great in the ocean: light, easy
to use and best of all no bubbles. I was amazed
at how close the batfish on the Cement Barge
came up to us. It was as if they didn’t know we
were there or accepted us as marine life: No
bubbles to scare them off.
As a keen underwater photographer I can
certainly see another big advantage of
rebreathers other than the normal techie
concept of “deeper for longer”. The absence of
bubbles is definitely an advantage for photography.
All in all I sincerely loved the course. I hadn’t
let on yet, as I wanted my husband to pay for it.
But I could only pretend for a short while.
Due to “Man Flu” I passed my rebreather and
advanced rebreather before him!
So ladies, don’t be put off by the macho image
of teck divers and thinking everyone wears
black…some wear PINK.
These days we are bombarded with “Refuse-
Reduce-Reuse-Recycle” advertisements, but how
much attention are people actually paying?
Where does it all come from?
Plastic is everywhere. It has become an
indispensable part of our modern consumer
society. More plastic has been produced in
the last ten years alone than was created in
the whole of the 20th century. We currently
produce over 260 million tonnes a year
globally, and the industry is growing 5% every
year. 50% of all plastic produced is only ever
used once and then thrown away. However,
putting the plastic in your rubbish bin is not
the end of the story.
Currently, plastic constitutes 10% of all
waste we generate – America alone throws
away over 35 billion plastic water bottles
every year. Producing plastic bottles uses 17
million barrels of oil every year and releases
2.5 million tonnes of carbon dioxide into the
atmosphere. In addition to that, 462 million
gallons of oil are needed just to transport the
water from the bottling plant to the shops.
Just take a moment to look around and think
of how many everyday items are made from,
or have some part of them made of plastic
– everything from food packaging (40% of
all plastics produced are used merely for
packaging) to tables, chairs and electronics
casings. It is almost impossible to avoid and has
become so ingrained in today’s society that,
most of the time, you don’t even notice it. You’ll
probably shock yourself when you realise just
how much modern man relies on it. Many
plastics have had large beneficial effects for
our lives, but the one-use, “disposable” plastics
– such as drinks bottles and caps, plastic cups,
plastic cutlery, shopping bags etc. – are having
lethal effects on the environment. They may be
cheap and convenient for us, but they are also
buoyant and durable, a deadly combination
when in the oceans.
The cost of plastic
Our obsession with plastic doesn’t just have
negative environmental impacts, it could
be costing the earth to produce. Plastic
production is responsible for using 8% of
the world’s yearly oil production – to put
that into context, that’s roughly the same
amount as the whole of Africa uses! It takes
250ml of oil to produce a one-litre water
bottle – considering we throw away 50% of
the world’s plastic production every year, we are
essentially throwing away 4% of the world’s
oil production. This seems completely crazy
considering the vast evidence of how finite
these natural, non-renewable energy sources
are and the unlikelihood that we’ll ever find
any vast reserves of oil in the future.
In addition to this, plastic on beaches and
just off shore could be costing up to $1.27
billion annually, as it affects tourism, fishing and
shipping industries.
Plastic bags – convenience or curse?
Plastic bags are irrefutably the most widely used
plastic product – approximately 500 billion
plastic bags are used every year worldwide,
that’s nearly one million every minute. The
UAE alone uses 12 billion plastic bags a year
– or nearly 23,000 a minute. Approximately
0.2%-0.3% of plastic bags end up in the sea,
intact. That may not sound like much of a
percentage, but in actual fact it’s 1-1.5 billion
bags every year, 36 million of which originate
from the UAE. Sure, they are unquestionably
convenient when out shopping, but what effect
is this convenience having on our environment
as a whole?
A plastic bag has an average “working” life of
just 15 minutes. Think about what happens
to that plastic bag once you’ve brought
the shopping home. Many people use old
shopping bags to line household rubbish bins.
Reusing them like this is undoubtedly better
than buying specific bin bags just to throw the
shop carrier bags in, but then what happens to
the bag once it gets thrown out with the trash?
Companies constantly reassure us that their
plastic bags are biodegradable, that after 18
months in the environment they start to break
down. But just how true is this? The honest
answer is, this statement is a red herring. Plastic
bags are NOT biodegradable, in the true sense
of the word. No organism – be it microbe, plant
or animal – has ever evolved to feed on plastic.
Yes, plastic bags break down, but only into
smaller bits of plastic. This breakdown happens
through photodegradation, where prolonged
exposure to sunlight breaks down the polymer
chains that make up plastic into smaller pieces
of, well, plastic. Physical friction, such as that
which occurs on beaches, coastlines and
seashores, accelerates this process. However,
even the smallest molecule of plastic will not
be absorbed into the environment, by any
means. Throwing a plastic bag into a landfill
site will not cause it to break down – it can
last for hundreds, if not thousands of years in
this situation. If a plastic bag ends up in the sea,
it constantly breaks down into smaller pieces
until it forms a sort of soup with the water, but
the plastic never disappears, it remains intact
in some form in the environment, often ending
up ingested by marine animals where, once in
the gut, it causes severe harm.
Are bioplastics the answer?
Starch powder has been mixed with some
forms of plastic to allow them to degrade more
easily, but they still never completely break down. Certain
species of bacteria produce a completely
biodegradable polyester when under certain
conditions of physiological stress. Researchers
have managed to genetically engineer these
bacteria to produce “bioplastics” but the
process and resulting plastic is expensive.
Bioplastics account for 10-15% of plastics
currently produced, but they are not the
answer. Bioplastics rely on potential food crops
in their production and although they have
become synonymous with “degradable” and
“biodegradable” plastics they can take decades
to break down and when they do, they release
one of the worst greenhouse gases possible –
methane – significantly more damaging to the
atmosphere than carbon dioxide.
The only way of avoiding plastic ending up
in the environment is to store it, burn it or
recycle it at the end of its life – but all of
these produce other negative impacts on the
environment.
“The ocean is like a soup of plastic mostly
composed of fragments invisible to the human
eyes, killing life and affecting dangerously our
health.”
Pierre Fidenci, ESI President
Why does it matter?
Negative environmental impacts are not the
only problem with our current obsession with
plastic. The small, broken-down plastic particles
attract toxins, which then enter the marine
food chain, from which approximately 60%
of humans get the majority of their protein.
Photo by Greenhouse Carbon Neutral Fd
The chemical compounds, known as Persistent
Organic Pollutants (POPs), cause adverse
biological effects in many species, including
humans, and they are currently being found on
marine plastics at concentrations orders of
magnitude higher than in the surrounding water. The same
POPs found on the debris have been linked
to cancer, diabetes and low sperm count as
well as genetic defects, low birth weight and
developmental problems in children. It is
not only plants and animals that suffer from
plastic ingestion – we may be slowly poisoning
ourselves.
In addition to attracting toxins, degrading
plastic actually releases the chemical additives
that were mixed into it during production.
These chemicals are retained within the
digestive systems of the organisms that eat
them, transferring into the systems of the
larger organisms that in turn eat them, and
then on to humans.
Many people either stick their heads in the sand
when it comes to environmental issues, simply
don’t care because the issues don’t appear to
impact them or their everyday lives, or find it
very difficult to change the habits of a lifetime,
especially when it comes to something as
seemingly essential and necessary as plastic.
Yet, although plastic appears to benefit the
human race greatly, it is having an extremely
negative effect on many of the other species we
share the planet with, which will consequently
impact us in the future. Plastics have three
major impacts on marine ecosystems:
1. Entanglement
• Laist (1997) recorded over 250 different
species as having become entangled in or
having ingested plastic.
• Allsopp et al. found that up to 7.9% of
some species of seals and sea lions become
entangled.
• In my first week in the UAE, I pulled a
discarded nylon fishing net from the shore
of Al Aqah, only to find it contained 9
swimming crabs and 4 conchs. Just today,
while shore-diving in Al Aqah, I picked up
plastic fishing line which had entangled two
large hermit crabs, and I had to bring them
back to shore to cut them free before
releasing them back to the water.
2. Ingestion
• Plastic artefacts have been found in the
stomachs of over 100 different species of
sea birds.
• Around 95% of albatross carcasses washed
ashore had an average of 40 pieces of
plastic in each of their stomachs, which
affects them mechanically and chemically.
• 31 species of marine mammals are known
to have ingested plastic (Allsopp et al.).
3. Transport of invasive species
• It has been shown that there is a
correspondence between an increase in
plastic litter and an increase in invasive
species (Allsopp et al.).
• Man-made litter has significantly increased
the transport opportunities for alien
species.
• The hard surfaces of plastic debris are an
attractive alternative substrate for many
organisms. While this may seem like an
opportunity for conservation, the problem
with plastic is that it doesn’t stay still. Plastic
can float all over the world until it eventually
gets caught up in a particular current and
lands in one of the seven gyres, taking
everything on board with it. Non-endemic
species can have a catastrophic effect on
the indigenous species and biodiversity
where they land.
Some people against the anti-plastic movement
are trying to claim there is no evidence of vast
numbers of sea creatures being killed by the
plastic discarded by modern consumer society.
They say that environmentalists constantly use
the same five photos of animals suffering from
discarded plastic (a turtle swallowing a bag, an
otter and seabirds caught in bags, the stomach
of a whale containing 20 separate bits of
plastic) to promote the cause. However, the
amount of plastic currently floating around
the oceans is undeniably immense. When
the stomach contents of deceased animals
washed ashore have been analysed, all manner
of human created rubbish has been found
– everything from street signs to tampon
casings. As with a lot of other animal deaths,
it can be hard to determine the actual cause
of the animal’s ultimate demise, but whether
the trash has anything to do with the death
or not doesn’t really matter – it shouldn’t be
there in the first place, and it certainly would
not have promoted healthy biology in the
animal.
Is it a JELLYFISH? Is it a squid? No, it’s
a plastic bag…
It may be surprising to some, but the eyesight
of a turtle is far superior to that of humans.
They see in full colour, although they are
obviously designed to see well underwater,
and so when above water they are very
short-sighted. However, even to the most
intelligent creature, a floating white plastic bag
can easily resemble a jellyfish drifting along in
the current.
Photo by Chris Jordan
The UAE is very privileged to have 4 of
the 7 species of sea turtles resident on our
shores. However, if plastic pollution continues
at current trends, this situation may not last.
All seven species of sea turtles currently carry
“endangered” status on the IUCN’s Red List.
Plastic pollution only makes their situations
more urgent.
In 2009, marine biologists from Disney’s
Animal Programs discovered a green sea
turtle off Melbourne Beach, Florida, who was
seemingly having difficulty digesting food. Upon
investigation, the biologists found a piece of
plastic was lodged in the gastrointestinal tract
of the turtle. Once the plastic was removed,
the turtle proceeded to defecate 74 foreign
objects in the following month including latex
balloons, various types of string, nine different
types of soft plastic, four types of hard plastic,
a piece of carpet-like material and, horrifyingly,
two tar balls. For one turtle to have ingested
so many foreign items, all due to human
disposal, is more than worrying and is a big
eye-opener into what we are doing to our
planet with our current “disposable” lifestyle.
In November 2008, “Whitey”, a 10-foot-long
crocodile tagged as part of an Australian
government wildlife-tracking program, was
found dead. Examination showed it had
consumed 25 plastic shopping bags and
garbage bags.
Current conservation estimates suggest that
plastic kills over 100,000 marine animals
and 1,000,000 birds every year. The number
of fish killed is hard to estimate, but it could
be millions. It doesn’t take a huge stretch of
imagination to consider one plastic bag being
capable of killing more than one animal in its
lifetime, given that plastics survive in the
environment for so long.
Why is there so much plastic in
the oceans?
In 2010, Cinque Terre, Italy, actually banned
plastic bottles from the region, as it was
estimated that 2 million were left behind on the
region’s beaches every year. Beach clean-ups
are no doubt of benefit to the immediate
environment. Too many people are in the
habit of standing up and walking away from
the beach, leaving their litter behind. For some,
it seems out of sight really is out of mind.
Many plastic bags, bottles and cigarette ends
are removed from beaches on clean-ups, but
is it merely delaying the inevitable? 80% of
the rubbish collected in the oceans originates
from rubbish intended for landfill, which has
blown from either bins or the landfill itself and
found its way to streams or rivers, all of which
ultimately connect with the sea.
Plastic debris has been found in all the world’s
oceans; it is everywhere, whether humans
inhabit nearby areas or not. 46% of plastics
float, so if they end up in the sea they can
travel around for years. They get swept along by
currents, sometimes travelling thousands of
miles until they end up in one of the large
ocean gyres, where vast amounts of plastics
are congregating. The North Pacific is the
most infamous example of this, where a large
area of the ocean surface (some estimate it to
be twice the size of France) now has a high
concentration of plastic – most of which is
now particulate plastic, having been broken
down from waves, wind and UV rays. For every
six pounds of plastic floating at the surface of
the gyre there is just one pound of plankton.
One example of the power of the currents
and the durability of plastic occurred in 1989,
when 29,000 plastic toys were lost at sea in
the Pacific Ocean. 15 years and 17,000 miles
later, they started washing ashore on the
coasts of Great Britain.
What can we do about it?
Even if the plastic did make it to the landfill,
it wouldn’t solve the environmental problems;
it would merely contain them in one place.
Quite frankly, the only way to stop plastic
ending up in the sea is to stop using it. Take
reusable bags shopping; refuse the plastic bags
at the till. Take a proper coffee mug to work
and use it; don’t keep picking up a new plastic
one every time you go for a coffee or for
some water. If you know of a recycling system
nearby, use it; don’t just throw an item into
the rubbish bin because it is nearer to you.
Instead of wrapping leftovers in cling film or
taking sandwiches for lunch in bags, use airtight
boxes; they keep food fresh and can be reused
for years. The human race, as a whole,
desperately needs to reduce the amount of
plastic it produces and discards.
A lot of damage has already been done. There
are vast amounts of plastic floating around our
oceans already, amounts too immense to even
consider removing. The oceans are not just like
a lake that has suffered litter from picnics
– you cannot simply wade out into them with a
net and scoop up all the rubbish.
According to the Middle East Waste Summit,
in 2009 the UAE was responsible for
generating 22% of the total waste produced
by Gulf countries. In 2010, 4.8 million tonnes
of rubbish went into landfill sites around the
UAE. Initiatives to tackle the rubbish problem
are being instigated. Starting next year,
non-biodegradable plastic bags will be banned
from the whole of the UAE. Similar programs
in other countries have resulted in immense
drops in the number of new plastic bags used.
In 2002, Ireland placed a steep tax on the
purchase of plastic bags and as a result,
plastic bag use
dropped by 90%, with the money generated
from the tax being used to fund recycling
programs. In 2003, Taiwan started charging for
plastic bags in markets and for disposable
plastic cutlery in restaurants in a bid to reduce
the amount used. 90% of Australia’s retailers have
joined the voluntary ban on plastic bags and
their consumption has dramatically fallen.
On announcing a ban on thin, one-use only
plastic bags, a spokesperson for China’s State
Council remarked: “Our country consumes a
huge amount of plastic shopping bags each
year. While plastic shopping bags provide
convenience to consumers, this has caused a
serious waste of energy and resources and
environmental pollution because of excessive
usage, inadequate recycling and other reasons.”
Everybody should grasp the soon-to-be-
implemented ban as an opportunity and invest
in completely re-usable woven shopping bags,
readily available from supermarkets across the
nation. Put some of the plastic bags that you
currently have at home in the car for
emergencies. This will go some way toward
vastly reducing the amount of plastic departing
from the shores of the UAE. Imagine the
difference the disappearance of 36,000,000
plastic bags from the ocean could make to our
marine wildlife, never mind the aesthetic value
of removing 12 billion plastic bags from the
UAE countryside. You might just prevent the
entanglement of a shark or the death of a turtle.
46 DIVERS FOR THE ENVIRONMENT, JUNE 2012
JUNE 2012, DIVERS FOR THE ENVIRONMENT 47
Chain Feeding Reef Manta Rays
FEATURES
Introducing the Manta Trust
photography GUY STEVENS
Two piggy-back feeding Chevron Reef Manta Rays
Reef Manta Ray feeding; barrel rolling
Oceanic Manta Ray at a fish market, Sri Lanka
Almost every diver is familiar with
the manta ray, and if they haven’t
seen one of these magnificent
creatures yet, it’s certainly on their
to-do list. The beauty, grace and
curiosity of mantas make them one
of the most engaging animals to dive
with, and their harmless demeanor
often invites a close, personal
interaction that is unlikely with other
large animals.
Mass feeding Mantas at Hanifaru Bay, Maldives
Unfortunately, like much of what
we love in the oceans, mantas are
in trouble. Over the last decade,
manta and mobula ray gill rakers
– the cartilaginous structures that
allow them to strain plankton from
the water – have been increasing in
popularity as a ‘traditional’ Chinese
remedy. While the use of dried
manta gill rakers to treat a number
of illnesses is not, in fact, part of the
Traditional Chinese Medicine (TCM) literature,
traditional practitioners have nevertheless
been using gill rakers more and more in recent
years, much to the detriment of manta and
mobula populations around the world.
Targeted fisheries have cropped up in
developing countries around the world,
with fishing hotspots in Sri Lanka and
Indonesia. Historically, manta fisheries have
led to collapses of small, vulnerable manta
populations in countries such as Mexico, and
due to their low reproductive rates and small
population sizes, manta rays now face a very
real threat of global population crashes.
To address the growing fisheries pressures on
mantas and mobulas around the world, while
educating local communities and providing
sustainable alternatives to exploiting manta
and mobula populations, a group of scientists,
conservationists, filmmakers and photographers
has formed the Manta Trust. With the goal of
protecting manta rays, their close relatives, and
the immensely productive ecosystems which
these animals inhabit, the Manta Trust, now a UK
registered charity, is conducting crucial research
on the basic life history of mantas, such as
identifying migratory routes, feeding strategies,
and critical habitats such as breeding and
nursery grounds. Using this new information,
we’re working with local collaborators,
international conservation organisations and
governments to enact critical legislation to
protect mantas, mobulas, and diverse marine
habitats, while encouraging economical and
sustainable alternatives to manta fisheries, such
as responsible dive ecotourism.
Be sure to check back in each issue of
Divers For The Environment for updates
on the Manta Trust’s work, important new
discoveries in manta and mobula ecology,
and global conservation efforts for mantas
and their relatives. In the meantime, be sure
to visit our website or
facebook.com/MantaTrust to learn more
about mantas and find out how you can help
protect them, and feel free to contact us on
info@mantatrust.org if you’d like any further
information on our work.
UW PHOTOGRAPHY
DIGITAL ONLINE 2012 RESULTS
THE UAE’S ONLY UNDERWATER PHOTOGRAPHY AND FILM COMPETITION
3 rd Place WIDE ANGLE © Alastair McGregor
We would like to congratulate all of the winners of Digital Online
2012, thank all 49 participants for taking part, and especially thank
all the sponsors who provided this year’s competition prizes.
We would also like to give a big thank you to our printing sponsor,
Print Works Mediatech, who printed all 233 images for the exhibition.
The winners and prizes are as follows:
PROFESSIONAL
3 rd Place Fish: SIJMON DE WAAL
Atlantis Dive Centre – Rebreather Course
3 rd Place Macro: PETER MAINKA
Atlantis Dive Centre – Rebreather Course
3 rd Place Wide Angle: ALASTAIR MCGREGOR
NOMAD Ocean Adventures – 2 Day/2 Night Diving Package
AMATEUR
3 rd Place Fish: JOHN HAGER
The Underwater Photographer by Martin Edge
3 rd Place Macro: JONATHAN CLAYTON
The Underwater Photographer by Martin Edge
3 rd Place Wide Angle: KARIM SAAD
The Underwater Photographer by Martin Edge
3 rd Place Fish © Sijmon de Waal
3 rd Place Fish © John Hager
3 rd Place MACRO © Peter Mainka
3 rd Place MACRO © Jonathan Clayton
UW PHOTOGRAPHY
3 rd Place WIDE ANGLE © Karim Saad
2 nd Place MACRO © Alastair McGregor
2 nd Place Fish © Dominique Zawisza
AMATEUR
2 nd Place Fish: DOMINIQUE ZAWISZA
Al Boom Diving – Dubai Mall Aquarium Shark Dive
2 nd Place Macro: RICHARD BAJOL
Al Boom Diving – Dubai Mall Aquarium Shark Dive
2 nd Place Wide Angle: COLLIN WU
Al Boom Diving – Dubai Mall Aquarium Shark Dive
VIDEO
2 nd Place MARINE LIFE: AWNI HAFEDH
Sheesa Beach Dive Center – Camp and Dive Package
2 nd Place WRECK: JOHN HAGER
Al Mahara Diving Center LLC – Diving Day Trip
VIDEO
3 rd Place MARINE LIFE: KARIM SAAD
The Underwater Photographer by Martin Edge

PROFESSIONAL
2 nd Place Fish: ALASTAIR MCGREGOR
Discover Orient Holidays – Destination Package 4 Days/3 Nights
(Marsa Alam, Egypt)
2 nd Place Macro: ALASTAIR MCGREGOR
Discover Orient Holidays – Destination Package 4 Days/3 Nights
(Terengganu, Malaysia)
2 nd Place Wide Angle: SIMONE CAPRODOSSI
Discover Orient Holidays – Destination Package 4 Days/3 Nights
(Aqaba, Jordan)
2 nd Place Fish © Alastair McGregor
2 nd Place WIDE ANGLE © Simone Caprodossi
2 nd Place MACRO © Richard Bajol
PROFESSIONAL
1 st Place Fish: WARREN BAVERSTOCK
DIEVAS – Watch
1 st Place Macro: WARREN BAVERSTOCK
DIEVAS – Watch
1 st Place Wide Angle: WARREN BAVERSTOCK
DIEVAS – Watch
AMATEUR
1 st Place Fish: KELLY TYMBURSKI
Tourism Malaysia – Sipadan Destination Package 5 Days/4 Nights
1 st Place Macro: DOMINIQUE ZAWISZA
Tourism Malaysia – Sipadan Destination Package 5 Days/4 Nights
1st Place Wide Angle: HOLLIE BURROUGHS
Delma Marine – Scuba Pro Regulator Set
VIDEO
1st Place MARINE LIFE: KHALED SULTANI
Al Boom Diving – East Coast Day Trip
1st Place WRECK: KHALED SULTAN
Al Boom Diving – East Coast Day Trip I
2 nd Place WIDE ANGLE © Collin Wu 1 st Place Fish © Kelly Tymburski
1 st Place Fish © Warren Baverstock –
1 st Place MACRO © Warren Baverstock – 1 st Place WIDE ANGLE © Warren Baverstock –
1 st Place MACRO © Dominique Zawisza 1 st Place WIDE ANGLE © Hollie Burroughs
OVERALL DIGITAL ONLINE WINNERS
PROFESSIONAL: WARREN BAVERSTOCK
Biosphere Expeditions – 1 week in the Maldives
AMATEUR: JONATHAN CLAYTON
Biosphere Expeditions – 1 week in the Musandam
VIDEO: KHALED SULTANI
Freestyle Divers – Speciality Course
Digital Online 2012 allowed photographers to submit 2 images per
category. A participant could win multiple times in each category but
only the top prize in each category was awarded. The next prize was
passed on to the next winner down the line and so forth.
Points were not awarded to images that did not follow category
regulations. For example, in Fish you could not submit mammals,
crustaceans, molluscs etc., and in Macro it was not permitted to submit
photos with a fish as the main element.
EDA will be introducing new photography categories and challenges
for both the photography section and video section of Digital Online
2013. They will be announced on January 1 st 2013 and will be open for
submission until midnight on April 30 th , 2013.
NOTE: Not all images have been made available in this issue. We will release
all the remaining photography that was submitted to Digital Online 2012 in
the following September issue of “Divers for the Environment”.
All images and videos can be viewed on the EDA website and on the
EDA Facebook page.
UW PHOTOGRAPHY
DIGITAL ONLINE 2012 SPONSORS:
Sheesa Beach Dive Center
PROFESSIONAL FISH CATEGORY
NAME PHOTO TOTAL
1 Warren Baverstock 16.1 418
PROFESSIONAL MACRO CATEGORY
NAME PHOTO TOTAL
1 Warren Baverstock 16.1 461
PROFESSIONAL WIDE ANGLE CATEGORY
NAME PHOTO TOTAL
1 Warren Baverstock 15.2 413
PROFESSIONAL OVERALL PLACE TOTAL
Warren Baverstock 1 2503
2 Alastair McGregor 5.1 410
3 Sijmon de Waal 13.2 383
4 Anna Bilyk 7.1 378
2 Warren Baverstock 16.2 438
3 Alastair McGregor 5.2 411
4 Alastair McGregor 5.1 402
2 Warren Baverstock 15.1 411
3 Simone Caprodossi 13.1 395
4 Alastair McGregor 4.1 351
Alastair McGregor 2 2112
Simone Caprodossi 3 2026
5 David Robinson 8.2 378
6 Iyad Suleyman 10.1 372
7 Iyad Suleyman 10.2 370
5 Peter Mainka 11.1 391
6 Stewart Clarke 15.2 377
7 Abdulla A Shuhail 1.1 353
5 David Robinson 7.1 349
6 David Robinson 7.2 349
7 Anna Bilyk 6.2 341
Iyad Suleyman 4 1996
Anna Bilyk 5 1900
8 Anna Bilyk 7.2 362
9 Warren Baverstock 16.2 362
10 David Thiesset 9.2 355
8 Iyad Suleyman 10.1 341
9 Simone Caprodossi 14.1 330
10 Ahmed A Shuhail 2.1 326
8 Simone Caprodossi 13.2 341
9 Sijmon de Waal 12.2 321
10 Stewart Clarke 14.2 321
David Robinson 6 1879
Stewart Clarke 7 1833
11 Stewart Clarke 15.1 355
12 Abdulla A Shuhail 1.2 351
13 Ahmed A Shuhail 2.1 347
11 Iyad Suleyman 10.2 315
12 Stewart Clarke 15.1 312
13 Ahmed A Shuhail 2.2 311
11 Sijmon de Waal 12.1 317
12 Philippe Lecomte 11.2 313
13 David Thiesset 8.1 308
Ahmed El Agouza 8 1722
Peter Mainka 9 1612
14 Alexander Nikolaev 6.2 345
15 Simone Caprodossi 14.2 340
16 Ahmed El Agouza 4.1 334
14 Peter Mainka 11.2 308
15 Simone Caprodossi 14.2 301
16 Alexander Nikolaev 6.1 299
14 Philippe Lecomte 11.1 307
15 Iyad Suleyman 9.2 304
16 Iyad Suleyman 9.1 294
Ahmed A Shuhail 10 1537
Sijmon de Waal 11 1519
17 Ahmed El Agouza 4.2 322
18 Simone Caprodossi 14.1 319
19 Alexander Nikolaev 6.1 306
20 Philippe Lecomte 12.2 303
21 Ahmed A Shuhail 2.2 291
22 Ahmed Abdulla Yousif Al Ali 3.2 286
23 Ahmed Abdulla Yousif Al Ali 3.1 285
24 Philippe Lecomte 12.1 265
17 David Robinson 8.1 293
18 Anna Bilyk 7.2 284
19 David Robinson 8.2 282
20 Anna Bilyk 7.1 279
21 Ahmed Abdulla Yousif Al Ali 3.2 275
22 Ahmed El Agouza 4.2 274
23 Ahmed El Agouza 4.1 264
24 Sijmon de Waal 13.2 235
17 David Thiesset 8.2 286
18 Alastair McGregor 4.2 279
19 Ahmed El Agouza 3.2 269
20 Ahmed El Agouza 3.1 259
21 Anna Bilyk 6.1 256
22 Peter Mainka 10.2 247
23 Stewart Clarke 14.1 229
24 Alexander Nikolaev 5 227
Ahmed Abdulla Yousif Al Ali 12 1474
David Thiesset 13 1426
Philippe Lecomte 14 1188
Alexander Nikolaev 15 1177
Abdulla A Shuhail 16 924
25 Sijmon de Waal 13.1 263
26 Alastair McGregor 5.2 259
25 Ahmed Abdulla Yousif Al Ali 3.1 231
26 David Thiesset 9.2 229
25 Ahmed Abdulla Yousif Al Ali 2.2 224
26 Peter Mainka 10.1 211
AMATEUR OVERALL PLACE TOTAL
27 David Thiesset 9.1 248
28 Peter Mainka 11.2 241
29 Stewart Clarke 15.2 239
27 Abdulla A Shuhail 1.2 0
28 Alexander Nikolaev 6.2 0
29 David Thiesset 9.1 0
27 Ahmed Abdulla Yousif Al Ali 2.1 173
28 Ahmed A Shuhail 1.1 134
29 Ahmed A Shuhail 1.2 128
Jonathan Clayton 1 1858
Dominique Zawisza 2 1774
30 David Robinson 8.1 228
31 Abdulla A Shuhail 1.1 220
32 Peter Mainka 11.1 214
30 Philippe Lecomte 12.1 0
31 Philippe Lecomte 12.2 0
32 Sijmon de Waal 13.1 0
AMATEUR WIDE ANGLE CATEGORY
NAME PHOTO TOTAL
1 Hollie Burroughs 11.2 341
Collin Wu 3 1727
Hollie Burroughs 4 1713
AMATEUR FISH CATEGORY
NAME PHOTO TOTAL
1 Kelly Tymburski 19.2 388
AMATEUR MACRO CATEGORY
NAME PHOTO TOTAL
1 Dominique Zawisza 6.2 378
2 Collin Wu 6.1 333
3 Collin Wu 6.2 312
4 Karim Saad 16 306
Rima Jabado 5 1693
Kelly Tymburski 6 1656
2 Dominique Zawisza 8.1 366
3 Kelly Tymburski 19.1 363
4 John Hager 15.1 353
2 Richard Bajol 22.2 371
3 Jonathan Clayton 14.2 368
4 Awni Hafedh 2.2 343
5 Jonathan Clayton 14.2 303
6 Jonathan Clayton 14.1 296
7 Hollie Burroughs 11.1 291
Claire Barker 7 1645
Richard Bajol 8 1610
5 Claire Barker 6.2 352
6 Rima Jabado 24.1 347
7 Shadi J.S. Alzaeem 25.1 347
5 Erika Rasmussen 7.2 338
6 Simon Long 25.1 328
7 Erika Rasmussen 7.1 326
8 Yousif Jasem Al Ali 21.1 285
9 Richard Bajol 19.2 268
10 Josofina Ng 15.1 261
Yousif Jasem Al Ali 9 1597
Awni Hafedh 10 1589
8 Rima Jabado 24.2 346
9 Jeffrey Catanjal 14.2 342
10 Dominique Zawisza 8.2 338
8 Nicola Bush 19.2 319
9 Awni Hafedh 2.1 318
10 Jonathan Clayton 14.1 318
11 Jeffrey Catanjal 12.1 257
12 Claire Barker 5.1 242
13 Kelly Tymburski 17.2 240
Josofina Ng 11 1481
John Hager 12 1370
11 Collin Wu 7.1 332
12 Josofina Ng 17.1 310
13 Jonathan Clayton 16.2 310
11 John Hager 13.2 316
12 Dominique Zawisza 6.1 315
13 Claire Barker 4.2 315
14 Ahmed Abd Elsalam Elsayed 2.1 239
15 John Hager 13.1 235
16 Claire Barker 5.2 230
Jeffrey Catanjal 13 1316
Erika Rasmussen 14 1301
14 Yousif Jasem Al Ali 28.1 305
15 Erika Rasmussen 9 296
16 Awni Hafedh 3.1 289
14 Rima Jabado 23.1 310
15 Rima Jabado 23.2 294
16 Nicola Bush 19.1 293
17 Dominique Zawisza 7.1 225
18 Christopher Gawronski 4.1 223
19 Jeffrey Catanjal 12.2 223
Christopher Gawronski 15 1202
Shadi J.S. Alzaeem 16 974
17 Hollie Burroughs 12.1 285
18 Awni Hafedh 3.2 283
19 Collin Wu 7.2 282
17 Yousif Jasem Al Ali 27.2 292
18 Yousif Jasem Al Ali 27.1 289
19 Hollie Burroughs 10.1 282
20 Christopher Gawronski 4.2 220
21 Gisela S. Vargas 10 216
22 Ghazi Gashut 9.2 208
Louis Girard 17 861
Redentor Vargas 18 829
20 Richard Bajol 23.2 281
21 Claire Barker 6.1 266
22 Hollie Burroughs 12.2 266
20 Richard Bajol 22.1 282
21 Karim Saad 16 279
22 Collin Wu 5.2 278
23 Ahmed Abd Elsalam Elsayed 2.2 207
24 Erika Rasmussen 8.1 205
25 Rima Jabado 20.1 202
Ghazi Gashut 19 795
Karim Saad 20 770
23 John Hager 15.2 265
24 Jonathan Clayton 16.1 263
25 Simon Long 26.2 258
26 Redentor Vargas 22.1 253
27 Terry Garske 27 251
28 Andrew Roughton 2.1 244
29 Jeffrey Catanjal 14.1 233
30 Gisela S. Vargas 11 230
31 Louis Girard 20.1 230
32 Ghazi Gashut 10.1 227
33 Richard Bajol 23.1 225
34 Yousif Jasem Al Ali 28.2 224
35 Louis Girard 20.2 222
36 Ismail Mohammed El Fakhry 13 216
23 Jérôme Devie 12.1 272
24 Josofina Ng 15.2 269
25 Josofina Ng 15.1 266
26 Kelly Tymburski 17.2 261
27 Jérôme Devie 12.2 260
28 Gisela S. Vargas 9 258
29 Shadi J.S. Alzaeem 24.1 258
30 Terry Garske 26 253
31 Hollie Burroughs 10.2 248
32 Claire Barker 4.1 240
33 Kelly Tymburski 17.1 224
34 Louis Girard 18.2 219
35 Redentor Vargas 21.1 217
36 Christopher Gawronski 3.1 201
26 Yousif Jasem Al Ali 21.2 202
27 John Hager 13.2 201
28 Rima Jabado 20.2 194
29 Awni Hafedh 3.1 193
30 Ghazi Gashut 9.1 186
31 Richard Bajol 19.1 183
32 Kelly Tymburski 17.1 180
33 Josofina Ng 15.2 171
34 Rania Mostafa 18.2 164
35 Awni Hafedh 3.2 163
36 Abdulazeez A. Alkarji 1.2 162
37 Rania Mostafa 18.1 159
38 Dominique Zawisza 7.2 152
39 Erika Rasmussen 8.2 136
Rania Mostafa 21 758
Gisela S. Vargas 22 704
Abdulazeez A. Alkarji 23 702
Nicola Bush 24 612
Simon Long 25 586
Jérôme Devie 26 532
Terry Garske 27 504
Ahmed Abd Elsalam Elsayed 28 446
Andrew Roughton 29 244
37 Redentor Vargas 22.2 214
38 Christopher Gawronski 5.1 209
39 Josofina Ng 17.2 204
37 Louis Girard 18.1 190
38 Shadi J.S. Alzaeem 24.2 176
39 Ghazi Gashut 8.1 174
40 Abdulazeez A. Alkarji 1.1 120
MARINE LIFE VIDEO CATEGORY
Ismail Mohammed El Fakhry 30 216
Beverly Humphreys 31 0
40 Shadi J.S. Alzaeem 25.2 193
41 Karim Saad 18 185
40 Christopher Gawronski 3.2 173
41 Rania Mostafa 20 140
NAME VIDEO TOTAL
1 Khaled Sultani 6 353
VIDEO OVERALL PLACE TOTAL
42 Abdulazeez A. Alkarji 1.2 185
43 Christopher Gawronski 5.2 176
44 Rania Mostafa 21.2 152
42 Abdulazeez A. Alkarji 1.2 92
43 Jeffrey Catanjal 11.2 0
44 Jeffrey Catanjal 11.1 0
2 Awni Hafedh 2 274
3 Karim Saad 5 223
4 Fazaluddin Jayanth 3 187
Khaled Sultani 1 759
John Hager 2 370
45 Abdulazeez A. Alkarji 1.1 143
46 Rania Mostafa 21.1 143
47 Beverly Humphreys 4 0
45 Collin Wu 5.1 0
46 Redentor Vargas 21.2 0
47 Abdulazeez A. Alkarji 1.1 0
5 John Hager 4 173
6 Ahmed El Agouza 1 168
WRECK VIDEO CATEGORY
Awni Hafedh 3 274
Karim Saad 4 223
48 Simon Long 26.1 0
49 Andrew Roughton 2.2 0
50 Ghazi Gashut 10.2 0
48 John Hager 13.1 0
49 Ghazi Gashut 8.2 0
50 Simon Long 25.2 0
NAME VIDEO TOTAL
1 Khaled Sultani 2 406
2 John Hager 1 197
Fazaluddin Jayanth 5 187
Ahmed El Agouza 6 168
4 th Place FISH
© Anna Bilyk
5 th Place FISH
© David Robinson
6 th Place FISH
© Iyad Suleyman
7 th Place FISH
© Iyad Suleyman
2 nd Place WA
© Warren Baverstock
5 th Place WA
© David Robinson
6 th Place WA
© David Robinson
7 th Place WA
© Anna Bilyk
8 th Place FISH
© Anna Bilyk
9 th Place FISH
© Warren Baverstock
10 th Place FISH
© David Thiesset
11 th Place FISH
© Stewart Clarke
8 th Place WA
© Simone Caprodossi
9 th Place WA
© Sijmon de Waal
10 th Place WA
© Stewart Clarke
11 th Place WA
© Sijmon de Waal
12 th Place FISH
© Abdulla Shuhail
13 th Place FISH
© Ahmed Shuhail
14 th Place FISH
© Alexander Nikolaev
15 th Place FISH
© Simone Caprodossi
12 th Place WA
© Philippe Lecomte
13 th Place WA
© David Thiesset
14 th Place WA
© Philippe Lecomte
15 th Place WA
© Iyad Suleyman
16 th Place FISH
© Ahmed El Agouza
17 th Place FISH
© Ahmed El Agouza
2 nd Place MACRO
© Warren Baverstock
4 th Place MACRO
© Alastair McGregor
16 th Place WA
© Iyad Suleyman
3 rd Place FISH
© Kelly Tymburski
5 th Place FISH
© Claire Barker
6 th Place FISH
© Rima Jabado
6 th Place MACRO
© Stewart Clarke
7 th Place MACRO
© Abdulla Shuhail
8 th Place MACRO
© Iyad Suleyman
9 th Place MACRO
© Simone Caprodossi
7 th Place FISH
© Shadi J.S. Alzaeem
8 th Place FISH
© Rima Jabado
9 th Place FISH
© Jeffrey Catanjal
10 th Place FISH
© Dominique Zawisza
11 th Place FISH
© Collin Wu
12 th Place FISH
© Josofina Ng
13 th Place FISH
© Jonathan Clayton
14 th Place FISH
© Yousif Jasem Al Ali
10 th Place MACRO
© Ahmed Shuhail
11 th Place MACRO
© Iyad Suleyman
12 th Place MACRO
© Stewart Clarke
13 th Place MACRO
© Ahmed Shuhail
14 th Place MACRO
© Peter Mainka
15 th Place MACRO
© Simone Caprodossi
16 th Place MACRO
© Alexander Nikolaev
17 th Place MACRO
© Simone Caprodossi
15 th Place FISH
© Erika Rasmussen
16 th Place FISH
© Awni Hafedh
17 th Place FISH
© Hollie Burroughs
18 th Place FISH
© Awni Hafedh
19 th Place FISH
© Collin Wu
20 th Place FISH
© Richard Bajol
21 st Place FISH
© Claire Barker
22 nd Place FISH
© Hollie Burroughs
25 th Place MACRO
© Josofina Ng
26 th Place MACRO
© Kelly Tymburski
2 nd Place WA
© Collin Wu
5 th Place WA
© Jonathan Clayton
23 rd Place FISH
© John Hager
24 th Place FISH
© Jonathan Clayton
25 th Place FISH
© Simon Long
26 th Place FISH
© Redentor Vargas
6 th Place WA
© Jonathan Clayton
7 th Place WA
© Hollie Burroughs
8 th Place WA
© Yousif Jasem Al Ali
9 th Place WA
© Richard Bajol
4 th Place MACRO
© Awni Hafedh
5 th Place MACRO
© Erika Rasmussen
6 th Place MACRO
© Simon Long
7 th Place MACRO
© Erika Rasmussen
8 th Place MACRO
© Nicola Bush
10 th Place WA
© Josofina Ng
11 th Place WA
© Jeffrey Catanjal
12 th Place WA
© Claire Barker
13 th Place WA
© Kelly Tymburski
9 th Place MACRO
© Awni Hafedh
10 th Place MACRO
© Jonathan Clayton
11 th Place MACRO
© John Hager
12 th Place MACRO
© Dominique Zawisza
14 th Place WA
© Ahmed Abd Elsalam Elsayed
15 th Place WA
© John Hager
16 th Place WA
© Claire Barker
17 th Place WA
© Dominique Zawisza
13 th Place MACRO
© Claire Barker
14 th Place MACRO
© Rima Jabado
15 th Place MACRO
© Rima Jabado
16 th Place MACRO
© Nicola Bush
18 th Place WA
© Christopher Gawronski
19 th Place WA
© Jeffrey Catanjal
20 th Place WA
© Christopher Gawronski
21 st Place WA
© Gisela S. Vargas
18 th Place FISH
© Simone Caprodossi
19 th Place FISH
© Alexander Nikolaev
20 th Place FISH
© Philippe Lecomte
21 st Place FISH
© Ahmed Shuhail
17 th Place MACRO
© Yousif Jasem Al Ali
18 th Place MACRO
© Yousif Jasem Al Ali
19 th Place MACRO
© Hollie Burroughs
20 th Place MACRO
© Richard Bajol
21 st Place MACRO
© Karim Saad
22 nd Place MACRO
© Collin Wu
23 rd Place MACRO
© Jérôme Devie
24 th Place MACRO
© Josofina Ng
22 nd Place FISH
© Ahmed Abdulla Yousif
Al Ali
23 rd Place FISH
© Ahmed Abdulla Yousif Al Ali
24 th Place FISH
© Philippe Lecomte
25 th Place FISH
© Sijmon de Waal
GALLERY OF LIGHT
The Awards and Exhibition evening was held at The Dubai Community Theatre and Arts Centre (DUCTAC) in the Gallery of Light at Mall of The
Emirates on Wednesday, 30 th May 2012 at 7pm. Photos by Roy Sison Alexis.
49 Participants in total, 16 professional (SLR), 31 amateur (Point & Shoot), 6 filmers, 233 photographs and 8 videos!
Ibrahim N. Al-Zu’bi
EDA Executive Director
Mr. Khalfan Khalfan Al Mohiari – EDA Financial Director and
Mr. Omar Al Huraiz – EDA Head of the Technical Committee
Jonathan Ali Khan
Managing Director of Wild Planet Productions
UW PHOTOGRAPHY
Macro and Super MACRO Photography
feature and photography ALASTAIR MCGREGOR
Pygmy Sea Horse 1:1 macro with 105mm lens F16, 1/200. | Pygmy Sea Horse 2:1 macro with 105mm and Macro Mate diopter f16, 1/200. | Pygmy Sea Horse 2:1 macro with 105mm and Macro Mate F25, 1/200.
In this article we are going to explore macro
photography and super macro photography
and what it means to the underwater
photographer. What do we need to have on
our cameras to do it and how? All the photos
in this article are taken with a Nikon D90 using
either a 60mm or 105mm micro Nikkor lens.
In most images I have used two Inon Z240
strobes. Note: super macro is not a type of
photography for people with a low patience
threshold as it can take a long time to line up
shots, oh and an understanding buddy.
Macro photography is close-up photography,
usually of very small subjects in which the size
of the subject in the photograph is greater
than life size. Traditionally a macro photograph
is one in which the size of the subject on
the image sensor is life size or greater (1:1).
The ratio of the subject size on the sensor
to the actual subject size is known as the
reproduction ratio. Likewise, a macro lens is
a lens capable of reproduction ratios greater
than 1:1, although it often refers to any lens
with a large reproduction ratio, despite rarely
exceeding 1:1. In underwater photography, we
tend to refer to macro having reproduction
ratios of 1:1 and super macro as 2:1 or greater.
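The reproduction-ratio arithmetic described above is simple enough to sketch in a few lines of code. This is an illustrative example only: the function names and sample subject sizes are made up, not from the article, and the thresholds follow the underwater convention quoted above (1:1 for macro, 2:1 or greater for super macro):

```python
def reproduction_ratio(image_size_mm: float, subject_size_mm: float) -> float:
    """Ratio of the subject's size on the sensor to its real-world size."""
    return image_size_mm / subject_size_mm

def classify(ratio: float) -> str:
    # Underwater convention from the article: 1:1 is macro, 2:1+ is super macro.
    if ratio >= 2.0:
        return "super macro"
    if ratio >= 1.0:
        return "macro"
    return "close-up"

# A 10 mm shrimp rendered 10 mm wide on the sensor -> 1:1, i.e. macro
print(classify(reproduction_ratio(10, 10)))   # macro
# The same shrimp rendered 22 mm wide on the sensor -> 2.2:1, super macro
print(classify(reproduction_ratio(22, 10)))   # super macro
```

The 2.2:1 case corresponds to the bobtail squid shot captioned later in this article, which sits at the extreme end of a 105mm-plus-diopter rig.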
Generally, we will use a 60mm lens for standard
macro photography. For underwater, using the
Nikon or Canon 60mm or the sigma 50mm
lens makes a good starting lens for portraits
or macro. The advantage of the 60mm is that
its minimum focus distance where it achieves
1:1 magnification is close to the lens. This is
very beneficial underwater as it cuts down
the amount of water between camera and
subject. Water absorbs light starting with the
red end of the spectrum hence the reason
many photos you see are very blue or very
green. Flash restores this colour but we have
to get closer and closer. 60mm DSLR lenses
achieve 1:1 macro in less than a hand span
from your subject and some point and shoots
are even closer. For P&S cameras you will have
to have your camera on the macro setting
and generally zoomed all the way out to get
macro and fin closer – zooming in puts more
water between you and the subject and does
not give you a true macro photograph. Good
buoyancy control is a must for any underwater
photography but even more so for macro and
super macro where even a slight movement
will cause your image to be out of focus or
cause damage to reef or the subject. The last
thing you want to do is destroy that which you
are taking a photo of!
Most good macro photographs are taken
when you are looking up at the subject, that
famous rule still applies, get close and shoot
up. This is the most important fundamental
rule; to try and be below your subject and
look up to it with the lens. This is not always
easy and some animals are just in the wrong
place.
You will see a lot of my photos on this page are
taken with the 105mm macro lens. This lens
has the same reproduction ratio as the 60mm
but due to its longer focal length it will put
more distance between you and your subject.
This is good if animals are nervous or skittish
around divers, but again bad as it puts water
between you and the target, so you will need
to increase your strobe duration (power)
and this can lead to backscatter requiring
an adjustment in strobe position. The 105 is
harder to use as it also has a narrow angle of
view meaning that it enables us to tightly frame
our subject in the image without resorting to
post image cropping, but it also means that we
can, with a small inhalation or rough use of the
shutter release, chop off bits of our subject by
causing the camera to move. But as we move
on to super macro, the longer focal length is
important and worth perseverance.
Macro underwater photos start with knowing
your subject. Read field guides and talk to
other divers and dive guides to find out what
is around and where to find a suitable subject
– Inchcape 2 springs to mind and obviously the
Musandam. You will need to know the animal’s
behavior and understand it a little more, this
will help you get that shot. And most of all
patience!
For super macro photography, the 105mm lens
is the weapon of choice due to its increased
focus distance. There are many ways to get
greater than life size magnification and some
I discuss below:
Extension tubes: extension tubes work
by moving the last element of the lens away
from the focal plane to increase magnification,
this has a large disadvantage in that your port
will have to be long to accommodate these
and you will lose light and probably the ability
to autofocus.
Teleconverters: these are small add on
lenses in a variety of strengths that mount on
your camera and then the lens mounts into
the teleconverter, multiplying the focal length
Fang Blenny 1:1 macro 105mm f25, 1/200. | Fang Blenny 2:1 macro 105mm f29, 1/200.
of the lens. For example, a 2x teleconverter
will turn your 60mm lens into a 120mm macro
lens. You also retain the full range of autofocus
on your lens. The downside is a dim view finder
as they absorb light – for 1.4x you will lose 1
stop of light, 1.7x you will lose 1½ stops and
2x you will lose 2 stops of light. All this makes
it harder to get enough light to your subject.
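The teleconverter trade-off above is easy to work out numerically. A small sketch (the helper function is hypothetical, assuming the usual approximation that light loss is about log2(factor²) stops, because the same light is spread over factor² the sensor area):

```python
import math

def teleconverter_effects(focal_length_mm: float, factor: float):
    """Effective focal length and approximate light loss (in stops)
    when mounting a teleconverter of the given magnification factor."""
    effective_mm = focal_length_mm * factor
    stops_lost = math.log2(factor ** 2)  # light spread over factor**2 the area
    return effective_mm, stops_lost

for tc in (1.4, 1.7, 2.0):
    focal, loss = teleconverter_effects(60, tc)
    print(f"{tc}x on a 60mm: {focal:.0f}mm macro, ~{loss:.1f} stops lost")
```

Run on a 60mm lens, this reproduces the figures in the text: a 2x converter gives a 120mm macro lens at a cost of 2 stops, a 1.4x costs about 1 stop and a 1.7x about 1½ stops.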
Diopters: Dry diopters are close up lenses
that screw on to the front of your lens inside
the housing and give you magnification
denoted by + number on the side. These
work by reducing the minimum focus distance
to get greater magnification. The disadvantage
is that they are on for the whole dive and the
camera will no longer be able to focus in the
distance. A better and more flexible way to
super macro is to use the wet diopter and
these are available for the DSLR shooter
and point and shoot cameras and work the
same way as the dry diopter. They come in
a variety of strengths +5, +8, +10 being the
most common. There are two types of these
add on lenses. The first looks like a magnifying
glass and screw or push onto the front of the
port. The magnifying element is in the water
and some sharpness is lost due to that fact.
Examples are some of the Inon diopters and
the Woody diopter. The other type has two
elements and the magnifying elements are
inward facing and in a sealed housing so that
the magnification is done in air which leads to
greater sharpness and retains the magnifying
power of the glass. Examples are the Sub See
(+5 and +10) and the Macro Mate (+8). All
the super macro photos in this article were
taken with the Macro Mate. Use of a longer lens
(100 or 105) is recommended here as the
diopter will halve
the minimum focus
distance for the lens.
With a 60mm you
are in real danger of
crushing your target.
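As a rough sketch of how a close-up diopter shortens focusing distance, the thin-lens approximation 1/d_new = 1/d_old + D can be used. The 0.30 m minimum focus distance below is a made-up figure, and real lenses deviate from this simple model, so treat the numbers as illustrative only:

```python
def close_focus_with_diopter(min_focus_m: float, diopter: float) -> float:
    """Approximate new closest focusing distance (metres) after adding
    a close-up lens of strength +diopter.

    Thin-lens approximation: 1/d_new = 1/d_old + D.
    """
    return 1.0 / (1.0 / min_focus_m + diopter)

# Hypothetical lens with a 0.30 m minimum focus distance:
for d in (5, 8, 10):
    print(f"+{d} diopter: closest focus ~{close_focus_with_diopter(0.30, d):.2f} m")
```

With these made-up numbers a +8 lens pulls the closest focus from 0.30 m to under 0.10 m, which illustrates why a longer 100/105mm lens is the safer choice: on a 60mm you really would be almost on top of the subject.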
When using any of
the above devices
or combinations,
special care has to
be given to aperture
and depth of field.
It is not uncommon
to use F-stops of 25
and even 32 to ensure that the image is sharp – F8 on point and shoot cameras. Shutter speeds need to be from 1/60 to 1/250 depending on the colour you want the background to be. Faster shutter speeds will give a darker background, although on occasion I will use an f-stop of 16 or so to give me a nice bokeh background. Bokeh (blurred) backgrounds often give a nice pastel colour to show off your macro subjects. The pygmy sea horse is an example of this. The other images of the fang blenny show a different approach, using higher f-stops and shutter speeds in order to create a dark/black background. The urchin shrimp and the whip coral shrimp show that even at F25 when using a close-up diopter, the depth of field is very limited and can lead to some important parts of the image (eyes) being out of focus. In some cases, the zone of sharpness is barely a pencil line in thickness. I have tried to choose images that show a 1:1 image and then a 2:1 image to show the difference.
The far east is without a doubt the place for
macro photography, but here in the UAE,
we are lucky and we have a large amount
of subjects for super macro photography.
Everything from shrimps, gobies, nudibranchs,
soft coral crabs, juveniles and anemone fish.
There are some very good websites and
facebook pages around for macro underwater
photography. One of them is the macro
underwater page on facebook where some
really good pictures are displayed.
You can view more of my photos on:
Bobtail Squid 2.2:1 The extreme end of my lenses capability. The Bobtail is approx 5mm long.
Urchin Shrimp 2:1 macro 105mm with Macro Mate diopter f25, 1/180. | Whip Coral Shrimp 2:1 macro 105mm with Macro Mate f22, 1/125 for a blue background.
DIVING DESTINATIONS
DIVING IN CYPRUS
A TASTE OF THE MEDITERRANEAN
FEATURE AND PHOTOGRAPHY ALLY LANDES UNDERWATER PHOTOGRAPHY SIMONE CAPRODOSSI
THE CAVES | One of the small caves at this dive site is the shape of the map of Cyprus!
An invitation to dive the Mediterranean doesn’t
come around very often and when the Larnaca
Tourism Board sent us a request to cover diving
in the region in our magazine, ‘Divers For The
Environment’, we were delighted to oblige!
Diving in the Mediterranean was a first for me,
so I was really looking forward to exploring the
Cypriot Med and seeing what it had to offer.
Our diving was organised through AAK Larnaca
Napa Sea Cruises – owners of the Zenobia
wreck which they purchased in 1997. The
Zenobia is rated one of the 10 best wreck dives
in the world, making this a must dive to add to
your list! It lies 1.5km off the coast of Larnaca,
its depths starting at 17 metres and ending at
43 metres, is 174 metres long lying on its port
side on a flat bed of sand and rocks and has 3
massive cargo holds to explore! The Zenobia is
huge and it has been described by some to take
up to 20 dives to fully explore all it has to offer.
Its fate came in May 1980 when it set sail for
Syria with a cargo of over 100 lorries, industrial
machinery and cars when it ran
into some difficulties and the ship’s computers
failed and caused a continuous flow of water
to be pumped into the side ballasts. Without a
chance to recover the Zenobia, it was towed
out to avoid any collisions inside the port.
Visibility is fantastic and can be as good as 50
metres. Water temperatures are between 16˚C
and 28˚C. Marine life is much as expected from
the overfished Mediterranean, but when you
dive a wreck of that scale and content, you are
not necessarily there for the marine life. The
Zenobia offers you its own unique adventure
that requires several dives to explore the
majority of its layout and it’s pretty spectacular.
The only down side to our trip was the
disappointing experience we had with
AAK Larnaca Napa Sea Cruises. There is
unfortunately no way around them as they
own the wreck and give the rights to dive it. We
recommend you find yourself a nice little dive
centre to dive with through this company that
will look after you and show you all the best bits
and the trip will be fantastic.
I had sent AAK an email a week prior to
our departure to prepare and learn about
the types of diving that we would do before
arriving there, but found communication to
be very slack. I managed to get an email back
from one person asking us for our equipment
requirements and sizes and when we arrived,
no one had been made aware that we had
made such a request. If you have your own
diving equipment, we recommend you take
from it your fins, mask and your regulator or
you may be disappointed as it was a mission to
get proper fitting equipment together as they
were very limited with what they had on board.
The AAK captain (and also the dive guide) on
our second day was very unpleasant when we
went to check in with him. We had expected
the same dive guide from the day before, but
learnt he was not there and our new man
was not expecting us and to top it off, had no
clue as to who we were. We were told to go
and wait for him while he made a few angry
calls and then came back a little later singing a
slightly different tune – mockingly. A slightly wry
start to our morning and not sure at this point
whether this was going to get any better. All
the diving equipment we had spent the prior
afternoon getting together, was nowhere to be
found and we had to start all over again.
Seeing our slight frustration while getting his
own divers ready for the wreck dive, we were
very fortunate to meet Simon Banks, owner of
the Windmills Diving School based in Protaras
(45 minutes to an hour’s drive from Larnaca).
Simon always comes down with students to
dive the Zenobia and was the friendliest face
on the boat along with his colleague, Doc. As
well as being incredibly social, Simon is also full
of facts and knowledge about diving in Cyprus
and within 10 minutes of chatting, we had
learnt a great deal! He highly recommended
that we get our dive guide to take us inside
the Zenobia, if not he was happy for us to tag
along with him (our guide did not recommend
us going in with the intent to take photos or
just couldn’t be bothered).
As we kitted up, we finally got to plan our
dives and explained where we had dived on
the Zenobia the day before and our reluctant
dive guide decided we would dive the same
place but slightly deeper! On seeing we would
only have 3 dives in total on the Zenobia to
photograph it, we asked if diving the same
area was really worthwhile photography-wise.
It turned out it wasn’t as Simone had already
accomplished the shots we had needed in this
part and we had to pin this guy’s arm back to
get him to take us inside the wreck to see all
the commotion we had read up on for our
third dive. He eventually agreed.
A small fishing port
We have to say, that entering the Zenobia was
an incredible experience and was a fantastic
opportunity for photography. It also allowed
us to see certain things we didn’t even know
were there as our guide did not take any
torches down or point anything of interest out
to us which you expect. We were led in blind.
Simone’s flash highlighted certain bits of colour
that we later discovered were all sorts of fun
things when we got back to our hotel to check
out the day’s shoot. We knew that we had
ended up in the restaurant at one point as the
red tartan carpet was highlighted! Our entry
point was through a fairly large opening and
buoyancy is important as you do pass through a
lot of corridors and change depths a few times.
You do need to make sure you have a dive
computer to monitor your dives. The dive on
the inside is every bit as good as it is described
and you see how it rates as one of the top 10!
One thing is for sure, we all agreed we would
love to go back and dive the Zenobia all over
again, but next time we would go through
another dive centre as it makes all the difference
to your diving experience and what you pay
for as our diving saga with AAK continued the
following morning.
The last morning’s dive was meant to have
been at the Pyla Caves so that we could
experience another type of diving. I made sure
to call the manager from AAK first thing in
the morning (we had still not met him at this
point into our trip) before heading down to
breakfast to confirm we were in fact sticking to
the itinerary. He nonchalantly confirmed that
we were going and closed with “and we’ll have
a coffee!” The ending to that conversation was
not convincing.
We headed out on our 10 minute morning
walk to the Larnaca Marina from our hotel and
found out that we were in fact not going to
the Pyla Caves but instead to dive the Zenobia
again because the winds had changed direction.
To cut a long story short, we had to at this
point say our thank-yous and make a break for
it as this was one diving experience we could
not end our memories on – least of all share
with our readers!
We went back to our hotel and immediately
called Simon Banks up and he opened an
invitation for us to get to his dive centre at our
earliest and he would take us to The Caves, a
lovely little dive site in Cape Greco.
The cheapest option to getting around and the
most fun, is to of course hire your own car!
We’ll skip the part about us walking out to
find a car dealer we were recommended and
couldn’t find (although hilarious, the people we
met along the way were incredibly friendly and
helpful) and just tell you about the part where our
lovely hotel receptionist organised a rental for
us in 2 minutes flat! Within 30 minutes, the car
was delivered to our hotel.
The Pyla Caves
Larnaca Marina
The AAK Larnaca Napa Sea Cruises boat
Windmills Hotel Apartments
A look at a one bedroom apartment’s kitchen and living room
A LITTLE BIT ABOUT OUR HOTEL:
The Livadhiotis City Hotel is situated just 100
metres from the famous Larnaca Seafront
(Phinikoudes Beach) and located in the heart
of Larnaca’s town centre. It is surrounded by
lots of great cafés, pubs and restaurants, only
10 minutes away from the Larnaca International
Airport and just a stone’s throw away from the
town’s main shopping and commercial centre.
The surrounding area is steeped in history with
the historical Saint Lazarus Church directly
opposite the hotel, while the Pierides Museum,
the Larnaca Marina, the Medieval Castle, and
the Larnaca Archaeological museum are all
within a short walking distance. It was a great
place to stay and the staff were brilliant.
So, we ended up on a road trip to Protaras, saw
some lovely scenery along the way and made
it over in a relaxed 45 minutes. The dive centre
is conveniently located at the Windmills Hotel
Apartments, a family owned and run complex
which offers studios for 2 or 3 persons or one
bedroom apartments for 2 or 4 persons with all
the amenities required for a comfortable stay.
Simon got us each sorted out with properly
fitting equipment in no time and we left the
dive centre and followed Simon’s pickup in
our little rental and he went out of his way
and stopped to show us a couple of beautiful
landmarks on our way to the dive site which
are great to see.
If you are not going to dive in Cyprus, there is
plenty to see and visit. They have a very rich
history and culture that is worth exploring and
finding out about.
We reached our lovely spot, parked the cars
and got ourselves ready for the walk down
to the water’s edge. Slightly tricky with steel
tanks and a little extra weight on our backs, plus
hauling the heavy camera and video equipment
while trying to keep our balance – but with
careful footing we made it down the rocky path
and did a backwards roll off the ledge into the
very clear blue water and descended beneath
the surface where a new world lay before us!
A fun fact we learnt: Believe it or not,
there are no tides or currents there.
The Caves is a really fun dive site and the
topography is beautiful and so different. It’s
an easy shallow dive with a maximum depth
of 12 metres consisting of holes, tunnels and
overhanging rocks. Photographers can have a
lot of fun here using diver models to add some
depth to their images.
Simon had seen a seal at this dive site a few
days before, but we unfortunately did not get
a visit. We did see a lovely little orange moray
eel and we saw our first Neptune’s Lace. Katie
Brooks, a marine biologist for The Manta Trust
came along as part of the EDA team and gives
a detailed description of the marine life we got
to see on our dives in Cyprus on page 74.
If it were not for the 5mm wetsuits, gloves and
booties, I don’t think I would have managed to
stay down as long as we did on our dives. It
was 16˚C at one point, which does take a little
getting used to. I highly recommend getting a
hoodie as most of the local/resident divers
were all (maybe not all of them…but most of
them) wearing one. I know I will be investing
in one for future dives in those temperatures.
As 3 divers sent out on a mission for this
latest EDA FAM trip, we have learnt to make
a turnaround out of something not so good,
into something so good you envisage coming
back to do it all over again, but this time with
the added value of good knowledge and
experience. We enjoyed our dives so much
and there are many more dive sites to explore.
That will be one to plan and look forward to
for another time.
Oh, and if you love to eat – as we sure do (you
won’t be able to get enough of the halloumi!)
– then Cyprus with the added bonus of good
food, diving and historical sites makes a great
long weekend destination. All the tastes and
sites of Cyprus are ever so close.
Emirates have a 4 hour direct flight from Dubai
to Larnaca!
Places We Recommend:
The hotel we stayed at in Larnaca:
Livadhiotis City Hotel
50 Nikolaou Rossou Street
P.O. Box 42800, 6021 Larnaca
Tel: +357 24 626 222
These are the restaurants we experienced and suggest
you ask them to give you a sample of their choice (food
is fresh, homemade and so incredibly sumptuous that you
always manage to find room in the bottom of your
stomach for one last bite):
MILITZIS RESTAURANT
(they offer a rich selection of genuine homemade Cypriot
dishes – they make their own delicious halloumi)
42, Piale Pasia Street, Larnaca
Tel: +357 24 655 867
CHARMERS RESTAURANT
(famous for their meat dishes, but we opted for fish not
knowing this little fact)
Piale Pasia Street, Lordos Seagate, Larnaca
Tel: +357 24 624 127
TARATSA
(opposite the beautiful St. Lazarus Church)
Corner of Mehmet Ali & Pavlou Valsamaki, Larnaca
Tel: +357 24 621 782
KARAS VILLAGE TAVERN
(known for their exquisite seafood dishes)
Kennedy Avenue, Kappari 55, Paralimni
Tel: +357 23 820 565
The dive centre we can recommend to dive with:
WINDMILLS DIVING SCHOOL
Simon Banks
128 Pernera Avenue 69, Protaras
Tel: +357 96 213 982
The dive centre accommodation:
WINDMILLS HOTEL APARTMENTS
Pernera Avenue 75, Protaras, P.O. Box 33075, Paralimni
Tel: +357 23 831 120
Thank you to the Larnaka Tourism Board in Cyprus for
arranging the FAM Trip itinerary and the guided tour on
our rest day to Agia Napa and thank you to the Cyprus
Tourism Board in Dubai for the overall invitation.
DIVING DESTINATIONS
THE SMALL ISLAND OF CYPRUS
and WHAT lies BENEATH its SURFACE
FEATURE KATIE BROOKS PHOTOGRAPHY SIMONE CAPRODOSSI
Neptune’s Lace, photo by Katie Lee
The small island of Cyprus lies in the eastern
corner of the Mediterranean Sea a far cry
from the shores of the UAE and an even
further cry from seas in which I have spent the
majority of my career as a marine biologist.
Nearly all of my scientific experience has been
in the tropics and the vast majority of that
in the Indian Ocean, so when I was asked to
join a trip with EDA to Cyprus to look at the
marine life, I jumped at the chance.
Being a marine biologist is probably one
of the most enjoyable jobs in the world,
but it encompasses so much more than
simply knowing your fish. Working as a
marine biologist also means working with
governments, fishermen, tourists and a
whole host of other stakeholders and
involves issues as wide ranging as fishing,
protection, enforcement, research, education,
communication, management and recreation
as well as knowing and understanding the
marine life of a particular area. So it was
with all this in mind that I travelled to the
Mediterranean to learn more about what it’s
really like in the waters of Cyprus!
The name Mediterranean is derived from the
Latin mediterraneus, meaning in the middle
(medius) of the land (terra) and even the
quickest of glances at a map confirms this
to be true. The sea is confined by Europe
in the north, Africa in the South and Asia to
the east with the only natural entrance to
the open ocean at the Straits of Gibraltar, a
narrow 14km wide passage to the Atlantic.
The Mediterranean is a ‘young’ sea geologically
speaking, having mostly filled just 5 million years
ago through the narrow channel at the Straits
of Gibraltar and it is due to this, that most of
the biota are primarily derived from Atlantic
species.
Historically the Mediterranean has played an
important role in the history of a number of
ancient civilisations and today some 21 states
have a coastline on the Mediterranean. It is
a major shipping route, especially since the
opening of the Suez Canal in 1869, linking
its waters to those of the Red Sea, meaning
ships can avoid passing around Africa. It is a
source of food for many and in addition has
an important role as a tourism centre – a lot
of pressure over a small area. So, how does
the Mediterranean hold up? And what is it
like to venture into and under these waters…
my trip to Cyprus afforded me some major
insights.
The first site we dived was a wreck called the
Zenobia, which is boasted by the many dive
operators around Larnaka, where it lays just
one and a half kilometres from shore, to be
amongst the top wrecks to dive in the world.
There’s no doubt that it is amazing and the
excellent visibility we experienced, common
throughout the Mediterranean, enhances your
dive as you can take in so much of the 178
metres of this wreck even at a glance.
In terms of marine life there is much to be
seen, but unlike other wrecks you might have
experienced, it is perhaps not quite as abundant
or as diverse as some of the other ‘top wreck
dives’ in the world. Wrecks are renowned for
their ability to create a habitat where there
would otherwise not be one, they provide a
hard substrate which many species need as
a holdfast to start populating an area. Having
been submerged for just over 30 years, life
has had time to infiltrate the Zenobia. The
outside surfaces of the ship, every spare inch,
are coated in a variety of species of seaweed,
algae and seagrasses including peacock’s tail
(Padina pavonica), common caulerpa (Caulerpa
prolifera) and creeping caulerpa (Caulerpa
racemosa var. occidentalis) a non native species
possibly originating from the Red Sea via the
Suez Canal or even Australia. Although these
algae cover her every surface, they don’t
obscure the outlines of the ship itself. Such a
habitat makes her the perfect site for the
two-banded bream (Diplodus vulgaris) who are
usually found in seagrass meadows and algae
covered rocks and are very unafraid of divers!
This bream, alongside the planktivorous
damsel fish (Chromis chromis) are by far
the most prolific and conspicuous species
you’ll see on the wreck. Looking beyond this
reveals a number of other species feeding and
busying themselves amongst the algae gardens
that are the Zenobia, including the ornate
wrasse (Thalassoma pavo), white bream
(Diplodus sargus sargus) and if you look very
closely, white tipped nudibranchs (Coryphella
pedata) and bearded fire worms (Hermodice
carunculata) who look unassuming, but are
active predators. They’re even able to cause
nasty burn-like skin irritations to divers if
they rub against the white tufts which line
their sides. There’s even the odd barracuda
(Sphyraena sphyraena) hanging in the blue
above the wreck.
Amongst the algae a variety of sponges have
also colonised the wreck, in particular the
black sponge (Ircinia spinosa) encrusts the
surfaces of the wreck, its colonies up to 20cm
in diameter. Less common species include
the yellow tube sponge (Verongia aerophoba),
this specimen with a small rockfish (Scorpaena
notata) hiding within it. A variety of tube worms
including the stunning spiral tube worm
(Spirographis spallanzani) and the white tufted
worm (Protula tubularia) also dot the surfaces
of the ship, picking plankton from the water
with their feathery arms.
Inside the ship, away from the sunlight
required for the green algae and seagrasses
to thrive, encrusting algae and bryozoans
coat the surfaces. Tube worms here too find
corners upon which to unfurl their feather-like
branches and large dusky groupers
(Epinephelus marginatus) gather to stalk out
their prey.
Amongst the species I have mentioned, a
few come from beyond the waters of the
Mediterranean and our next dive experience
was to go further in highlighting the issue of
invasive species in Mediterranean waters.
When the Suez Canal was opened in 1869, for
the first time the Red Sea and Mediterranean
waters were linked and as well as allowing the
movement of ships and cargo, it also enabled
the movement of species from the Red Sea
into the Mediterranean. These species are
known as invasive species and in the case
of the Suez Canal, the higher waters of the
Red Sea mean that water flows from there
into the Mediterranean. Invasive species can
often be more adapted to an environment
than their endemic counterparts and in this
case the Red Sea is saltier with less nutrients
than the Mediterranean which allows certain
species to slip into the same ecologic niches
which have been filled by endemic species
for millennia, using the same resources and
competing for space and food. Sometimes
they live alongside their existing counterparts
and in other instances they have caused major
problems. This isn’t an overnight issue, barriers
to migration do exist, but gradually over
decades certain physical barriers are lessened
and one at a time, species find their new
niches. A 2006 paper reported that 65 species
of fish had migrated to a new environment in
the Mediterranean from the Red Sea. Talking
to the divers who have been in Cyprus even
for as short a time as 6 years, it appeared that
new species were seen with each passing year
and that something as simple as a fish ID slate
could not be kept up to date.
Our second site, The Caves, was certainly
a real hotspot for these invasive species,
situated on the Cape Greco Peninsula. The
site was beautiful and allowed us to explore
the shoreline of this area and the caves just
below the water. The lunar-like rocky seascape
was coated in many of the same species of
algae and seagrass as the Zenobia, but from
the corner of my eye about 2 minutes into
the dive, I spotted a blue spotted cornet fish
(Fistularia commersonii) a broadly distributed
Indo-Pacific fish! Within the algal habitat I also
spotted many of the same species I had seen
on the Zenobia, including the bearded fire
worms which were even more abundant at
this site and a second species of nudibranch,
the purple nudibranch (Flabellina affinis), but
again, fish in large were absent.
Inside the caves, algae encrusted the surfaces
and bryozoans such as Neptune’s lace (Sertella
septentrionalis) coated the roofs. The cardinal
fish (Apogon imberbis) hovered around the cave
entrances and a school of Vanikoro sweepers
(Pempheris vanicolensis) first recorded in
the Mediterranean in 1991, lurked in the
shadows. There was even the odd soldier fish
(Sargocentron rubrum), another invader.
My time in Cyprus although short, revealed
that the Mediterranean is a sea, like most,
under pressure. Invasive species and
overfishing were the two most obvious threats
it faces, especially from a diving perspective,
with much talk amongst the divers that I
spoke with about the conflict between them
and fishermen. The Zenobia, for example, is
reputed to be a protected site although there
was much scepticism about whether or not
this was indeed the case amongst the local dive
community. Problems like this unfortunately
do not have simple solutions and it remains
to be seen what the future might hold for the
waters around this small island nation, on
which its population relies so heavily.
DIVING DESTINATIONS
Phuket – THAILAND
FEATURE AND PHOTOGRAPHY PHILIPPE LECOMTE
Thailand is well known for its food, massages,
nice people and the beautiful landscapes
depicting temples and rice fields.
But if you look on a map, Thailand has a lot of
sea coast on both sides of the peninsula with
some islands. The Gulf of Thailand (part of the
South China Sea) is in the east and the Andaman Sea is
in the west. Phuket is an island on the Andaman
Sea side that is linked to the peninsula by a
bridge. If you want to reach this island from the
UAE, you have to pass through Bangkok. There is a
flight to Phuket every hour from Bangkok.
Most of the dive sites are on the south of the
island. Karon Beach or Kata is a good place to
stay. Finding a dive club on the island is
very easy but you need to be careful of the
prices for a full day of diving. Ask if you will
have a guide from the shop with you or not. In
Phuket, all the dive clubs drive you to the pier
in Chalong Bay in order to get a bigger boat
for the day's diving. All the boats belong to
different companies and not to the dive clubs
themselves. On board there is a dive guide
that does not belong to your dive club.
DIVING DESTINATIONS
Phuket – THAILAND
feature and PHOTOGRAPHY philippe lecomte
Diving in Phuket is really nice and surprising
too. Racha Noi, Koh Racha Yai, Koh Doc Mai,
Anemone Garden, Shark Point or the wreck
dive, King Cruiser are just some of the beautiful
sites to discover.
Phuket is well known for its Leopard shark
population but unfortunately, like in most
Asian countries, these sharks are overfished;
they are, however, still very common around Shark
Point. There is also a chance of seeing blacktip
or bamboo sharks, which are common
around Phi Phi Island. Mantas can also be seen
on the extreme south of Racha Noi. Macro
photographers will have plenty to see too:
nudibranchs, ghost pipefish, clownfish,
anemone shrimps or even harlequin shrimps.
In the far west of Phuket, the amazing dive
sites of Koh Similan can be visited. You need
to stay over one night to thoroughly enjoy this
beautiful island. Whale sharks are common as
well as all the other big fish such as mantas,
sharks and schools of barracudas.
Phuket is not just about diving, resting on the
beach or drinking in the pub. Phuket has a lot to
offer all the family with its water park, elephant
rides, day trip to the James Bond Island and
much more. So why not try Thailand for your
next holiday destination!
76 DIVERS FOR THE ENVIRONMENT, JUNE 2012
WHYTECLIFF MARINE PARK: CANADA'S FIRST MARINE PROTECTED AREA
Feature and Photography Marc Anthony Viloria
in calm water conditions. The right side of the
bay, on the other hand, is a more popular
spot for recreational and student divers alike.
The shallow wall (5-10 meters) is covered
with tons of starfish, the most common being
the sunflower sea star. Not to be outdone,
macro photographers will enjoy finding
different species of nudibranchs such as
Acanthodoris nanaimoensis or Dendronotus
albus most of the time.
Around the corner of the starfish wall is the
artificial reef of boat fragments that commonly
houses greenlings, cod fish or groups of rock
crabs. Venturing west, you will be welcomed
by an astonishing view of the white plumose
garden. A few facts about the plumose anemone:
also called the frilled anemone, it has a wide base,
sometimes 12cm across. The column may grow
to a height of 50cm. The color of the cylinder and
the tentacles can be in shades of white, yellow,
orange and brown. Large specimens may have
a thousand tentacles. If you are patient enough
to hang around some plumose anemones,
you may find anemone shrimps that
camouflage themselves among the tentacles.
Further west, the bottom turns darker as
you approach the site called “The Cut”. This
spot is usually a challenge for technical divers
from different diving associations. You will find
some PADI sidemount divers or GUE (Global
Underwater Explorers) divers either training
or doing longer no-decompression dives. The Cut is also
accessible from the other side of the park but
requires strong legs to climb up or down a
steep hill and jumping across some driftwood.
If you have ever overstayed a dip in the ocean,
or dawdled too long in the bathtub, you
know that being in the water can be a chilling
experience. Water conducts heat away from
your body 20 times faster than air does, so you
cool much more rapidly in water.
That is why cold water diving is another
diving adventure to reckon with. Being in
Vancouver, British Columbia, a much colder
place compared to Dubai, didn’t stop me from
diving or teaching it. I first braved the waters
of the North Eastern Pacific in February 2012,
when the water temperature was 2˚C. Well, of
course you won’t be able to get away with
that freezing temperature without wearing a
dry suit and being trained to use one. Thanks go to
the International Diving Centre (.
com) for providing that training and the teaching
opportunities. I finally got my ticket to
teach the Dry Suit Specialty course last March.
British Columbia is an awesome scuba diving
and vacation destination, and Vancouver is
definitely a ‘world-class’ city. There are plenty
of dive sites in British Columbia; the more
famous and inviting ones are located on
Vancouver Island. However, we are blessed
to have several dive sites in Vancouver, at
Horseshoe Bay, which is part of the Strait of
Georgia. The most widely visited dive site is the
Whytecliff Marine Park which is only a 20
minute drive from downtown Vancouver.
Whytecliff Marine Park’s rugged shoreline
and cobble beach lie in West Vancouver's
Horseshoe Bay neighborhood. In 1993, the
municipal Whytecliff Park became Canada’s
first Marine Protected Area. Harvesting
or collecting any marine life beneath the
waters of this sanctuary is prohibited. 200+
marine animal species with exotic names
such as the speckled sanddab, the sunflower
seastar, Californian sea cucumber or plumose
anemone call these waters home, yet pay no
property taxes, despite living in Canada’s most
affluent community. Although the majority of
park visitors prefer gum boots over wet suits,
Whytecliff has become a magnet for divers.
As you make your way along the beach at
Whytecliff Marine Park, you’ll see wet-suited
figures emerge from the embankment and
make their way towards the ocean. Often
times, after a day at the office, scuba divers
complete their day with a little weightlessness
as they float off into the nether water world,
where temperatures matter little year-round,
provided you dress appropriately.
The bay is shaped like a half bowl and can go as
deep as 80-100 meters. As you look out from
the shore, the left side is a rocky breakwater
that leads out to nearby Whyte Islet. At low
tide you can clamber up its steep slopes and
find a sheltered spot beneath a lone shore
pine. Keep an eye on the progress of the
tide. It’s a cold swim back to shore! However,
Whyte Islet is one dive site worth
pondering because it houses a variety of not
so usual catch. If you are lucky, you may find
a resident giant octopus in a spot called the
crack, or a seal that swims and plays around.
Beside the beach, interpretive signs explain
in words and pictures the variety of marine
life to be found beneath the surface.
A modest list of rules of conduct, prominently
displayed in the parking lot, is directed primarily
at divers, who are encouraged to change in the
washrooms and to keep their language clean!
Finding your way to Whytecliff Marine Park is
rewarding in itself, as you will drive the scenic
route of Marine Drive.
HEALTH
Deep Thoughts. The Make-Up of Nitrogen Narcosis.
FEATURE Renée Duncan Westerfield
Photo by Pedro De Ureta
“I am personally quite receptive to nitrogen
rapture. I like it and fear it like doom…” wrote
Jacques Cousteau. The first documented
evidence of the narcotic effect of compressed
air at depth came in 1937, when two United
States Navy scientists, C.W. Shilling and W.W.
Willgrube, tested the effects of compressed air
on divers at depth. At 20 meters, it caused
“euphoria, retardation of the higher mental
processes and impaired neuromuscular
coordination.”
At 30 meters, the signs and symptoms became
more apparent. Divers experienced “a feeling
of stimulation, excitement and euphoria,
occasionally accompanied by laughter and
loquacity,” signs and symptoms similar to
those produced by alcohol.
Narcosis has hit other divers sooner, however,
as shown with Behnke and associates’
experiments, demonstrating that individuals
have varying levels of susceptibility. A recent
test in a Navy recompression chamber, for
example, showed a definite
alteration in thinking skills when
divers reached 10 meters.
Nitrogen narcosis has been called
“the martini effect,” or “Martini’s
Law,” because of its alcohol-like
effect, a feeling often compared
to drinking a martini on an empty
stomach: being slightly giddy,
woozy, a little off-balance. One
rule of thumb states that divers
should consider the narcotic
effect of one martini for every 15 meters of
depth. Nitrogen has a high affinity for lipids,
or fat. When nitrogen seeps into the fatty
structures of nerve cells, it interferes with the
transmission of nerve impulses, as Dr. Peter
Bennett showed in ground-breaking
experiments he conducted in 1966. Helium is
far less narcotic than nitrogen. Mixed
with oxygen and called heliox, this mixture is
less likely to impair deep divers, although they
still have to undergo decompression in order
to prevent decompression sickness (DCS).
Helium has its drawbacks too, and individual
sensitivities can vary from day to day.
The fact is that if you dive, you take the chance
of getting narked. The good news is that if you
do experience narcosis, the shallower you get,
the less you’ll feel the effects. And it doesn’t
take long at all for the effects to wear off once
you’re topside.
Before you dive, however, stop and take
stock of these suggestions:
• Watch your carbon dioxide levels.
Increased levels of CO2 can increase your
potential for narcosis. The working or
swimming diver wearing a breathing device
is more susceptible to narcosis than a diver
in a chamber. And the effect is synergistic:
that is, the effects can compound one another.
• Avoid alcohol. When you’re planning your
dive excursion, keep in mind that alcohol
augments the signs and symptoms of
narcosis. Why? “Because of similar (and
additive) effects to excess nitrogen,
alcohol should be avoided before any dive.
A reasonable recommendation is total
abstinence at least 24 hours before diving;
by that time effects of alcohol should be
gone,” advises dive physician Dr. Lawrence
Martin.
• Be rested when you dive. Refrain from hard
work and its resultant fatigue before and
immediately after your dives. Work and
fatigue can cause higher levels of CO2 in
the body, which results in metabolic effects
on the neurotransmitters in your brain.
• Be calm before you dive. Go well prepared
so you can look forward to your trip.
Anxiety increases your susceptibility to
narcosis. “The exact mechanism isn’t
known,” adds Dr. Peter Bennett, “but it has
an effect on the brain’s neurotransmitters,
in the same place anxiety operates.”
• Descend slowly on deep dives. Experiments
have shown that rapid compression
affects divers more severely than a slower
compression.
• Stay warm. Cold makes narcosis worse.
As with anxiety, the precise mechanism is
unknown, but cold can have analgesic and
anesthetic effects. These reactions in turn
can be synergistic, packing a greater-than-expected
punch.
If you, like our diving friend Mr. Zeimer, have
questions about nitrogen narcosis, contact the
DAN Medical Information Line.
Prevention of Malaria for SCUBA divers
FEATURE BARBARA KARIN VELA, MD
area until 4 weeks after leaving the area. The
side effects are mild, but you could experience
nausea, vomiting, diarrhea, allergy and rash
caused by exposure to the sun (the use of
sunscreens and hats is recommended). This
drug/medicine should not be used during
pregnancy or by children younger than 8 years
of age (SCUBA diving is not considered safe
for these two groups in any case). This is the
drug that is recommended by DAN (Divers
Alert Network) Southern Africa as the drug
of choice for protection against malaria for
divers in Sub Saharan Africa.
The second drug, which is available in the
UAE, is Malarone, a combination of atovaquone
and proguanil. Safety in diving has not been
confirmed, but many divers have used it
with no adverse effects. There are reports
of additional sensitivity to motion sickness. It
has 98% overall efficacy against malaria strains
that are resistant to other drugs. It is taken
daily, 1-2 days before arrival in the
malaria area, during the stay there, and for 7 days
after return. The most common side effects are
heartburn, mouth ulcers and headaches. It is
not considered safe in pregnancy, in patients
with kidney disease and in children
Although web services are designed to be language and platform neutral, the Java programming language is ideal for developing web services and applications that use web services. The portability and interoperability of applications written in the Java programming language mesh well with the objective of web service interoperability. A core set of Java technologies for web services is integrated into the Java 2 Platform, Enterprise Edition (J2EE) 1.4 platform. These technologies are designed for use with XML, and conform to web services standards such as SOAP, WSDL, and UDDI. You can take advantage of the technologies by developing and deploying web services and applications to the J2EE 1.4 platform. (In addition, the J2EE 1.4 platform also offers a wide variety of enterprise application features such as resource pooling and transaction management.) Implementations of the Java technologies for web services are available in the J2EE 1.4 SDK and Sun Java Application Server 8.1. The J2EE 1.4 SDK and Sun Java Application Server 8.1 are available as a single, "all-in-one" bundle, or available separately.
Supplementing J2EE 1.4 is the Java Web Services Developer Pack (Java WSDP) 1.5, which provides implementations of additional Java technologies for web services as well as updates to the web service technology implementations in the J2EE 1.4 SDK and Sun Java Application Server 8.1.
This section briefly describes the Java technologies for web services and their implementations in J2EE 1.4 and Java WSDP 1.5.
The Java technologies for web services in the J2EE 1.4 platform are:
Java API for XML Processing (JAXP) is a Java API for processing XML documents. Using JAXP, you can invoke a SAX or DOM parser in an application to parse an XML document. A parser is a program that scans the XML document and logically breaks it up into discrete pieces. It also checks that the content is well-formed. Some parsers also validate an XML document against an associated XML Document Type Definition (DTD) or XML schema. The parsed content is then made available to the application. Recall that XML has been generally adopted as the data language for web services. It's the language that's used in documents that are exchanged between clients and web services. So an XML parser, a program that essentially feeds the contents of an XML document to an application, is an important part of a web services-based SOA. The primary functional addition in JAXP 1.2 over previous JAXP releases is support for W3C XML Schema.
Simple API for XML (SAX) and Document Object Model (DOM) are parsing standards, and are the most frequently used approaches to parsing XML documents.
Here is a simple example that illustrates how JAXP is used to invoke a SAX parser:
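A minimal sketch of the sequence described below; the element-counting handler is illustrative, not prescribed by JAXP:

```java
import java.io.FileReader;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.DefaultHandler;

public class SaxExample {
    /** Parses an XML file with a validating, namespace-aware SAX parser
        and returns the number of elements encountered. */
    public static int countElements(String path) throws Exception {
        // Configure the factory: validation and namespace awareness turned on
        SAXParserFactory factory = SAXParserFactory.newInstance();
        factory.setValidating(true);
        factory.setNamespaceAware(true);

        // Use the factory to generate the SAX parser, then get its XMLReader
        SAXParser saxParser = factory.newSAXParser();
        XMLReader reader = saxParser.getXMLReader();

        final int[] count = {0};
        // The content handler's callback methods are notified of "events",
        // such as the start of an XML tag
        reader.setContentHandler(new DefaultHandler() {
            @Override
            public void startElement(String uri, String localName,
                                     String qName, Attributes attributes) {
                count[0]++;
            }
        });
        reader.parse(new InputSource(new FileReader(path)));
        return count[0];
    }
}
```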
First, an instance of the
SAXParserFactory class is used to generate an instance of a
SAXParser class. This is the SAX parser. Next, the
SAXParser class is used to get an XML reader that implements the
XMLReader interface. The parser must implement this interface. The parser (through the
XMLReader interface) reads the XML document. Notice that the
XMLReader implementation is also used to set a content handler that does any parsing-related processing. The content handler part of the application would define methods to be notified by the parser when the parser encounters something significant (in SAX terms, an "event") such as the start of an XML tag, or the text inside of a tag. These methods, known as callback methods, take any subsequent actions based on the event.
Also notice that the
SAXParserFactory can be configured to set various parser characteristics. In this example, validation and namespace awareness are turned on. This means that the parser will verify that the contents of the XML document conforms to the associated schema, and that the parser will be aware of namespaces.
Here is a simple example that illustrates how JAXP is used to invoke a DOM parser:
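A minimal sketch; reading the root element's name stands in for whatever traversal an application would actually do:

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class DomExample {
    /** Parses an XML file into a DOM tree and returns the root element name. */
    public static String rootName(String path) throws Exception {
        // As in SAX parsing, a factory creates the parser instance
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        DocumentBuilder builder = factory.newDocumentBuilder();

        // Unlike SAX, parse returns the entire document as a tree
        Document document = builder.parse(new File(path));

        // The application traverses the tree explicitly
        return document.getDocumentElement().getTagName();
    }
}
```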
As in SAX parsing, a factory is used to create an instance of a DOM parser. However, unlike SAX parsing, DOM parsing does not use a content handler or callback methods. The
parse method of the
DocumentBuilder instance returns a
Document object that represents the entire XML document as a tree. The application must then explicitly traverse the tree and process what it finds.
JAXP comes with its own parser. (The J2EE 1.4 SDK implementation of JAXP 1.2 uses Xerces 2 as the default parser.) However the API is designed to allow any parser to be plugged in and used (provided that the parser conforms to the API).
In addition to its features for invoking parsers, JAXP can be used to transform XML documents (for example, to HTML) in conformance with the XSL Transformations (XSLT) specification. The J2EE 1.4 SDK implementation of JAXP 1.2 includes an XSLT stylesheet compiler called XSLTC.
Java API for XML-Based RPC (JAX-RPC) is a Java API for accessing services through XML (SOAP-based) RPC calls. The API incorporates XML-based RPC functionality according to the SOAP 1.1 specification. JAX-RPC allows a Java-based client to call web service methods in a distributed environment, for example, where the client and the web service are on different systems. From an application developer's point of view, JAX-RPC provides a way to call a web service. From a service developer's point of view, it provides a way to make a web service available so that it can be called from an application. Although JAX-RPC is a Java API, it doesn't limit the client and the web service to both be deployed on a Java platform. A Java-based client can use JAX-RPC to make SOAP-based RPC calls to web service methods on a non-Java platform. A client on a non-Java platform can access methods in a JAX-RPC enabled web service on a Java platform.
JAX-RPC is designed to hide the complexity of SOAP. When you use JAX-RPC to make an RPC call, you don't explicitly code a SOAP message. Instead you code the call in the Java programming language, using the Java API. JAX-RPC converts the RPC call to a SOAP message and then transports the SOAP message to the server. JAX-RPC on the server converts the SOAP message and then calls the web service. Then the sequence is reversed. The web service returns the response. JAX-RPC on the server converts the response to a SOAP message, which is then transported back to the client. JAX-RPC on the client converts the SOAP message and then returns the response to the application.
To make a web service available to clients through JAX-RPC, you need to provide a JAX-RPC service endpoint definition. This involves defining two Java classes for each endpoint: one that defines the JAX-RPC service endpoint interface (which identifies the remote methods that can be called by the client), and the other that implements the interface. The JAX-RPC specification defines the mapping between the definition of a JAX-RPC service endpoint and a WSDL service description. In fact, all JAX-RPC implementations must be able to produce a WSDL document from a service endpoint definition. However the mapping also makes it possible for an implementation to do the reverse -- produce a JAX-RPC service endpoint definition from a WSDL document. The J2EE 1.4 SDK provides a mapping tool, called
wscompile. You can use a mapping tool such as
wscompile to generate a WSDL file from a JAX-RPC service endpoint definition, or a JAX-RPC service endpoint definition from a WSDL file. The latter ability is important for web services that are not on a Java platform.
Here's an example of a JAX-RPC service endpoint definition for a web service that returns stock quotes. First, here's the JAX-RPC service endpoint interface:
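A sketch of the interface, using the StockQuoteProvider/getLastTradePrice names the article itself refers to:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// A JAX-RPC service endpoint interface: it extends java.rmi.Remote, and
// every remotely callable method declares RemoteException
public interface StockQuoteProvider extends Remote {
    float getLastTradePrice(String tickerSymbol) throws RemoteException;
}
```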
Next, the implementation class:
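A sketch of the implementation class; the endpoint interface is repeated so the example is self-contained, and the returned price is a placeholder:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// The endpoint interface, repeated here for self-containment
interface StockQuoteProvider extends Remote {
    float getLastTradePrice(String tickerSymbol) throws RemoteException;
}

// The implementation class; a real service would look the price up
public class StockQuoteImpl implements StockQuoteProvider {
    public float getLastTradePrice(String tickerSymbol) throws RemoteException {
        return 4.25f;   // placeholder value
    }
}
```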
JAX-RPC provides a lot of flexibility in the way a client can invoke a method on a web service. The client can invoke the remote method through a local object called a stub (this is one of the artifacts generated by
wscompile). Alternatively, the client can use a dynamic proxy to invoke the method. Or the client can dynamically invoke the method using the JAX-RPC Dynamic Invocation Interface (DII). Stubs are used when a JAX-RPC client knows what method to call and how to call it (for example, what parameters to pass). A dynamic proxy is a class that dynamically supports service endpoints at runtime, without the need to generate stubs. DII gives a client a way to invoke a remote method dynamically, for example, when a client doesn't know the remote method name or its signature until run time.
Invoking a remote method through a stub is like invoking a remote method using Java Remote Method Invocation (RMI). As is the case for RMI, in JAX-RPC, a stub is designed to simplify remote method calls, that is, by making them appear like local method calls. A local stub object is used to represent a remote object. To make a remote method call, all a JAX-RPC client needs to do is make the method call on the local stub. The stub (using the underlying runtime environment) then formats the method call and directs it to the server -- this process is called marshalling. On the server, a class called a tie (also called a skeleton) unmarshals this information and makes the call on the remote object. The process is then reversed for returning information to the client.
Here, for example, is part of what the code might look like for a client class that uses a stub to invoke the
getLastTradePrice method in the stock quote web service:
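A sketch of such a client; the generated service and stub class names are assumptions (wscompile derives the real names from the WSDL), and the endpoint URL is illustrative:

```java
// Obtain the generated service, get the stub for the port, point it at
// the endpoint, then call the remote method as if it were local
StockQuoteService service = new StockQuoteService_Impl();
StockQuoteProvider_Stub stub =
        (StockQuoteProvider_Stub) service.getStockQuoteProviderPort();
stub._setProperty(javax.xml.rpc.Stub.ENDPOINT_ADDRESS_PROPERTY,
        "http://localhost:8080/stockquote/stock");

float price = stub.getLastTradePrice("SUNW");
System.out.println("Last trade price: " + price);
```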
SOAP with Attachments (SAAJ) is another API (that is, in addition to JAX-RPC) for creating and sending SOAP messages. In fact, SAAJ is used, "under the covers," by other Java for XML APIs, such as JAX-RPC and JAXR, to create and send SOAP messages. The SAAJ API 1.2 conforms to the SOAP 1.1 specification and the SOAP with Attachments specification. This means that you can use SAAJ to create and send SOAP message with or without attachments.
To create a SOAP message using SAAJ for sending to a web service, a client gets a connection to the service, creates a message, adds content to the message, and then adds attachments (if any).
Here's an example that illustrates the steps:
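A sketch of those steps using the SAAJ API; the operation name, namespace, and attachment content are assumptions:

```java
import javax.xml.soap.*;

// Get a connection to use for sending the message
SOAPConnectionFactory soapConnectionFactory = SOAPConnectionFactory.newInstance();
SOAPConnection connection = soapConnectionFactory.createConnection();

// Create a message
MessageFactory messageFactory = MessageFactory.newInstance();
SOAPMessage message = messageFactory.createMessage();

// Add content, part by part: part -> envelope -> body -> body element
SOAPPart soapPart = message.getSOAPPart();
SOAPEnvelope envelope = soapPart.getEnvelope();
SOAPBody body = envelope.getBody();
Name bodyName = envelope.createName("GetLastTradePrice", "m",
        "http://example.com/stocks");
SOAPBodyElement bodyElement = body.addBodyElement(bodyName);
bodyElement.addChildElement("symbol").addTextNode("SUNW");

// Add an attachment, if any
AttachmentPart attachment =
        message.createAttachmentPart("price history...", "text/plain");
message.addAttachmentPart(attachment);
```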
Notice that the client constructs the content part-by-part, following the SOAP message structure shown in the earlier discussion of SOAP.
After creating the message, the client can use SAAJ to send the message synchronously (and then wait for a reply), or asynchronously (and continue processing without waiting for a reply). Here's an example of sending the message synchronously:
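A sketch of the synchronous call, reusing the connection and message created in the earlier steps; the endpoint URL is an assumption:

```java
// call() blocks until the service replies
java.net.URL endpoint = new java.net.URL("http://example.com/stockquote");
SOAPMessage response = connection.call(message, endpoint);
connection.close();
```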
The web service then processes the message and returns a reply:
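A sketch of a service-side handler that builds the reply; the onMessage method name follows the JAXM listener convention, and the response element names are assumptions:

```java
import javax.xml.soap.*;

public class StockQuoteReceiver {
    // Receive the request message, build and return a reply message
    public SOAPMessage onMessage(SOAPMessage request) {
        try {
            SOAPMessage reply = MessageFactory.newInstance().createMessage();
            SOAPEnvelope envelope = reply.getSOAPPart().getEnvelope();
            Name name = envelope.createName("GetLastTradePriceResponse",
                    "m", "http://example.com/stocks");
            envelope.getBody().addBodyElement(name)
                    .addChildElement("Price").addTextNode("4.25");
            reply.saveChanges();
            return reply;
        } catch (SOAPException e) {
            throw new RuntimeException(e);
        }
    }
}
```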
Java API for XML Registries (JAXR) is a Java API that you can use to access standard registries such as those that conform to UDDI or ebXML. Using the API, you can register a service in a registry or discover services in a registry. JAXR 1.0 is compatible with UDDI 2 and the ebXML registry specifications. However the JAXR 1.0 implementation in the J2EE 1.4 SDK currently supports access to UDDI 2 registries only. Note though that support for ebXML registries is coming soon -- see The Future: Sun's SOA Initiative. A JAXR provider is an implementation of the JAXR API; the implementation of the JAXR API in the J2EE 1.4 SDK is a JAXR provider. A registry provider is an implementation of a registry specification, for instance, an actual UDDI registry.
Here's how the roles interact: a client uses a JAXR provider to access a registry, and the JAXR provider translates the client's JAXR API calls into requests that the target registry provider understands.
Here is part of an example that shows a client using the JAXR API to discover book store services (these are classified in the NAICS classification scheme as code number 451211) in a UDDI registry:
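A sketch of such a query; the registry URL is an assumption, and the NAICS scheme name follows the convention used by UDDI registries:

```java
import java.util.Collections;
import java.util.Properties;
import javax.xml.registry.*;
import javax.xml.registry.infomodel.*;

// Connect to the registry (inquiry URL is an assumption)
Properties props = new Properties();
props.setProperty("javax.xml.registry.queryManagerURL",
        "http://uddi.example.com/inquiry");
ConnectionFactory factory = ConnectionFactory.newInstance();
factory.setProperties(props);
Connection connection = factory.createConnection();

RegistryService rs = connection.getRegistryService();
BusinessQueryManager bqm = rs.getBusinessQueryManager();

// NAICS code 451211 classifies "Book Stores"
ClassificationScheme scheme =
        bqm.findClassificationSchemeByName(null, "ntis-gov:naics");
Classification classification = rs.getBusinessLifeCycleManager()
        .createClassification(scheme, "Book Stores", "451211");

// Find organizations carrying that classification
BulkResponse response = bqm.findOrganizations(
        null, null, Collections.singleton(classification), null, null, null);
for (Object o : response.getCollection()) {
    Organization org = (Organization) o;
    System.out.println(org.getName().getValue());
}
```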
Java WSDP is a package that provides early access to the latest technology implementations and tools for web services development. The most current version of the package, Java WSDP 1.5, includes implementations of a number of web services technologies that are not in J2EE 1.4, as well as newer releases of the web services technology implementations that are in the J2EE 1.4 SDK. The implementations of web services technologies that are not in J2EE 1.4 are JAXB, XWS Security, and XML Digital Signatures.
The newer releases of web services technology implementations that are in the J2EE 1.4 SDK are JAXP 1.2.6_01, JAX-RPC 1.1.2_01, SAAJ 1.2.1_01, and JAXR 1.0.7.
In addition to these web services technology implementations, Java WSDP 1.5 also provides an Early Access implementation of the Sun Java Streaming XML Parser (SJSXP), which implements the Streaming API for XML (StAX).
Java Architecture for XML Binding (JAXB) gives you a way of mapping an XML document into a set of Java classes and interfaces that are based on the document's XML schema. The value of doing that is that your application can work directly with Java content rather than working with XML content (as it would have to using an API such as JAXP).
The mapping is done in two steps. First you use a binding compiler provided with the JAXB implementation to bind the document's XML schema into a set of Java classes and interfaces. These classes and interfaces form the core of a binding framework. After compiling the classes and interfaces, you use methods in the binding framework to marshal the XML document into a tree of content objects. You can then access the data in the tree through other methods in the binding framework.
For example, suppose that you had an XML document,
books.xml, that was associated with a schema,
books.xsd.
Running the
books.xsd schema through the binding compiler generates a set of interfaces and a set of classes that implement the interfaces. Here is the interface the binding compiler generates for the
<Collection> element complex type. Here is the generated class that implements the complex type.
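A sketch of what the schema fragment and the generated interface might look like; the element and type names beyond Collection/CollectionType are assumptions:

```xml
<!-- Sketch of books.xsd -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="Collection">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:element name="books" type="BooksType"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
  <!-- ... definitions of BooksType, BookType, and so on ... -->
</xsd:schema>
```

```java
// Sketch of the kind of interface the binding compiler generates
// for the <Collection> element's complex type
public interface CollectionType {
    BooksType getBooks();
    void setBooks(BooksType value);
}
```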
To unmarshal an XML document, you:
- Create a JAXBContext object that provides the entry point to the JAXB API.
- Create an Unmarshaller object. This object contains methods that perform the actual unmarshalling.
- Call the unmarshal method to do the unmarshalling.
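The steps above can be sketched as follows; the context path "books" is an assumed package name for the schema-derived classes:

```java
import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;

// 1. Create a JAXBContext for the schema-derived package
JAXBContext jc = JAXBContext.newInstance("books");

// 2. Create an Unmarshaller
Unmarshaller u = jc.createUnmarshaller();

// 3. Unmarshal the document into a tree of content objects
Object collection = u.unmarshal(new File("books.xml"));
```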
After unmarshalling, your application can use
get methods in the schema-derived classes to access the XML data. Here, for example, is a program that unmarshals the data in the
books.xml file and then displays the data. Notice that the program includes the following statement:
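The statement in question, where u is the Unmarshaller created in the steps above:

```java
u.setValidating(true);
```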
This validates the source data against the associated schema.
Some other things you can do using JAXB are:
- Unmarshal XML content from sources other than a file, such as an InputStream object, a URL, or a DOM node.
- Marshal a content tree to targets such as an OutputStream object or a DOM node.
JAXB 1.0.4 supports a subset of the W3C XML Schema (some schema constructs, such as the identity construct, are not supported). In addition, JAXB also unofficially supports the OASIS RelaxNG schema language.
XML and Web Services Security (XWS Security) provides message-level security for applications that use JAX-RPC to access web services. Think of message-level security as a way of supplementing the transport-layer security provided by secure Internet transport protocols such as HTTPS. In message-level security, security information is contained in the SOAP message header. One of the primary uses of message-level security is to secure a SOAP message from unauthorized access at intermediate nodes along the message path. Recall that a SOAP message can pass through a set of intermediate nodes as it travels from a client to a service, and that each node can independently process part or all of the message before forwarding it. Using message-level security, you can do things like encrypt a SOAP message and permit decryption only by the target web service (that is, not by any of the intermediate nodes). For example, this could be used to protect credit card information from exposure until it's received by the target service -- perhaps a credit verification service from a credit card company. In addition, XWS security gives you the flexibility of securing different parts of a SOAP message and in different ways. Specifically, you can secure an entire service, one or more service ports, or one or more service operations. For instance, you can encrypt some parts of the message and not other parts, you can sign with a digital signature some parts of the message and not sign others. In addition to including encryption information, the SOAP message header might include other security-related items such as an X.509 certificate or a security token. The SOAP header can also point to a repository of security information for the message.
XWS Security 1.0 implements the following WS-Security standards: SOAP Message Security V1.0, Username Token Profile V1.0, and X.509 Token Profile V1.0 (each of these describes a particular aspect of SOAP message security). XWS Security also implements the following W3C security standards: XML Signature and XML Encryption.
To use XWS Security, you need to create security configuration files for the client and service. A security configuration file is an XML file that describes the security operations to be performed on the SOAP message. For example, here is part of a simple security configuration file for a client that applies an XML Digital Signature to a SOAP message:
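A sketch of such a file; the certificate alias and handler class name are assumptions based on the XWS-Security samples:

```xml
<xwss:JAXRPCSecurity xmlns:xwss="http://java.sun.com/xml/ns/xwss/config">
    <xwss:Service>
        <xwss:SecurityConfiguration>
            <xwss:Sign>
                <xwss:X509Token certificateAlias="xws-security-client"/>
            </xwss:Sign>
            <xwss:RequireSignature/>
        </xwss:SecurityConfiguration>
    </xwss:Service>
    <xwss:SecurityEnvironmentHandler>
        com.sun.xml.wss.sample.SecurityEnvironmentHandler
    </xwss:SecurityEnvironmentHandler>
</xwss:JAXRPCSecurity>
```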
- The <xwss:JAXRPCSecurity> element identifies this as a security configuration file.
- The <xwss:SecurityConfiguration> element specifies the security operations to be performed.
- The <xwss:Sign> element specifies that a digital signature will be applied.
- The <xwss:X509Token> element specifies an X.509 certificate token that indicates the key used for the digital signature. The digital signature refers to this token.
- The <xwss:RequireSignature> element specifies that the client expects the response it receives from the service to be signed.
- The <xwss:SecurityEnvironmentHandler> element specifies a CallbackHandler, a class that gets the security information (such as private keys and certificates) needed for the signing operation.
After the needed security configuration files are created, you invoke the
wscompile tool -- the same tool that you use to create a WSDL file and artifacts for a JAX-RPC-based web service. When you run the
wscompile tool, you specify the
-security option and identify a security configuration file. The tool then generates the artifacts needed for the security operations specified in the configuration file.
XML Digital Signatures v1.0 EA2 is an early access implementation of the Java Digital Signature API. You can use the API to sign the content of an XML document (actually, you can use the API to sign any binary data) in conformance with the W3C standard, XML-Signature Syntax and Processing, and also to validate the signature.
Here is an example that uses the Java Digital Signature API for signing an XML document. The example is adapted from a sample program provided in Java WSDP 1.5 for the Java Digital Signature API. The program uses an
XMLSignatureFactory to create the digital signature:
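A sketch of the factory and SignedInfo construction; the enveloped-transform Reference covers the whole document, as in the WSDP sample this section describes:

```java
import java.util.Collections;
import javax.xml.crypto.dsig.*;
import javax.xml.crypto.dsig.spec.C14NMethodParameterSpec;
import javax.xml.crypto.dsig.spec.TransformParameterSpec;

public class SignedInfoExample {
    public static SignedInfo buildSignedInfo() throws Exception {
        // The factory is the entry point to the XML Digital Signature API
        XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");

        // A Reference to the data being signed: "" means the whole document,
        // with an enveloped transform so the signature itself is excluded
        Reference ref = fac.newReference("",
                fac.newDigestMethod(DigestMethod.SHA1, null),
                Collections.singletonList(fac.newTransform(
                        Transform.ENVELOPED, (TransformParameterSpec) null)),
                null, null);

        // SignedInfo maps to the <SignedInfo> element of the XML signature
        return fac.newSignedInfo(
                fac.newCanonicalizationMethod(CanonicalizationMethod.INCLUSIVE,
                        (C14NMethodParameterSpec) null),
                fac.newSignatureMethod(SignatureMethod.DSA_SHA1, null),
                Collections.singletonList(ref));
    }
}
```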
It then creates objects that map to corresponding elements in the XML signature. For example the
SignedInfo object corresponds to a
<SignedInfo> element in the XML signature. The
<SignedInfo> element contains signature information and a reference to the data to be signed.
Next, the program creates a 512-bit DSA key and a
KeyInfo object that contains the public key to be used in decrypting the signature. Here are the statements that create the DSA key:
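A sketch of those statements, with the KeyInfo wrapping added for context:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Collections;
import javax.xml.crypto.dsig.XMLSignatureFactory;
import javax.xml.crypto.dsig.keyinfo.*;

public class KeyInfoExample {
    public static KeyInfo buildKeyInfo() throws Exception {
        // Create a 512-bit DSA key pair
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DSA");
        kpg.initialize(512);
        KeyPair kp = kpg.generateKeyPair();

        // Wrap the public key in a KeyInfo so the signature can be validated
        KeyInfoFactory kif =
                XMLSignatureFactory.getInstance("DOM").getKeyInfoFactory();
        KeyValue kv = kif.newKeyValue(kp.getPublic());
        return kif.newKeyInfo(Collections.singletonList(kv));
    }
}
```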
The program then creates an instance of the document, generates a digital signature for the document, and puts the digital signature in a file. Here are the statements that generate the digital signature:
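A sketch of the signing step, pulled together with the pieces built above; the PurchaseOrder document content is illustrative:

```java
import java.io.StringReader;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Collections;
import javax.xml.crypto.dsig.*;
import javax.xml.crypto.dsig.dom.DOMSignContext;
import javax.xml.crypto.dsig.keyinfo.*;
import javax.xml.crypto.dsig.spec.C14NMethodParameterSpec;
import javax.xml.crypto.dsig.spec.TransformParameterSpec;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class SignExample {
    public static Document signedDoc(String xml) throws Exception {
        XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
        Reference ref = fac.newReference("",
                fac.newDigestMethod(DigestMethod.SHA1, null),
                Collections.singletonList(fac.newTransform(
                        Transform.ENVELOPED, (TransformParameterSpec) null)),
                null, null);
        SignedInfo si = fac.newSignedInfo(
                fac.newCanonicalizationMethod(CanonicalizationMethod.INCLUSIVE,
                        (C14NMethodParameterSpec) null),
                fac.newSignatureMethod(SignatureMethod.DSA_SHA1, null),
                Collections.singletonList(ref));

        KeyPairGenerator kpg = KeyPairGenerator.getInstance("DSA");
        kpg.initialize(512);
        KeyPair kp = kpg.generateKeyPair();
        KeyInfoFactory kif = fac.getKeyInfoFactory();
        KeyInfo ki = kif.newKeyInfo(
                Collections.singletonList(kif.newKeyValue(kp.getPublic())));

        // Create an instance of the document to be signed
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));

        // Generate the enveloped signature and append it to the document
        DOMSignContext dsc =
                new DOMSignContext(kp.getPrivate(), doc.getDocumentElement());
        fac.newXMLSignature(si, ki).sign(dsc);
        return doc;
    }
}
```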
JAXP 1.2.6_01 is an updated reference implementation of JAXP 1.2. A number of enhancements have been made to the JAXP reference implementation since the release of the JAXP 1.2 implementation in the J2EE 1.4 SDK. These include new releases of the Xerces parser, and new security-related properties -- for example, you can specify that the parser not allow a specific DTD. The JAXP v1.2.6_01 implementation in Java WSDP 1.5 includes a new version of the Xerces parser (v2.6.2) and a new version of the Xalan transformer (v2.6.0).
JAX-RPC 1.1.2_01 is an updated reference implementation of JAX-RPC 1.1. The major enhancement in JAX-RPC 1.1.2_01 is support for WS-I Basic Profile 1.1. Recall that the WS-I Basic Profile identifies a core set of web services technologies, that when implemented in different platforms, helps ensure interoperability of web services across those platforms. WS-Basic Profile 1.1 adds a number of technologies to the core set, including SOAP Messages With Attachments and the part of the WSDL 1.1 specification that covers MIME bindings. MIME stands for Multipurpose Internet Mail Extensions. It's a standard that extends the format of Internet mail to allow for things like multipart message bodies. The SOAP Messages With Attachments specification takes a MIME approach in describing how to build a SOAP message.
Through this added support, a client can use JAX-RPC to access services through XML (SOAP-based) RPC calls (just as a client could previously), but include attachments in the call (JAX-RPC maps the attachments as method parameters). The primary difference is that the
<binding> element of the WSDL description needs to specify one or more MIME parts that correspond to the MIME parts of the SOAP message with attachments. For example, here is an example of a
<binding> element for an operation that adds a photo (provided as an attachment) to a photo catalog:
SAAJ 1.2.1_01 implements SAAJ 1.2. It does not provide any additional features beyond those in SAAJ 1.2 implementation in the J2EE 1.4 SDK, but does include a number of bug fixes.
JAXR 1.0.7 is an updated reference implementation of JAXR 1.0. The implementation allows you to set a number of properties on a JAXR connection. For example, you can specify a property that gives a hint to the JAXR provider about the authentication method to use. Or you can specify a property that sets the maximum number of rows to be returned by a UDDI provider. Most of these properties are specified by the client. For example, here is how a client would specify the authentication method hint to the JAXR provider:
Sun Java Streaming XML Parser Version (SJSXP) 1.0, is a high performance implementation of the Streaming API for XML (StAX). Java WSDP 1.5 includes an Early Access release of SJSXP. StAX is a "pull" API for XML. By comparison, SAX and DOM are "push" APIs. In other words, the SAX and DOM APIs read-in the XML data they encounter and provide it to the application. In the case of SAX, the API provides the data, piece-by-piece, to callback methods. In the case of DOM, the API reads in the entire document and makes it available as an in-memory tree for subsequent processing by the application. A "pull" API such as StAX, simply points to a specific item of data in an XML document. The application then processes the item, as appropriate. In implementing StAX, SJSXP provides an object, called a cursor, that moves sequentially through an XML document. Using methods provided by the cursor, an application can move the cursor, item-by-item, through the document, and determine what type of XML construct the item represents. The application can then process the item appropriately.
For example, here is a snippet of code adapted from one of the sample programs provided with SJSXP 1.0. The sample program uses SJSXP to parse an XML file:
First an
XMLInputFactory class is created for the input stream. The
setProperty method of the
XMLInputFactory class specifies properties for the parser. In the example, the property tells the parser to replace entity references it encounters. Various other parser properties can be set. The cursor's
hasNext() method determines if there's an item in the file (that is, beyond where the cursor is currently pointing). If there is another item, the
next() method moves the cursor to it. The
next() method returns an integer constant that indicates the type of XML construct that the cursor is pointing to. For example, the START_ELEMENT constant indicates the start of an XML element. The action taken in the example is to print the returned constant.
You have the option of filtering an XML document so that the parser doesn't have to parse the entire document. You can also use SJSXP to write an XML document.
[ <<BACK] [ TOP] [ NEXT>>] | http://www.oracle.com/technetwork/articles/javase/javatechs-139852.html | CC-MAIN-2014-52 | refinedweb | 4,141 | 55.44 |
Subject: Re: [Boost-build] Building OpenMP targets
From: Juraj Ivanèiæ (juraj.ivancic_at_[hidden])
Date: 2013-08-05 08:39:26
On 3.8.2013. 2:41, Pascal Germroth wrote:
> To prevent building the example on clang I could use
>
> exe example : example.cpp : <toolset>gcc ;
> exe example : example.cpp : <toolset>intel ;
You could also use
exe example : example.cpp : <toolset>clang:<build>no ;
> Is there a way to define/use OpenMP as a feature, so that my Jamfile
> wouldn't need to name every toolset but could just require OpenMP for a
> target, which would magically set the flag on a supporting compiler or
> ignore the target on an unsupported compiler? So that I could write:
Try this:
import feature ;
feature.feature openmp : yes : optional composite propagated ;
feature.compose <openmp>yes :
<toolset>gcc:<cxxflags>-fopenmp
<toolset>gcc:<linkflags>-fopenmp
<toolset>intel:<cxxflags>-openmp
<toolset>intel:<linkflags>-openmp
<toolset>clang:<build>no
<toolset>msvc: ...
...
;
project : requirements ... ;
exe example : example.cpp : <openmp>yes ;
> Supporting OpenMP in the global toolset definitions could benefit other
> projects, too, would this be hard to implement? Where could I look for a
> starting point?
If you want to add a support module for openmp have a look at how
jamfiles in the contrib directory are implemented.
Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/boost-build/2013/08/26896.php | CC-MAIN-2020-34 | refinedweb | 237 | 52.46 |
Because of the performance problems one may encounter with the pod function, yet another solution has been imagined, consisting in repeating a whole document.
From the section about statements, you know that a POD statement applies to a specific part of the document. In order to fulfill our need to repeat a whole document, a part named doc has been added, that represents it.
Let's use the same example as the one used to illustrate the pod function. Suppose you are a successful company, generating and sending a lot of invoices to its clients. From your accounting system, you need to generate one PDF per invoice, or also, sometimes, PDF files containing a bunch of invoices, ie, all invoices for a given period or client. All such PDF files can be produced using a single template, using a « do doc » statement, provided you pass the appropriate context to it.
Here is what such a template would look like.
I was so inspired when creating it. When you need to produce a single invoice from this template, define, in context variable invoices, a list containing this unique invoice.
class Invoice:
def __init__(self, number):
self.number = number
invoices = [Invoice('001')]
With this context, rendering the hereabove template produces this result.
When you need to produce a PDF containing a bunch of invoices, write down your most insightful invoice query, like in the following example.
class Invoice:
def __init__(self, number):
self.number = number
invoices = [Invoice('001'), Invoice('002'), Invoice('003')]
The result would then be this one.
Unreadable, indeed, but mentally readable, though.
The second statement in the hereabove template uses some concepts borrowed from other pages: | https://appyframe.work/189/view?page=main&nav=ref.19.pages.18.18&popup=False | CC-MAIN-2022-27 | refinedweb | 277 | 63.9 |
tcom -- Access COM objects from Tcl
The tcom package provides commands to access COM objects through IDispatch and IUnknown derived interfaces.
These commands return a handle representing a reference to a COM object through an interface pointer. The handle can be used as a Tcl command to invoke operations on the object. In practice, you should store the handle in a Tcl variable or pass it as an argument to another command.
References to COM objects are automatically released. If you store the handle in a local variable, the reference is released when execution leaves the variable's scope. If you store the handle in a global variable, you can release the reference by unsetting the variable, setting the variable to another value, or exiting the Tcl interpreter.
The createobject subcommand creates an instance of the object. The -inproc option requests the object be created in the same process. The -local option requests the object be created in another process on the local machine. The -remote option requests the object be created on a remote machine. The progID parameter is the programmatic identifier of the object class. Use the -clsid option if you want to specify the class using a class ID instead. The hostName parameter specifies the machine where you want to create the object instance.
The getactiveobject subcommand gets a reference to an already existing object.
This command returns a reference to a COM object from a file. The pathName parameter is the full path and name of the file containing the object.
This command compares the interface pointers represented by two handles for COM identity, returning 1 if the interface pointers refer to the same COM object, or 0 if not.
This command invokes a method on the object represented by the handle. The return value of the method is returned as a Tcl value. A Tcl error will be raised if the method returns a failure HRESULT code. Parameters with the [in] attribute are passed by value. For each parameter with the [out] or [in, out] attributes, pass the name of a Tcl variable as the argument. After the method returns, the variables will contain the output values. In some cases where tcom cannot get information about the object's interface, you may have to use the -method option to specify you want to invoke a method.
Use the -namedarg option to invoke a method with named arguments. This only works with objects that implement IDispatch. You specify arguments by passing name and value pairs.
This command gets or sets a property of the object represented by the handle. If you supply a value argument, this command sets the named property to the value, otherwise it returns the property value. For indexed properties, you must specify one or more index values. The command raises a Tcl error if you specify an invalid property name or if you try to set a value that cannot be converted to the property's type. In some cases where tcom cannot get information about the object's interface, you may have to use the -get or -set option to specify you want to get or set a property respectively.
This command implements a loop where the loop variable(s) take on values from a collection object represented by collectionHandle. In the simplest case, there is one loop variable, varname. The body argument is a Tcl script. For each element of the collection, the command assigns the contents of the element to varname, then calls the Tcl interpreter to execute body.
In the general case, there can be more than one loop variable. During each iteration of the loop, the variables of varlist are assigned consecutive elements from the collection. Each element is used exactly once. The total number of loop iterations is large enough to use up all the elements from the collection. On the last iteration, if the collection does not contain enough elements for each of the loop variables, empty values are used for the missing elements.
The break and continue statements may be invoked inside body, with the same effect as in the for command. The ::tcom::foreach command returns an empty string.
This command specifies a Tcl command that will be executed when events are received from an object. The command will be called with additional arguments: the event name and the event arguments. By default, the event interface is the default event source interface of the object's class. Use the eventIID parameter to specify the IID of another event interface. If an error occurs while executing the command then the bgerror mechanism is used to report the error.
This command tears down all event connections to the object that were set up by the ::tcom::bind command.
Objects that implement the IDispatch interface allow some method parameters to be optional. This command returns a token representing a missing optional argument. In practice, you would pass this token as a method argument in place of a missing optional argument.
This command returns a handle representing a description of the interface exposed by the object. The handle supports the following commands.
This command returns an interface identifier code.
This command returns a list of method descriptions for methods defined in the interface. Each method description is a list. The first element is the member ID. The second element is the return type. The third element is the method name. The fourth element is a list of parameter descriptions.
This command returns the interface's name.
This command returns a list of property descriptions for properties defined in the interface. Each property description is a list. The first element is the member ID. The second element is the property read/write mode. The third element is the property data type. The fourth element is the property name. If the property is an indexed property, there is a fifth element which is a list of parameter descriptions.
This command sets and retrieves options for the package..
This option sets the concurrency model, which can be apartmentthreaded or multithreaded. The default is apartmentthreaded. You must configure this option before performing any COM operations such as getting a reference to an object. After a COM operation has been done, changing this option has no effect.
Use the ::tcom::import command to convert type information from a type library into Tcl commands to access COM classes and interfaces. The typeLibrary argument specifies a type library file. By default, the commands are defined in a namespace named after the type library, but you may specify another namespace by supplying a namespace argument. This command returns the library name stored in the type library file.
For each class in the type library, ::tcom::import defines a Tcl command with the same name as the class. The class command creates an object of the class and returns a handle representing an interface pointer to the object. The command accepts an optional hostName argument to specify the machine where you want to create the object. You can use the returned handle to invoke methods and access properties of the object. In practice, you should store this handle in a Tcl variable or pass it as an argument to a Tcl command.
For each interface in the type library, ::tcom::import defines a Tcl command with the same name as the interface. The interface command queries the object represented by handle for an interface pointer to that specific interface. The command returns a handle representing the interface pointer. You can use the returned handle to invoke methods and access properties of the object. In practice, you should store this handle in a Tcl variable or pass it as an argument to a Tcl command.
The ::tcom::import command generates a Tcl array for each enumeration defined in the type library. The array name is the enumeration name. To get an enumerator value, use an enumerator name as an index into the array.
Each Tcl value has two representations. A Tcl value has a string representation and also has an internal representation that can be manipulated more efficiently. For example, a Tcl list is represented as an object that holds the list's string representation as well as an array of pointers to the objects for each list element. The two representations are a cache of each other and are computed lazily. That is, each representation is only computed when necessary, is computed from the other representation, and, once computed,. The internal representations built into Tcl include boolean, integer and floating point types.
When invoking COM object methods, tcom tries to convert each Tcl argument to the parameter type specified by the method interface. For example, if a method accepts an int parameter, tcom tries to convert the argument to that type. If the parameter type is a VARIANT, the conversion has an extra complication because a VARIANT is designed to hold many different data types. One approach might be to simply copy the Tcl value's string representation to a string in the VARIANT, and hope the method's implementation can correctly interpret the string, but this doesn't work in general because some implementations expect certain VARIANT types.
Tcom uses the Tcl value's internal representation type as a hint to choose the resulting VARIANT type.
Tcl value to VARIANT mapping
The internal representation of a Tcl value may become significant when it is passed to a VARIANT parameter of a method. For example, the standard interface for COM collections defines the Item method for getting an element by specifying an index. Many implementations of the method allow the index to be an integer value (usually based from 1) or a string key. If the index parameter is a VARIANT, you must account for the internal representation type of the Tcl argument passed to that parameter.
This command passes a string consisting of the single character "1" to the Item method. The method may return an error because it can't find an element with that string key.
In line 1, the for command sets the internal representation of $i to an int type as a side effect of evaluating the condition expression {$i <= $numElements}. The command in line 2 passes the integer value in $i to the Item method, which should succeed if the method can handle integer index values. | http://docs.activestate.com/activetcl/8.4/tcom/tcom.n.html | CC-MAIN-2016-44 | refinedweb | 1,726 | 55.84 |
This is a quick setup guide to develop C++ desktop application with Gtkmm toolkit library using Geany IDE on Ubuntu. In other words, you can start learning how to create cross-platform GUI application. You know, Gtkmm is a C++ binding to the C-based GTK+ library, and this library has been used to create great desktop apps such as Inkscape and MySQL Workbench (also GParted and GNOME System Monitor!). This tutorial is a continuation to Quick Setup of C, C++, and Java on Ubuntu. This works on other Debian-based distros as well such as Mint, Trisquel, and elementary OS (however, I used this last one to write this tutorial). Follow instructions below and happy coding!
Some Basic Knowledge
- GTK+ is a cross-platform, free library for C language. It's the library that builds GNOME.
- Gtkmm is not GTK+, Gtkmm is a C++ binding to GTK+, meaning, you can code with C++ by using the power of the C-based GTK+.
- Gtkmm is comparable to Qt Framework (library that builds KDE) as both are C++ libraries to create desktop GUI applications.
- gcc compiler is used to compile and build GTK+ application, while g++ compiler is used to build Gtkmm apps.
- Gtkmm uses C++ language so you have C++ benefits over C if you use it instead of GTK+. Read more here.
1. Install Needed Packages
Solve all the dependencies by this command alone:
$ sudo apt-get install libgtkmm-3.0-dev
2. Try First Example
I copied here an example from GNOME.org. You need to write the source code and compile it with g++ while linking it to gtkmm libraries. Just practice this.
1) Write the source code file.
Filename: base.cc
// copied from // this source code is from GNU FDLv1.2-or-later licensed documentation by Murray Cumming #include <gtkmm.h> int main(int argc, char *argv[]){ auto app = Gtk::Application::create(argc, argv, "org.gtkmm.examples.base"); Gtk::Window window; window.set_default_size(200, 200); return app->run(window); }
2) Compile the file and link it with gtkmm libraries.
$ g++ base.cc -o base `pkg-config gtkmm-3.0 --cflags --libs`
(This produces a single file named base without extension.)
3) Finally, run the executable file to show the application.
3) Finally, run the executable file to show the application.
$ ./base
See picture below. It's a basic application with a basic window without other items.
3. Try Second Example
(This introduces you how to compile each of multiple files, manually)
You will (1) create three distinct files, and then (2) compile two of them separately, and finally (3) link together the two into one final file. Once you understand this example, you can grasp the concept, and go to next level (using Makefile). Go through all these source code examples I copied from GNOME.org and follow the compiling instruction below.
1) Write these source code files and save them.
1) Write these source code files and save them.
Filename: helloworld.h
// copied from // this source code is from GNU FDLv1.2-or-later licensed documentation by Murray Cumming #ifndef GTKMM_EXAMPLE_HELLOWORLD_H #define GTKMM_EXAMPLE_HELLOWORLD_H #include <gtkmm/button.h> #include <gtkmm/window.h> class HelloWorld : public Gtk::Window { public: HelloWorld(); virtual ~HelloWorld(); protected: //Signal handlers: void on_button_clicked(); //Member widgets: Gtk::Button m_button; }; #endif // GTKMM_EXAMPLE_HELLOWORLD_H
Filename: helloworld.cc
// copied from // this source code is from GNU FDLv1.2-or-later licensed documentation by Murray Cumming #include "helloworld.h" #include <iostream> HelloWorld::HelloWorld() : m_button("Hello World") // creates a new button with label "Hello World". { // Sets the border width of the window. set_border_width(10); // When the button receives the "clicked" signal, it will call the // on_button_clicked() method defined below. m_button.signal_clicked().connect(sigc::mem_fun(*this, &HelloWorld::on_button_clicked)); // This packs the button into the Window (a container). add(m_button); // The final step is to display this newly created widget... m_button.show(); } HelloWorld::~HelloWorld() { } void HelloWorld::on_button_clicked() { std::cout << "Hello World" << std::endl; }
Filename: main.cc
// copied from // this source code is from GNU FDLv1.2-or-later licensed documentation by Murray Cumming #include "helloworld.h" #include <gtkmm/application.h> int main (int argc, char *argv[]) { auto app = Gtk::Application::create(argc, argv, "org.gtkmm.example"); HelloWorld helloworld; //Shows the window and returns when it is closed. return app->run(helloworld); }
2) Compile the two .cc files and then link those into one final executable.
Compile commands:
$ g++ -c -Wall helloworld.cc -o helloworld.o `pkg-config gtkmm-3.0 --cflags` $ g++ -c -Wall main.cc -o main.o `pkg-config gtkmm-3.0 --cflags` $ g++ helloworld.o main.o -o helloworld `pkg-config gtkmm-3.0 --libs`
3) Then again, finally, run the final file.
$ ./helloworld
It looks like this. Notice the small application with Hello World button. If I push the button five times, it prints out Hello World phrase five times.
4. Try Third Example (With Makefile)
(This introduces you how to compile multiple files, automatically)
1) Create a file with this name and below lines of code.
Filename: Makefile
# thanks to Dorku, I modified his Makefile from CC=g++ CFLAGS=-c -Wall SOURCES=helloworld.cc main.cc OBJECTS=$(SOURCES:.cc=.o) EXECUTABLE=helloworld all: $(SOURCES) $(EXECUTABLE) $(EXECUTABLE): $(OBJECTS) $(CC) $(OBJECTS) -o $@ `pkg-config gtkmm-3.0 --libs` .cc.o: $(CC) $(CFLAGS) $< -o $@ `pkg-config gtkmm-3.0 --cflags` clean: rm -rf *.o helloworld
(Note that Makefile accepts only one TAB not SPACES in each of three lines starting with a TAB above.)
2) Before building new ones, first, delete previously built files first. This is famous with make clean command.
$ make clean
3) Build your program with this command in the same directory with Makefile.
$ make
The result is the same as the previous example as this only automates the manual steps. Picture below showing the steps with final result after pushing the button ten times. See the command lines called by make command alone, they're the same as previously three commands.
5. Geany Builds
Now, after you have multiple source code files with the Makefile, using Geany is just as easy as pressing one button. Using the same example as Section 3 above, plus a Makefile, here's step by step to do build exactly the same as Section 4.
1) Press Make All (Shift+F9) so it compiles & links based on Makefile.
2) Press Run (F5) so the application runs over the terminal. See picture below, I push the button seven times and the same Hello World text showing seven times on the terminal. It works!
3) If Geany says "make: Nothing to be done for 'all'",
then it needs make clean command. Do that by pressing Make Custom Target (Shift+Ctrl+F9) > type clean > click OK > files removed.
1) Press Make All (Shift+F9) so it compiles & links based on Makefile.
(Building, see three command lines on the status bar? They're the same once again.)
2) Press Run (F5) so the application runs over the terminal. See picture below, I push the button seven times and the same Hello World text showing seven times on the terminal. It works!
(Final result, again, it works!)
3) If Geany says "make: Nothing to be done for 'all'",
then it needs make clean command. Do that by pressing Make Custom Target (Shift+Ctrl+F9) > type clean > click OK > files removed.
End Words
That's all. I hope with this small guide you can copy and build many code examples fom the net without problems. So, you can learn. Learn more about gtkmm on the site, tutorials and examples, docs and references, FAQ, and also recommended books to read. Last but not least, as I mentioned in the beginning, you can learn from the source code of really cool GNOME / Gtkmm applications such as GParted, Inkscape, K3D, Ardour 2, and MySQL Workbench. Go ahead, happy learning!
Unless otherwise noted, this article is licensed under CC BY-SA 3.0. | http://www.ubuntubuzz.com/2018/11/setup-cpp-gtkmm-programming-tools-on-ubuntu-for-beginners.html | CC-MAIN-2020-05 | refinedweb | 1,312 | 67.86 |
IRC log of ws-ra on 2011-02-01
Timestamps are in UTC.
20:24:11 [RRSAgent]
RRSAgent has joined #ws-ra
20:24:11 [RRSAgent]
logging to
20:24:13 [trackbot]
RRSAgent, make logs public
20:24:13 [Zakim]
Zakim has joined #ws-ra
20:24:15 [trackbot]
Zakim, this will be WSRA
20:24:15 [Zakim]
ok, trackbot, I see WS_WSRA()3:30PM already started
20:24:16 [trackbot]
Meeting: Web Services Resource Access Working Group Teleconference
20:24:16 [trackbot]
Date: 01 February 2011
20:27:07 [gpilz]
gpilz has joined #ws-ra
20:27:49 [Zakim]
+Bob_Freund
20:28:01 [Zakim]
+Gilbert_Pilz
20:28:29 [dug]
dug has joined #ws-ra
20:28:30 [li]
li has joined #ws-ra
20:28:48 [Dave]
scribenick:dave
20:29:16 [Zakim]
+ +1.908.696.aaaa
20:29:18 [Zakim]
+Doug_Davis
20:29:35 [li]
zakim, aaaa is li
20:29:35 [Zakim]
+li; got it
20:30:02 [Ram]
Ram has joined #ws-ra
20:30:15 [asoldano]
asoldano has joined #ws-ra
20:30:28 [Zakim]
+Yves
20:30:50 [Zakim]
+asoldano
20:31:55 [Zakim]
+Tom_Rutt
20:32:07 [gpilz]
~30F here - but I'm at 7,500 feet
20:32:08 [Ashok]
Ashok has joined #ws-ra
20:32:31 [Zakim]
+[Microsoft]
20:33:10 [Zakim]
+Ashok_Malhotra
20:33:27 [Tom_Rutt]
Tom_Rutt has joined #ws-ra
20:33:55 [Dave]
Topic: Agenda
20:34:23 [dug]
20:35:41 [Dave]
Agenda accepted.
20:35:42 [Ram]
q+
20:36:17 [BobF]
ack ram
20:36:39 [Dave]
Ram: Still working on some of the issues, but some are ok.
20:37:02 [Dave]
Bob: Which ones are resolvable?
20:37:27 [Dave]
We will do these as we come to them.
20:37:40 [Dave]
The minutes are accepted:
20:37:53 [Dave]
Topic: F2F Logistics
20:38:23 [Dave]
Gil was to look for a dinner spot.
20:38:49 [Dave]
No changes to the table of implementations
20:39:00 [Dave]
Topic: New Issues.
20:39:04 [dug]
+q
20:39:27 [dug]
20:39:38 [Dave]
11874: Accepted as a new issue.
20:40:06 [Dave]
Resolved: Issue 11874 as proposed.
20:40:21 [dug]
20:40:21 [Dave]
Topic: Issue 11882
20:40:24 [dug]
-q
20:40:31 [dug]
q+
20:41:09 [Dave]
Accepted as a new issue.
20:41:11 [BobF]
ack dug
20:41:24 [Dave]
Proposal to fix in the obvious way.
20:41:35 [Dave]
Resolved as proposed.
20:41:51 [Dave]
Topic: Issue 11894
20:41:52 [dug]
20:42:58 [Dave]
Issue accepted as a new issue.
20:43:14 [Dave]
People need time on this one.
20:43:23 [Dave]
Topic: Issue 11899
20:43:27 [dug]
20:43:36 [dug]
20:44:58 [Dave]
Accepted as new issue.
20:45:29 [Dave]
Resolved: As proposed.
20:45:39 [Dave]
Topic: Issue 11928
20:45:47 [dug]
20:46:58 [Dave]
Bob: Yves, will this be fixed by the publication process?
20:47:25 [Dave]
Yves: I will take care of it.
20:47:33 [Dave]
Resolved as proposed.
20:47:45 [Dave]
ACTION: Yves to fix as recommended.
20:47:45 [trackbot]
Created ACTION-174 - Fix as recommended. [on Yves Lafon - due 2011-02-08].
20:47:56 [gpilz]
q+
20:49:04 [gpilz]
20:49:40 [Dave]
Topic: Issue 11850
20:50:04 [Dave]
Gil: This has no semantic change. It is just clarification.
20:50:23 [dug]
+1 - its non-normative text that clarifies
20:50:57 [Dave]
Resolved: as proposed.
20:51:09 [Dave]
Topic: Issue 11790
20:51:10 [dug]
20:51:50 [Dave]
Dug: Is a QName a problem for the Dialect type?
20:52:23 [dug]
preferred solution: <mex:Dialect Type="{nsURI}localPart" ...
20:52:24 [Dave]
Dug: Some parsers don't hold onto namespaces for QNames long enough.
20:53:15 [Tom_Rutt]
q+
20:53:31 [Dave]
Dug: It looks like we might be stuck with existing parsers.
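Dug's point about parsers and QName-valued attributes can be seen with Python's standard-library ElementTree (used here only to illustrate the general parser behavior under discussion, not as part of any WS-RA proposal): prefixes in attribute values are not resolved, and the prefix-to-URI bindings are not kept on the parsed tree.

```python
import xml.etree.ElementTree as ET

# A Dialect-style attribute whose value is a prefixed QName
# (element and attribute names here are made up for illustration).
doc = '<d xmlns:x="http://example.com/ns" t="x:Local"/>'
root = ET.fromstring(doc)

# The parser handles namespaces in element and attribute *names*,
# but the attribute *value* comes back as the raw string "x:Local";
# the binding for prefix "x" is no longer available on the tree, so
# the application cannot reliably resolve the QName after parsing.
print(root.get("t"))  # -> x:Local
```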
20:53:39 [BobF]
ack tom
20:53:51 [gpilz]
q?
20:53:54 [gpilz]
q+
20:53:57 [gpilz]
q-
20:53:59 [gpilz]
q+
20:54:06 [Dave]
Tom: Do you mean schema changes?
20:54:24 [Dave]
Dug: Yes, but also definitions of what goes on the wire.
20:54:25 [BobF]
ack gp
20:55:36 [dug]
+q
20:55:47 [BobF]
ack dug
20:55:56 [Dave]
Dave: Says this approach works.
20:56:17 [Tom_Rutt]
q+
20:56:29 [BobF]
Note that dateTime is at risk
20:56:32 [Dave]
Dug: We could support both. Make it a string and test for the first character "{"
20:56:37 [gpilz]
q+
20:56:46 [BobF]
ack tom
20:57:20 [Dave]
Tom: Described how XML processors work, e.g. they need to do some context setting.
20:58:09 [dug]
its an attribute
20:58:10 [Dave]
Tom: In the XPath case there was no real context present. In this case we need to force it.
20:58:32 [Dave]
Tom: I don't like the both ways options.
20:58:40 [BobF]
ack gp
20:59:09 [Dave]
Gil: I like the "{" approach - because I am lazy.
20:59:27 [Dave]
Gil: Both is bad.
20:59:48 [dug]
+q
20:59:54 [dug]
<mex:Dialect Type="{nsURI}localPart"
20:59:57 [BobF]
ack dug
21:00:24 [Dave]
Bob: Can we drop both?
21:00:55 [Dave]
Ram: I don't know yet what the final picture is.
21:01:18 [Dave]
Ram: Directionally, the above makes sense, but I need to talk.
21:01:31 [asoldano]
I'm fine with single way
21:01:46 [gpilz]
q+
21:01:55 [Dave]
Bob: It sounds like a single way is the preferred approach.
21:02:04 [Tom_Rutt]
q+
21:02:12 [Dave]
Bob: Is there a common approach?
21:02:26 [Dave]
Gil: The "{" approach is reasonably common.
21:02:39 [Dave]
Dave: I have seen it too.
21:02:42 [BobF]
ack gp
21:02:45 [dug]
I'm pretty sure { isn't a valid char in a NS
21:03:05 [Dave]
Gil: Is it too much work to work out the type?
21:03:08 [asoldano]
that's a common way of doing a to-string conversion of NS
21:03:23 [BobF]
ack tom
21:03:24 [Dave]
Dug: I was only going to put it in the string.
21:03:38 [Dave]
Tom: This is an application level issue.
21:03:42 [gpilz]
q+
21:04:17 [BobF]
ack next
21:04:20 [Dave]
Tom: We define it the way we want.
21:04:40 [Ashok]
q+
21:04:50 [Dave]
Gil: Can we refine a string to enforce the format?
21:05:18 [BobF]
ack next
21:05:39 [Dave]
Ashok: You can do this with a pattern.
21:05:46 [dug]
q+
21:06:01 [BobF]
ack next
21:06:08 [Dave]
Ashok: E.g. { + characters + } + characters.
21:06:23 [Dave]
Ashok: I can help.
21:07:00 [Dave]
Dug: This doesn't help much, since schema validation is usually off.
21:07:19 [Dave]
Gil: With schema, the text is easier.
21:07:42 [dug]
Ashok if you can send me the xsd I'll make a more formal proposal
21:07:54 [gpilz]
my proposal for creating a simple type that defines "{namespace URI}local part" is about using schema as a spec documentation tool
21:08:28 [gpilz]
it's not so much about schema validation - it's just that a human reader can look at the schema and know exactly what is required
21:08:33 [Dave]
Resolution: The "{" approach looks like the direction. Please raise concerns ASAP.
21:08:44 [Ashok]
I need to do a bit of work to figure out how to write the pattern
21:09:22 [Dave]
Action: ashok will help Dug do this in schema.
21:09:23 [trackbot]
Created ACTION-175 - Will help Dug do this in schema. [on Ashok Malhotra - due 2011-02-08].
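The `{namespaceURI}localPart` string form under discussion is often called Clark notation; because `{` and `}` are not legal URI characters, the two halves can always be recovered unambiguously. A sketch of the split, illustrative only and not part of the group's proposal:

```python
def split_clark(name):
    """Split '{namespace-uri}local-part' into (uri, local-part).

    Returns an empty uri when no '{...}' prefix is present.
    """
    if name.startswith("{"):
        uri, _, local = name[1:].partition("}")
        return uri, local
    return "", name
```

For example, `split_clark("{http://example.org/ns}Dialect")` yields the pair `("http://example.org/ns", "Dialect")`.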
21:10:03 [Dave]
Issue 11865 needs a proposal
21:10:13 [Dave]
Topic: Issue 11766
21:10:14 [dug]
21:10:15 [dug]
21:11:20 [Dave]
Dug: The TX-Create was strange wrt empty representations. It seemed to imply support for empty was required.
21:11:53 [Dave]
Dug: The text in Put looked better, so the proposal is to apply this text to Create.
21:13:01 [Dave]
Dug: There were some other text changes as well, including support for a fault.
21:13:25 [Dave]
Dug: In Put there was a minor alignment change included.
21:13:57 [gpilz]
q+
21:14:10 [Dave]
Ram: This looks like the right direction.
21:14:11 [BobF]
ack gp
21:14:22 [Dave]
Gil: It looks OK, but for a nit.
21:15:01 [Dave]
Gil: The reference to schema validation isn't really needed.
21:15:51 [Dave]
Dug: I just want them consistent.
21:16:26 [Dave]
Tom: Drop the word Schema.
21:16:37 [BobF]
ack yve
21:16:51 [Dave]
Yves: Maybe we just Put.
21:17:11 [Dave]
Dug: It's not the semantics, but the text.
21:17:13 [gpilz]
If an implementation that validates the presented representation detects that the presented representation is invalid
21:17:44 [Dave]
Dug: Looks OK.
21:18:01 [gpilz]
If an implementation that validates the presented representation detects that that representation is invalid . . .
21:18:04 [Dave]
Gil: There just might be other ways to do this.
21:18:20 [Ram]
q+
21:18:27 [Dave]
Dug: I can update the proposal.
21:18:37 [BobF]
ack ram
21:19:04 [Dave]
Ram: Can empty still happen?
21:19:28 [Dave]
Dug: Yes, but only if the schema supports it.
21:20:16 [Dave]
Gil: So only some resources can do empty, but it is their choice.
21:20:32 [Tom_Rutt]
q+
21:20:45 [Dave]
Yves: The empty constructor was there to support a later Put to the resource.
21:20:53 [Dave]
Gil: There are other use cases.
21:21:12 [dug]
q+
21:21:16 [BobF]
ack tom
21:21:26 [Dave]
Gil: The object defines what empty constructor means.
21:21:45 [Dave]
Tom: What does empty constructor mean?
21:22:01 [Dave]
Gil: Various shades of nothingness.
21:22:33 [Dave]
Gil: Either use the default value, or actually empty.
21:23:11 [Dave]
Gil: If it can't do it, fault (emptiness not allowed).
21:23:42 [Dave]
Tom: How does the client know?
21:23:50 [Dave]
Gil: It's out of band.
21:24:05 [dug]
q+
21:24:13 [Dave]
Bob: There is some work to do on this.
21:24:29 [BobF]
ack dug
21:24:36 [Dave]
Dug: I can update based on this discussion.
21:25:21 [Dave]
Resolved: This is the right direction, e.g. do the right thing or fault.
21:25:32 [Dave]
Action: Dug to revise the proposal.
21:25:32 [trackbot]
Created ACTION-176 - Revise the proposal. [on Doug Davis - due 2011-02-08].
21:25:54 [Dave]
Bob: Can we make progress on these others?
21:26:24 [Dave]
Topic: Issue 11698
21:26:26 [dug]
21:26:43 [Dave]
Dug: This is editorial.
21:27:19 [Dave]
Dug: The wording is not clear. There are other examples in the specs that are already clearer in similar cases.
21:27:26 [Dave]
Ram: Need more time.
21:28:59 [Dave]
Topic: Issue 11703
21:29:09 [dug]
21:29:30 [gpilz]
q+
21:29:41 [Dave]
Dug: This is just editorial, using the text from enumeration.
21:29:50 [BobF]
ack gp
21:30:40 [Dave]
Gil: You support filter, you support the dialect I am using. Is this what I get back?
21:30:44 [Dave]
Dug: Yes.
21:32:10 [Dave]
Ram: There is a punctuation problem in the early part of the spec.
21:32:41 [Dave]
Dug: I can do a comma, but it's a different spec.
21:32:53 [Dave]
Bob: Just leave it.
21:33:16 [Dave]
Resolved: Issue as proposed.
21:33:51 [Dave]
Topic: Issue 11697
21:34:05 [dug]
21:34:13 [Dave]
Dug: Couldn't find a generic fault.
21:34:52 [Dave]
Dug: I lean to CWNA (close with no action).
21:35:37 [Dave]
There was a slight drift into fantasy land.
21:35:50 [Dave]
Ram: Isn't there a fault for this.
21:36:00 [Dave]
Bob: Please find one.
21:36:35 [Dave]
Bob: this appears to be outside of the protocol.
21:36:49 [Dave]
Resolved: Close with no action.
21:36:50 [dug]
q+
21:37:01 [BobF]
ack dug
21:37:09 [dug]
21:37:09 [Dave]
Topic: Issue 11723
21:37:29 [Dave]
Dug: This is the same as above. Close with no action.
21:37:58 [dug]
q+
21:37:58 [Dave]
Resolved: Close with no action.
21:38:27 [Dave]
Topic: Issue 11776
21:38:42 [Dave]
Bob: Proposal close with no action.
21:38:51 [Dave]
Bob: It's a breaking change.
21:39:38 [Tom_Rutt]
q+
21:39:38 [Dave]
Dug: The semantic stays the same, but the name changes.
21:39:50 [BobF]
ack dug
21:39:53 [BobF]
ack tom
21:40:15 [Dave]
Tom: In SCA this did happen. If there is other real important stuff, we will fix this too.
21:40:28 [Zakim]
-Tom_Rutt
21:40:30 [dug]
q+
21:40:34 [gpilz]
q+
21:40:47 [BobF]
ack dug
21:41:10 [Zakim]
+Tom_Rutt
21:41:12 [BobF]
ack gp
21:41:14 [Dave]
Dug: There is no real implementation history to protect.
21:41:46 [Dave]
Gil: The proposal isn't clear. What is changing, the qname?
21:42:17 [Dave]
Dug: The text is OK, but on the wire we send maxelements, but get back elements.
21:43:08 [Dave]
Bob: Defer if we do get breaking changes.
21:43:12 [Dave]
+1
21:43:43 [Dave]
Resolved: Defer this to later if there is another breaking proposal (that matters).
21:44:49 [Dave]
Topic: Issue 11723
21:44:56 [Dave]
Already closed.
21:45:06 [Dave]
Topic: Issue 11724
21:45:26 [Dave]
Ram: Needs more time.
21:45:35 [Dave]
Topic: Issue 11725
21:45:51 [Dave]
Dug: Close with no action.
21:46:05 [Dave]
Resolved: Close with no action.
21:46:19 [Dave]
Topic: Issue 11772
21:46:37 [Dave]
Dug: Straightforward.
21:47:19 [Dave]
Ram: will look into this.
21:47:30 [dug]
q+
21:47:36 [Dave]
Bob: Well done on the issues.
21:47:50 [Dave]
Topic: Testing Scenario
21:47:55 [BobF]
ack dug
21:48:21 [Dave]
Dug: The NS for the specs and the scenarios are different
21:48:36 [Dave]
Dug: Let the scenarios stay in the editor space.
21:48:55 [Dave]
Bob: Where would they stay long term?
21:49:14 [Dave]
Yves: They would stay in the group's namespace.
21:50:00 [Dave]
Bob: Put them in the scenarios directory within the wg.
21:50:07 [dug]
now:
21:50:55 [dug]
bob:
21:51:04 [dug]
21:51:14 [dug]
21:52:08 [Dave]
Bob: Yves is this OK?
21:52:12 [Dave]
Yves: Yes.
21:52:57 [Dave]
Dug: The scenario docs will start differently than the other specs.
21:53:05 [Dave]
Gil: This is OK.
21:53:06 [Yves]
if you used an entity for the ns, then it's easy to rebuild and adjust
21:53:20 [dug]
well, I need to move it in cvs too
21:53:23 [Dave]
Resolved: As proposed.
21:54:11 [Dave]
Nothing more on the scenarios.
21:54:11 [li]
q+
21:54:27 [BobF]
ack li
21:54:32 [Dave]
Li: When will these be stable?
21:54:39 [dug]
12:01pm the first day of the f2f
21:54:59 [Dave]
Dug: No, we still have some more work on eventing.
21:55:35 [Dave]
Dug: What is there is pretty stable, but we may add more feature tests.
21:56:10 [Dave]
Bob: You can write code now.
21:56:28 [Dave]
Li: Is there an actual freeze time?
21:56:45 [Dave]
Bob: Can we freeze until the F2F?
21:56:54 [Dave]
Bob: it will change at the F2F.
21:57:20 [Dave]
Bob: Is a week OK?
21:57:57 [Ram]
q+
21:58:06 [Dave]
Bob: This always happens.
21:58:27 [dug]
q+
21:59:01 [Dave]
Bob: Some freeze is needed, but will never be stable once we get to the F2F.
21:59:27 [Dave]
Bob: We still need to make as few changes as possible as we approach the F2F.
21:59:30 [gpilz]
q+
21:59:32 [Dave]
Ram: Agrees.
21:59:36 [BobF]
ack ram
22:00:10 [Dave]
Ram: Wants a freeze date if possible.
22:00:32 [BobF]
ack dug
22:01:27 [Dave]
Dug: I understand. I hope people are coding to the spec and not the scenario document. This is to test the spec, not the scenario.
22:01:30 [BobF]
Time gentlemen
22:02:13 [Dave]
Bob: We will test the spec. and the meeting has ended.
22:02:47 [Dave]
Bob: But we will try to be as stable as we can.
22:02:51 [dug]
@yves - I hope I said "NOT code for the test"
22:03:24 [Yves]
the second time, yes, not the first time :) (but it's late here)
22:03:44 [Zakim]
-li
22:03:44 [asoldano]
bye
22:03:45 [Zakim]
-Gilbert_Pilz
22:03:46 [Zakim]
-??P6
22:03:47 [Zakim]
-Yves
22:03:47 [Zakim]
-[Microsoft]
22:03:47 [Zakim]
-Doug_Davis
22:03:48 [Zakim]
-Bob_Freund
22:03:49 [Zakim]
-asoldano
22:03:51 [Zakim]
-Tom_Rutt
22:03:53 [BobF]
rrsagent, generate minutes
22:03:53 [RRSAgent]
I have made the request to generate
BobF
22:08:50 [Zakim]
disconnecting the lone participant, Ashok_Malhotra, in WS_WSRA()3:30PM
22:08:51 [Zakim]
WS_WSRA()3:30PM has ended
22:08:55 [Zakim]
Attendees were Bob_Freund, Gilbert_Pilz, +1.908.696.aaaa, Doug_Davis, li, Yves, asoldano, Tom_Rutt, [Microsoft], Ashok_Malhotra
22:17:03 [gpilz]
gpilz has left #ws-ra | http://www.w3.org/2011/02/01-ws-ra-irc | CC-MAIN-2016-18 | refinedweb | 3,074 | 82.75 |
Products.statusmessages 4.0
statusmessages provides an easy way of handling internationalized status messages managed via a BrowserRequest adapter storing status messages in client-side cookies.
Introduction
It is quite common to write status messages which should be shown to the user after some action. These messages of course should be internationalized. As these messages are normally defined in Python code, the common way to i18n-ize them in Zope is to use zope.i18n Messages. The approach taken by this module is to store the status messages inside a cookie. In version 1.x a server-side, session-like storage was used, but this turned out not to be friendly to the usual web caching strategies.
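The round trip at the heart of this design (serialize the pending (message, type) pairs into one cookie-safe string, decode them on the next request) can be sketched generically. This is not the package's actual wire format, which was hardened in 3.0.2 per the changelog:

```python
import base64
import json

def encode_messages(messages):
    """Pack a list of (text, type) pairs into one cookie-safe string."""
    raw = json.dumps(messages).encode("utf-8")
    return base64.urlsafe_b64encode(raw).decode("ascii")

def decode_messages(cookie_value):
    """Recover the (text, type) pairs from the cookie string."""
    raw = base64.urlsafe_b64decode(cookie_value.encode("ascii"))
    return [tuple(pair) for pair in json.loads(raw)]
```

Because the messages live in the cookie rather than on the server, any backend instance can render them, which is what makes the scheme friendly to caches and ZEO clusters.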
Changelog
4.0 - 2010-07-18
- Use the standard libraries doctest module. [hannosch]
4.0b1 - 2010-03-01
- Stopped the cookie from being expired if a redirect (301, 302) or not-modified (304) response is sent. This means that if you set a redirect and then (superfluously) render a template that would show the status message, you won't lose the message. [optilude]
4.0a2 - 2009-12-17
- Changed the default type of a new message from the empty string to info. [hannosch]
4.0a1 - 2009-12-17
- Simplified the interface to use simpler add/show method names while keeping backwards compatibility. [hannosch]
- More code simplification. Make the code itself independent of Zope2. [hannosch]
- Removed a five:implements statement, as the ZPublisher.HTTPRequest is always an IBrowserRequest in Zope 2.12. [hannosch]
- This version depends on Zope 2.12+. [hannosch]
- Package metadata cleanup. [hannosch]
- Declare package and test dependencies. [hannosch]
3.0.3 - 2007-11-24
- Use binascii.b2a_base64 instead of base64.encodestring; the former doesn't inject newlines every 76 characters, which makes it easier to strip just the last one (slightly faster). This fixes tickets #7323 and #7325. [mj]
3.0.2 - 2007-11-06
- Fixed encoding format for the cookie value. The former format imposed a serious security risk. The full security issue is tracked at:. This also fixes. [hannosch, witsch, mj]
3.0.1 - 2007-10-07
- Added the IAttributeAnnotatable interface assignment for the request to this package as well as the inclusion of the zope.annotation, as we rely on it. [hannosch]
3.0 - 2007-08-09
- No changes. [hannosch]
3.0rc1 - 2007-07-10
- Removed useless setup.cfg. [hannosch]
3.0b2 - 2007-03-23
- Fixed duplicate message bug. Showing identical messages to the end user more than once, doesn't make any sense. This closes. [hannosch]
- Added support for statusmessages without a redirect. This uses annotations on the request instead of direct values, so we avoid the possibility of sneaking those in via query strings. [tomster, hannosch]
3.0b1 - 2007-03-05
- Converted to a package in the Products namespace. [hannosch]
- Added explicit translation of statusmessages before storing them in the cookie. This makes sure we have a reasonable context to base the translation on. [hannosch]
- Changed license to BSD, to make it possible to include it as a dependency in Archetypes. [hannosch]
2.1 - 2006-10-25
- Updated test infrastructure, removed custom testrunner. [hannosch]
- Fixed deprecation warning for the zcml content directive. [hannosch]
2.0 - 2006-05-15
- Total reimplementation using cookies instead of a server-side in-memory storage to store status messages. The reasoning behind this change is that the former approach didn't play well with web caching strategies and added an additional burden in ZEO environments (having to use load-balancers, which are able to identify users and keep them connected to the same ZEO server). [hannosch]
1.1 - 2006-02-13
- Added tests for ThreadSafeDict. [hannosch]
- Fixed serious memory leak and did some code improvements. [hannosch, alecm]
1.0 - 2006-01-26
- Initial implementation [hannosch]
- Author: Hanno Schlichting
- Keywords: Zope CMF Plone status messages i18n
- License: BSD
- Package Index Owner: hannosch, optilude, wichert
- DOAP record: Products.statusmessages-4.0.xml | http://pypi.python.org/pypi/Products.statusmessages | crawl-003 | refinedweb | 671 | 59.4 |
I recently got fed up with an issue with the extremely lame Actiontec router that is built into every Qwest DSL modem. The issue basically has to do with how the NAT port forwarding was being performed. Packets sent to my external IP address from inside the local LAN were being dropped instead of forwarded….
Year: 2007
AddressAccessDeniedException: HTTP could not register URL<…>.
A while back, when I was first doing WCF development I ran into the following exception: AddressAccessDeniedException: HTTP could not register URL<…>. Your process does not have access rights to this namespace. The exception message included a link to an MSDN article that explained the concept of HTTP Namespace Reservations. Unfortunately the page suggests…
Getting Started with Microformats
[Updated: Fixed typo, “microformats are created on a whim” should have been “microformats are not created on a whim”] Attended a really interesting talk about Microformats from Tantek Çelik of technorati. I knew a little bit about microformats going in, so before I go into his presentation, let me fill my readers in. Microformats are a way of…
MIX 07 Keynote Highlights
Keynote video is up! (it says its a live stream, but its working as a non-live stream for me now) Just got out of the MIX 07 Keynote w/ Ray Ozzie and Scott Guthrie. The big announcement was the cross platform .Net framework that will ship with silverlight. Keith Smith has a good write up over…
I’m at MIX
I’m at the MIX 07 conference this week. I’ll be attending sessions, meeting customers, and talking to people about Microsoft’s services platform (not on stage). And I’ll be blogging about all of it here! So stay tuned. | https://blogs.msdn.microsoft.com/paulwh/?m=20074 | CC-MAIN-2016-50 | refinedweb | 285 | 63.29 |
Red Hat Bugzilla – Bug 240877
Review Request: archivemail - A tool for archiving and compressing old email in mailboxes
Last modified: 2007-11-30 17:12:05 EST
Spec URL:
SRPM URL:.
Good:
+ Naming seems ok.
+ License seems ok.
+ Tar ball matches with upstream
+ Rpmlint is quiet on source package
+ Rpmlint is quiet on binary package
+ Package contains verbatim copy of the license text
+ Mock build works fine.
Bad:
- Strange Build stanza:
You make a 'chmod 0644' on setup.py and test_archivemail.py, but you didn't call
any of these python scripts.
- Calling the test script shows the following error message:
python test_archivemail.py
The archivemail script needs to be called 'archivemail.py'
and should be in the current directory in order to be imported
and tested. Sorry.
I've chmod-ed those scripts because they're part of the upstream tarball but are
extraneous in an rpm install, and rpmlint complained about unnecessarily
executable docs. Should I provide a patch to test_archivemail.py so that it can
be used as-is after the rpm is installed?
Yes, this will be nice.
Hmm, my python-fu has been defeated. What if, instead, I put
test_archivemail.py in /usr/bin alongside archivemail and ln -s archivemail
archivemail.py? It makes test_archivemail.py happy.
How about installing the "archivemail" file as
/usr/share/archivemail/archivemail.py (chmod 644), then writing a small Python
wrapper script to import that module and run the main() method? Something like:
------------
#!/usr/bin/python
if __name__ == '__main__':
    import sys
    sys.path.insert(0, '/usr/share/archivemail')
    from archivemail import main
    main()
------------
Not only will this solve your test_archivemail.py problem (with _little_
tweaking), it doesn't clutter up /usr/bin and would actually give you a (very)
small performance boost as the Python interpreter will only parse and process
the text of your small startup script but will use the precompiled byte-code
.pyc file of archivemail, rather than having to re-parse this large file every
time it's executed.
Also, is it actually necessary to package the project's unit tests in the final
binary RPM? If I wanted that, I would go for the src.rpm, but maybe that's just
me...
Oops, I forgot to mention, of course the wrapper script should then be called
"archivemail" and installed to /usr/bin, chmod 755... :-)
That didn't work, it said it needed archivemail to be called archivemail.py.
When I renamed it that, it still failed, with a whole slew of errors.
Spec URL:
SRPM URL:
Maybe it'd be better just to put the test_archivemail in /usr/bin with
archivemail.py and be done with it?
(In reply to comment #7)
> That didn't work, it said it needed archivemail to be called archivemail.py.
> When I renamed it that, it still failed, with a whole slew of errors.
It works just fine.
Your "slew of errors" all have to do with the file location and execution
permission of the archivemail.py file. Comment #5 was just an _example_. You
still need to patch both archivemail.py (one-liner) and test_archive.py
(one-liner + simple sed substitution) to work with these new file locations and
permissions. I'll attach example patches shortly.
> Spec URL:
> SRPM URL:
>
> Maybe it'd be better just to put the test_archivemail in /usr/bin with
> archivemail.py and be done with.
Also, in general your average end-user isn't interested in unittests; I'm not
even convinced they should be shipped in the binary RPM... but putting them
under %docs at least gives interested people (aka developers) the option of using
them, I suppose.
(In reply to comment #8)
> It works just fine.
Ah, I misunderstood. I told you, my python is not nearly so developed as my PHP
or even my.
Agreed, and I was not proposing to use symlinks, I was proposing to move the
test_ script there directly.
> Also, in general your average end-user isn't interested in unittests; I'm not
> even convinced they should be shipped in the binary RPM... but putting them
> under %docs at least gives interested people (aka developers) the option of using
> them, I suppose.
Which is why I put it in %doc in the first place (which would still be my
preference). I was thinking I could put it in with a note detailing how a
user could get the test script to work, namely copy it and the main script to
the same place and run. Given the Python shipped in Fedora, it should be
working from the get-go once the rpm is installed. Hence my use of the word
'extraneous' in Comment #2.
Created attachment 155242 [details]
Example of proposed modifications to the archivemail package
Ok, I've created a tar.gz file containing an example archivemail startup
script, a patch for test_archivemail.py, and the spec file that ties all this
stuff together, heavily commented (based on your original spec file).
The resulting RPM install and runs fine. You can use the command "python
/usr/share/doc/archivemail-0.7.0/test_archivemail.py" to run the unit tests.
Mid-air collision! :-)
(In reply to comment #9)
> Agreed, and I was not proposing to use symlinks, I was proposing to move the
> test_ script there directly.
Fair enough, but I don't think it's necessary to expose the unittests to all
end-users; to put it bluntly: they don't do anything useful[1]
> Which is why I put it in %doc in the first place (which would still be my
> preference).
Mine too, in this case. :-)
> I was thinking I could put it in with a note detailing how a
> user could get the test script to work, namely copy it and the main script to
> the same place and run.
Yes, very good idea. But if you apply my proposed changes, no copying/renaming
is required, only a simple Readme.tests (or whatever) saying pretty much what I
said in the last paragraph of comment #10.
> Given the Python shipped in Fedora, it should be
> working from the get-go once the rpm is installed. Hence my use of the word
> 'extraneous' in Comment #2.
Yes. Btw, I also removed some unnecessary %doc entries (manifest, setup.py, etc) -
these are only required for application installation, which is RPM's job.
Jochen: By the way, are you reviewing this package? If so, please assign the bug
to yourself to avoid confusion (I noticed the flag when I was about to do
this... :-) )
[1] except if you are debugging/developing, which normal users of the app won't
be doing ;-)
Very nice. I added the readme and fixed a permissions issue with the new source
file. The only rpmlint warning is about the test script not being executable,
which is expected.
Spec URL:
SRPM URL:
Good:
+ Test script runs fine.
+ Local install and uninstall work fine.
Bad:
- Script in %{_docdir} contains a shebang line at the first line. This caused
the rpmlint message you referred to. So please remove this shebang line from the
script.
Complains about the shebang in /usr/share/archivemail/archivemail.py, fixed that
too.
Spec URL:
SRPM URL:
Good:
+ Rpmlint quiet on binary RPM.
*** APPROVED ***
New Package CVS Request
=======================
Package Name: archivemail
Short Description: A tool for archiving and compressing old email in mailboxes
Owners: limb@jcomserv.net
Branches: FC-5 FC-6 F-7
InitialCC:
Great, thanks for the review!
FYI, upstream bug report:
I have included this patch for the initial import. Introduces no problems.
cvs done
Imported and built.
Package Change Request
======================
Package Name: archivemail
New Branches: EL-4 EL-5
cvs done. | https://bugzilla.redhat.com/show_bug.cgi?id=240877 | CC-MAIN-2017-04 | refinedweb | 1,277 | 75.2 |
Searching for Names in all the Wrong Places
Zone Leader Tim Spann shows us how to use the Soundex library with Elasticsearch to find people's names based on phonetic similarities and variations (Jim vs. Jimmy, etc.).
Algorithms for Searching for People
As you can imagine, searching for people's names is not trivial. Besides the usual text issues of mixed case, you have name variations, nicknames, and the like. Phonetic searching is very interesting.
You want to find Catherine when you search for Katherine. And searching for James should find you Jim and Jimmy.
There are a number of algorithms and libraries to help you do this more advanced name-matching.
Soundex is the standard and is very commonly used. It's in most of the major databases and is reasonably good at finding matches. Soundex was created for the US census, so they had a pretty good test data set.
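Soundex is also compact enough to implement directly. The following Python sketch follows the standard rules (keep the first letter; map the remaining consonants to digits 1-6; collapse runs of the same digit, with H and W transparent; pad or truncate to four characters). It is an illustration, not the Apache Commons implementation:

```python
def soundex(name: str) -> str:
    """American Soundex: first letter plus up to three digits."""
    digits = {}
    for letters, d in (("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                       ("L", "4"), ("MN", "5"), ("R", "6")):
        for ch in letters:
            digits[ch] = d
    name = "".join(ch for ch in name.upper() if ch.isalpha())
    if not name:
        return ""
    code, prev = name[0], digits.get(name[0], "")
    for ch in name[1:]:
        d = digits.get(ch, "")
        if d and d != prev:
            code += d
        if ch not in "HW":      # H and W are "transparent": they do not
            prev = d            # break a run of identical digits
    return (code + "000")[:4]
```

With it, "Timothy" encodes to T530 while "Tim" and "Timmy" both encode to T500, matching the values quoted in the code below.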
For those lucky enough to have ElasticSearch, there are a number of heavy duty options. The examples I have been talking about have been related to US/English names—obviously there are other languages and countries that have their own algorithms that make more sense for them.
So how do I put this cool searching into practice? Here is a list of some common Java solutions:
import org.apache.commons.codec.language.Soundex;
import org.apache.commons.codec.language.Nysiis;
import org.apache.commons.codec.language.DoubleMetaphone;
// ...
Soundex soundex = new Soundex();
String soundexEncodedValue = soundex.encode("Timothy");
String soundexEncodedCompareValue = soundex.encode("Tim");
String s3 = soundex.encode("Timmy");
// Timothy = T530, Tim = T500, Timmy = T500
Nysiis n = new Nysiis();
// Timothy = TANATY, Tim = TAN, Timmy = TANY
DoubleMetaphone m = new DoubleMetaphone();
// Timothy = TM0, Tim = TM, Timmy = TM
Apache Commons Codec Soundex Javadoc
Soundex Algorithm at Princeton Java Class
Soundex in Oracle Database
Levenshtein Distance is another option or at least an enhancer.
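A distance like Levenshtein's pairs naturally with the phonetic codes, for example to rank candidates that fall into the same Soundex bucket. A standard dynamic-programming sketch in Python:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]                  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]
```

Here levenshtein("Katherine", "Catherine") is 1, so near-identical spellings can be ranked ahead of looser phonetic matches.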
A slightly better alternative is NYSIIS. NYSIIS is implemented in Java by the Apache Commons Codec library.
NYSIIS is also pretty simple to implement on your own.
Also, Metaphone is very good, and it too is in the Swiss army knife of text searching, Apache Commons Codec.
In my example code, for my name, it seems Double Metaphone is the best. For really advanced queries you may need to use multiple algorithms. Since Apache Commons Codec has them all and they all use the same encoding method, you should have no issues integrating this into your Java 8, Spring, Hadoop, or Spark code. It would be really easy to write a REST service that looks up names and similar names in Spring Boot with Apache Commons Codec running in a CloudFoundry instance.
W3C Releases Drafts For DOM L2 And More
TobiasSodergren writes "People at W3C seem to have had a busy Friday, according to their website. They have released no less than 4 working drafts (Web Ontology Language (OWL) Guide, the QA Working group - Introduction, Process and Operational Guidelines, Specification Guidelines) and 2 proposed recommendations: XML-Signature XPath Filter 2.0 and HTML DOM 2. Does this mean that one can expect browsers to behave in a predictable manner when playing around with HTML documents? Hope is the last thing to leave optimistic people, right?"
W3C standards getting out of hand (Score:3, Funny)
Re:W3C standards getting out of hand (Score:1)
Re:W3C standards getting out of hand (Score:4, Funny)
doesn't matter... (Score:3, Insightful)
Re:doesn't matter... (Score:3, Interesting)
Re:doesn't matter... (Score:5, Insightful)
I work in a student computer lab for a fairly large university, about 28,000 students. You wouldn't *believe* the problems I have to deal with because of stupid, and I stress stupid, professors using stuff like MSword/powerpoint for their class notes and webpages.
I'll give you a few examples. PowerPoint is the most common for posting class notes. All good and fine because thanks to OpenOffice even a Linux box can read PowerPoint slides just fine. The problem is printing them. Since we have only dot matrix printers (long story...), if the professor uses too weird a color scheme the slides don't print worth a damn, even with the 'print only black/white' option checked. Problem #1.
The bigger problem is when they use MS Word to post syllabi, notes, etc. Students have a problem viewing them at home for whatever reason (most likely they are using an old version of Word) and they have to come back to campus to look at this stuff. It is insane. I always direct them to install OpenOffice, but sometimes they might only have a modem so it isn't really an option. And if you talk to these professors about only posting stuff in MS Word they get defensive and say things like 'everyone uses it' and the like. Pointing out that just clicking 'save as rich text format' would cover 99% of the stuff they publish just doesn't work. Sigh. It is becoming a real problem. Same with webpages - what standards? Microsoft is a standard; I'm sure this would work fine if you would use a *Microsoft* browser, etc, etc.
Not that all professors are dumb; a lot use things like rich text format and try to stay away from Word, but a lot don't. It is a major headache for some students, and for me. And don't even get me started about how IE handles Word documents - it has the nasty tendency to embed them within the current frame, which causes havoc with printing, saving, etc - at least for your average student.
Seriously, more teachers need to be educated on things like open formats. For instance, it wouldn't be that hard to develop a campus-wide XML format and a nice little front-end for making syllabi, class notes, outlines, etc. available to all faculty. That way you could ensure that everyone had equal access to the documents instead of forcing students to use MS products.
Re:doesn't matter... (Score:1)
However, I really don't feel much sympathy for students, as I don't see professors using MS Office, or whatever else they like, as a problem. There is always the simple option of attending class and picking up the hardcopy when it is passed out. Indeed, many classes I have taken have no website at all, and it is your responsibility to attend class and get your information that way.
Also, all the universities I have seen do at least a passable job (and usually much better) of providing computer facilities in places like the main library. It is not hard to go to the library and print what you need.
If you want to mandate that professors all must use a given system for their websites, fine, but you'd better be prepared to make sure it works WELL and provide any and all necessary support for them to use it. Otherwise, they need to be allowed to use what they like.
Re:doesn't matter... (Score:1)
Re:doesn't matter... (Score:1)
Re:doesn't matter... (Score:2)
Re:doesn't matter... (Score:2)
Re:doesn't matter... (Score:1, Informative)
IE6 W3 support (Score:5, Interesting)
Lately I've been working on an app for a company's internal use, which means the delightful situation of being able to dictate minimum browser requirements. As a result, the app is designed for IE6/Mozilla. All development has been in Mozilla, and a lot of DOM use goes on. And it all works in IE6, no browser checking anywhere. My only regret is I can't make use of the more advanced selectors provided by CSS2, so the HTML has a few more class attributes than it would need otherwise. But, overall, not bad.
Another positive note, IE6 SP1 finally supports XHTML sent as text/xml. So at last, XHTML documents can be sent with the proper mime type [hixie.ch].
So despite being a Mozilla (Galeon) user, as a web developer who makes heavy use of modern standards, I look forward to seeing IE continue to catch up to Mozilla so that I can worry even less about browser-specific issues.
Re:IE6 W3 support (Score:2)
Ah, yes, the 'Not having to worry about browser-specific issues' notion. You haven't exactly been a web-dev long enough, have you? (:P)
This is _exactly_ what we thought HTML 3.2 would turn out to be....and look at how well that worked!
And anyway, if it isn't W3C standards, it's resolution, colors (although that's fixed now... sort of) etc.
Re:IE6 W3 support (Score:2)
Great, except XHTML is supposed to be served as application/xhtml+xml, which IE6 SP1 still wants to download rather than display.
I guess text/xml is one step closer, though.. assuming it works properly.
XHTML MIME types (Score:1)
Another positive note, IE6 SP1 finally supports XHTML sent as text/xml.
How did you get text/xml to work in IE? When I try it, I get a collapsible tree view of the document's source code.
Re:doesn't matter... (Score:1, Insightful)
Re:doesn't matter... (Score:2)
Also, while IE is the most popular browser, it's not the only one, and a not insignificant proportion of the population uses Mozilla, Opera, and other browsers. Somewhat hypocritical of me, since I'm currently using IE on my Windows partition, as opposed to Mozilla on my FreeBSD partition, but on purely technical merits, IE isn't really the best browser, and the optimist in me is convinced that the greater portion of the online population will eventually go for the better solution. On the other hand, if they don't, why should we worry about it? The proletariat can do as they please. So long as "MS HTML" doesn't somehow become entirely proprietary, we retain the ability to access it, plus we get to view properly-rendered pages. Whee.
Don't forget, either, that Microsoft actually is a member [w3.org] of the w3c. Microsoft can be accused of many things, but blatantly violating one's own standards is a rather stupid thing to do.
No. (Score:5, Insightful)
When there was 1 standard (HTML), browsers didn't behave predictably.
Now there are more, there is more scope for implementations to have their quirks, not less.
Standards are large and complicated descriptions of expected behaviour. Each implementor may have a slightly different interpretation. Different implementations will have their strengths and weaknesses which make different parts of the standard easier or harder to implement fully and/or correctly. There may even be reasons why an implementor may choose to ignore part of a standard (perhaps it is difficult and he believes that users don't want or need that functionality yet).
Unfortunately, standards are an ideal to aim for, not a description of reality.
C++ XML API (Score:4, Interesting)
Re:C++ XML API (Score:3, Informative)
Re:C++ XML API (Score:2)
"For portability, care has been taken to make minimal use of templates, no RTTI, no C++ namespaces and minimal use of #ifdefs."
The API is basically C with classes, uses XMLChar * instead of std::string, etc. I'm looking for something more along the lines of the Boost or Loki libraries in that they integrate cleanly with the STL.
Let me use JDOM and XML::Simple as examples. They both simplify the (IMHO too complex) DOM model, as well as fitting closely to the language. JDOM, for example, uses standard Java strings and containers, while XML::Simple uses Perl associative arrays.
Re:C++ XML API (Score:2)
Re:C++ XML API (Score:5, Informative)
Someone posted a neat little class to the expat mailing list ~2yrs ago. Basically it was just a Node class with an STL list for children and a hashmap for attributes. It was very small, clean, and was in essence a DOM. It used expat but trust me, the code was so tiny you could use any parser with it. It was like 200 lines of code.
I liked it so much I created the same thing in C called domnode [eskimo.com].
Search the expat archives [sourceforge.net]. Wish I could give you more to go on.
Re:C++ XML API (Score:3, Informative)
Re:C++ XML API (Score:2)
I completely agree about all the weird reinvent-the-wheel stuff that DOM and similar libraries contain: it would be so much better if they could use the STL in C++ and native data structures in other languages (nested lists in Lisp, etc etc). It's just that a basic function call interface is the lowest common denominator, so if you want the same library on every language you have to invent a whole new list and tree API. Perhaps this is an indication that the same library on every different language isn't such a good idea. (Think of the Mozilla debate: 'the same on every platform' versus 'native on every platform'. I have a feeling that in programming languages as well as GUIs the second choice is better.)
Re:XML API (Score:1)
Instead I implemented my own JDOM-like system that uses XPath to find nodes in a document using Xalan's [apache.org] XPath API. This gives me the flexibility of XPath and the usefulness of a DOM-like XML API. I was thinking of porting it to C++ for use at home.
Standards (Score:2, Interesting)
Perhaps it's time we stopped sitting on our thumbs and complaining about Microsoft ignoring standards. An outright ban of IE is needed, from workplaces, schools, etc. Sites should block access to people using IE. This is the only way we can get our rights to web standards back!
Seriously though, does anyone have any ideas on how we can take control of web standards away from MS ?
Something about reading Eolas thingie.. (Score:1)
I remember a slashdot link [slashdot.org] somewhere mentioning something about IE getting eliminated due to some sort of plugin junk?
Re:Something about reading Eolas thingie.. (Score:1)
Re:Something about reading Eolas thingie.. (Score:1)
Ugh...
Slashdot requires you to wait 2 minutes between each successful posting of a comment to allow everyone a fair chance at posting a comment. It's been 1 minute since you last successfully posted a comment Note: chances are, you're behind a firewall, or proxy, or clicked the Back button to accidentally reuse a form. We know about those kinds of errors. But if you think you shouldn't be getting this error, feel free to file a bug report, telling us: Your browser type Your userid "614145" set the Category to "Formkeys." Thank you.
Re:Standards (Score:2)
Y'know, in a perfect world, I'd wholeheartedly agree with you. Is it a perfect world? Hence, the diatribe.
Seriously though, does anyone have any ideas on how we can take control of web standards away from MS ?
Ooops, sorry. Cancel diatribe...
Sorry for the dose of reality.
Soko
Re:Standards (Score:1)
Why bother? Have you taken a look at these standards recently? They're huge and unwieldy. Perhaps a more attainable goal is to develop the next generation of browsers - a blank context for multimedia rendering as directed by the server-side script. Sort of a Shockwave Flash as a native platform.
Re:Standards (Score:4, Informative)
Somedays I'm more optimistic. Today's one of those days (tomorrow may not be 'cause I'm digging deeper into IE's weird-ass DOM than I usually care to). But...
Most web developers that have been around for a while would rather code to standards than to marketshare. Standards give you the promise of backward, and more importantly, forward, compatibility. It's also a helluva lot easier to sort out your code when a client asks for a redesign in a year or two if you've been conscious of more than just "making it look right" in the popular browser of the day.
Markup designed for IE only often does truly evil things on other platforms - there's going to be more cellphones and PDAs accessing web pages, not fewer. There are also serious organizational advantages to coding to standards - more tools for handling your pages, it's easier to whip up a quick perl script to process standards compliant HTML...the list of advantages is long.
Just like any other field, there's a trickle-down effect. Not everyone will write good, W3C compliant code, but more will, more often. And despite their megalithic, feudal mentality, Microsoft will have to pay attention. IE6 is still a long ways away from adhering to standards, but it's much, much closer than IE4 was. This seems to have been in large part a reaction to developers bitching about their lack of compliance. I'm hopeful the trend will continue.
Re:Standards (Score:3, Interesting)
My own homepage doesn't render in anything but Mozilla, currently, but small, personal sites aren't gonna break or make anything (unless they come in the millions, which is unlikely).
The people at Mozilla have provided us with a tool of 99% perfect rendering. Now it is up to the web site maintainers to actually enforce the use of Mozilla (or any other browser that fully adheres to standards; there is no other currently).
But Slashdot won't take this upon its shoulders, because it doesn't believe in standards, just like M$.
So M$ wins.
Re:Standards (Score:4, Informative)
Many sites can get away with this, but many cannot. If I'm selling a product on the web, I'll make darn sure that 99% of my customer's browsers work with my site. It's a good ideal to say "fix your IE bugs", but often not realistic.
Re:Standards (Score:2, Interesting)
That depends quite a lot on your definition of ALWAYS as it applies to Mozilla... considering Mozilla was originally based off the Netscape source code (though I realize it has now been virtually completely rewritten). People seem to forget that Netscape were the kings of non-standard HTML as an attempt to "lock in" customers. Hell, IE still to this day includes Mozilla in its user agent header to work around all the sites that would deny access to anything other than Netscape, back in the 2.0 era.
Re:Standards (Score:2)
At this I am very surprised. It's Microsoft's style to turn around and bite people in the ass when they have the upper hand. I wonder why MS hasn't "forced" Netscape only sites to change by updating their agent header?
No need - they have Passport (Score:2)
Changing headers is no use in that scenario
I just wish one little thing (Score:1)
Re:I just wish one little thing (Score:1)
Re:I just wish one little thing (Score:1)
Re:I just wish one little thing (Score:2)
There are some excellent accessible, standards compliant scripts now for creating trees / drop down menus from HTML nested lists - browsers without javascript see the list, while browsers with javascript get a nice expanding tree. Two examples:
Re:I just wish one little thing (Score:1)
Re:I just wish one little thing (Score:4, Informative)
If you got a problem with popup ads, then please download the Opera browser [opera.com]... you'll find F12 to be your best friend.
If you really want to crusade against something, then VB script is a better candidate or why not Outlook... the worst virus spreading software ever created.
Re:I just wish one little thing (Score:2)
That reminds me: since I do not use Outlook/Express for e-mail (I use Mozilla at work and Opera's stuff at home), I just set my address list to use public addresses @ microsoft.com. That way, if for some reason someone else in the family ignores one of the computer commandments and opens some virus in an attachment, it simply sends the crap to Microsoft and no one else.
Junk snail mail is also handled by removing the postage-paid self-addressed envelope and filling it with metal scraps before placing it in the mail (receivers are charged the postage) - make the spammers/virus enablers pay whenever you can.
client side scripting: good, JavaScript: bad (Score:2)
If JavaScript (by which I mean JavaScript, DOM, DHTML, etc.) were a simple, if limited, solution to those problems, it would be OK. But it isn't. It is much more complicated than technically better solutions, yet it still is extremely limited.
Simple and limited, and complex and powerful are both acceptable engineering tradeoffs. But complex and limited and buggy is a bad engineering tradeoff. And that's JavaScript.
Just because you don't feel the need .... (Score:1)
The banner rotation is via js so that the main page can be cached.
(but not annoying pop-up/unders - some of us realise they are a detraction).
Our banners don't link to any external sites.
The banner is part of the web frame of reference.
We have over 500 pages of content so I'm sure you'll excuse us our right to present deep links on our main page.
This is a troll, right? (Score:1)
Do you think javascript == popup windows? The open window call is abused, and I'd like to see the spec implement some kind of suggested behaviour along the lines of disregarding popups that aren't user activated (Mozilla already does a great job of this, but making it part of the spec would be superior), but to lose client-based scripting would be a blow to the usability of the Internet and the palette of web designers trying to make intelligent sites.
Client side form validation, adapting pages, and heck, even silly stuff like graphical rollovers which you can't do in CSS yet, are all things the Internet benefits from. Only an idiot would fail to anticipate how their page would work to users who don't have Javascript turned on, but it can make the experience run that much nicer and efficiently.
Not to mention that nuking Javascript, an open, standards-based, accessible language, will simply promote the use of obnoxious proprietary technology like Flash.
The W3C is a joke (Score:2, Insightful)
From what I hear about CSS3, it's going to be such a massive specification that no company (save Microsoft, if they actually gave a damn) would possibly be able to implement it.
What are we doing? The W3C puts out specifications that by the year become less and less relevant because their possible implementation date grows further and further remote. We'll see CSS3 arrive but will we ever see it in action? Or will it be supplanted by CSS4 and 5 which we will also never see? In the meantime we see developers actually building websites entirely out of Flash because there's one reference implementation (one version, period) and it just works. Is that the future we want?
It's time to hold these clowns accountable. Make them do some real work: make them create a working version of their spec. Make them brand one developer's work as a reference. Make them do something to prove that these standards are more than just empty clouds of words!
Re:The W3C is a joke (Score:3, Informative)
Re:The W3C is a joke (Score:2, Interesting)
Unfortunately, Mozilla does not support DOM 2 HTML in XHTML... and probably never will, because the bug assignee doesn't seem to care about this rather crucial bug.
Btw, DOM 0 is not a standard, but a collection of common garbage from the old days. It is supported in Mozilla only for backward compatibility, and people shouldn't use it in design. Mozilla explicitly does not support IE and NN4 only stuff such as document.all and document.layers.
Re:The W3C is a joke (Score:1)
Re:The W3C is a joke (Score:2)
Re:The W3C is a joke (Score:4, Informative)
You have to have standards. The W3C are the people who are widely recognized as being the technical lead for the net. Now they don't make law, quite right, but if there was no W3C then Microsoft really WOULD own the web: as it is, we can and do take them to task when they break the rules. They can ignore us of course, yet whaddaya know but IE6 supports DOM/CSS Level 1. Not a particularly impressive achievement, but it's a start.
The standards are actually very precise, which is one reason they are seen as being very large. There is hardly any room for interpretation in stuff like the DOM, CSS, XML etc. Of course, sometimes when the internal architecture of IE mandates it Microsoft simply ignore things, the mime-type issue being a good example, but also the fact that you have to specify node.className = "class" to set the style on a new element, as opposed to setting the class attribute (which works fine in Mozilla). Why? Because (according to an MS developer) internally the MS dom is based on object model attributes, so that's what you have to set.
[sigh] Yes. Mozilla supports DOM and CSS Level 2 and they have partial support for Level 3 now. Level 0 is the term used to refer to the pre-standardized technologies, it doesn't actually exist as a standard so EVERY browser that can script web pages has a level zero DOM. It should be noted that TBL himself has stepped in on occasion to tick off Microsoft about stuff like browser blocks, bad HTML etc.
From what I hear about CSS3, it's going to be such a massive specification that no company (save Microsoft, if they actually gave a damn) would possibly be able to implement it.
Then you hear wrong.
In the meantime we see developers actually building websites entirely out of Flash because there's one reference implementation (one version, period) and it just works. Is that the future we want?
Developers do not build web pages out of flash. Marketing departments do. Luckily most web pages are not built by marketing.
It's time to hold these clowns accountable. Make them do some real work: make them create a working version of their spec.
Poor troll. The W3C already implement all their standards, go to w3.org and download Amaya. Nobody uses it for actually browsing the web, but there it is, proof that an actually very small organization with very few coders can implement their standards.
Amaya *cough* *cough* :-) (Score:2)
Amaya [w3.org]
I'm all for standards, but they should have a basis in reality (read: working implementations) and not be some committee's idea of a good idea.
DOM not HTML (Score:3, Informative)
You seem to confuse the DOM with the HTML standard. The DOM does not enforce HTML document structure; it is just an OO representation of HTML and XHTML documents.
Re:DOM not HTML (Score:2)
DOM can be used to "play around" with HTML documents, after they have been loaded by the browser.
I seem to recall some web site using Javascript to expand and collapse discussion threads. Think it was kuro5hin [kuro5hin.org]. I'm not sure if it's using DOM to do that, but that is the sort of thing you can do with DOM.
huh? (Score:1, Funny)
what does that mean?
*squints*
I gotta get some sleep..........
Ohhhh... _DOM_. (Score:3, Funny)
Yeah, considering how long ago it was released, the draft for it would be just about due...
Re:Ohhhh... _DOM_. (Score:1, Funny)
Yea, bash MS some more... (Score:3, Flamebait)
How about an example from around the time of the Great Browser Holy Wars...
NETSCAPE ONLY TAGS - blink - layer - keygen - multicol - nolayer - server - spacer
INTERNET EXPLORER ONLY TAGS - bgsound - iframe - marquee
Hmm... looks like Netscape had more.
Look around you, proprietary "anything" is how you keep money coming in and marketshare up. If youre talking about some kind of open source, community developed code, like Mozilla, then yes, please avoid proprietary stuff. But quit bashing Microsoft just because they have a good browser that supports standards at least as well as their only major competitor and are using the same technique as just about every other capitalist on the planet to make more money and keep investors happy. Netscape sucked and deserved to die.
Now go ahead, mod me down because I stood up for MS.
Re:Yea, bash MS some more... (Score:1)
It was a choice of either a mod, or a comment. I like discussion better than point systems.
I tend to agree with you on the CSS. For example, in IE there is a CSS rule that allows me to do a hover color change WITHOUT using the seemingly more popular JavaScript code. I like it, it's a better design for sites in my opinion; Netscape (older versions) craps on it, though.
However, I don't really agree that Netscape sucked and deserved to die. Without it there would have been even less innovation. Even now, I use Opera over IE because of the ability to go to different and separate connections using a simple tab layout at the top of the screen, all contained in one program, whereas to do something similar in IE I have to open up half a dozen instances of Explorer.
Re:Yea, bash MS some more... (Score:1)
Re:Yea, bash MS some more... (Score:2)
No nice popup menus in other words
Re:Yea, bash MS some more... (Score:2, Insightful)
There are some sites that are absolutely committed to IE and use evil tech like VBScript. Mostly, sites are optimized to IE's idiosyncrasies. Since there's no W3C standard on rendering broken, non-compliant code, IE made it render a particular way while Netscape rendered it a particular way. With proper, compliant code, the pages look close enough or at least don't entirely die when you load them. And of all those non-compliant tools, I typically only see iframe, spacer, and bgsound being used.
But as IE market share grew, lazy/ignorant web designers (which includes Frontpage users) started to test only for IE. When MS destroyed Netscape, most web designers stopped testing for alternative browsers. So Microsoft indirectly caused mass W3C noncompliance.
I think the problem with your post is that you confuse standards with features. CSS support is a feature. An analogy: the DMV license application form in my state comes with a voter registration form attached. DMVs aren't required to attach forms; it's just an added bonus to promote voting. But, the voter registration form has to be standard. If my DMV office created a "SDMV voter registration form" that had extra questions like political ideology and sexual preference, any other DMV would wonder what the hell the DMV branch was thinking when they made the form.
It does seem that Mozilla is a lot more willing than the old Netscape and Opera to render broken, non-standard HTML pages, although IE will still render the mind-bogglingly broken things.
With Mozilla 1.1, I have seen _no_ pages that only work in IE ( excluding those using Evil MS Tech (tm) ), and a minority (usually made by non-professionals) that totally screw up the rendering.
Re:Yea, bash MS some more... (Score:4, Insightful)
Shock horror! Browser released in 1996 fails to support latest web standards!
If you want to bash Netscape, aim at Netscape 6 or 7 (both of which have superb standards compliance thanks to the Mozilla project). Netscape 4 simply isn't relevant any more, and hasn't been for several years. It's only big companies and institutions who don't want the hassle of upgrading their site-wide PCs that are keeping it alive, and with any luck even they will give it up soon.
Let's not forget JS, VBS & JSCRIPT (Score:2)
Hows about that for non-standard!
My first introduction to the DOM and scripting was building an IE4-based VBScript application for the Boots The Chemist intranet. That's about as non-standard as you can get. The VBS/JS step debugger in Visual Studio was useful if you could get it going.
These days there are few differences between the different javascript/dom implementations (getting XML documents without screen refreshes is unfortunately one of them *sigh*). My favoured route is to develop in Mozilla then test in IE. I've done a drag-and-drop HTML email editor that works in Moz, IE & Opera. The scope of Javascript doesn't really get exercised as far as I've seen round the web.
JS is a standard (Score:1)
Javascript was a Netscape invention. Hows about that for non-standard!
Was. Now it's an international standard, called "ECMAScript" [wikipedia.org] (ECMA-262) for the language and HTML DOM Level 2 for everything under document.
mozilla is run by netscape (Score:2)
Now go ahead, mod me down because I stood up for MS.
I wish I had mod points, I would mod you down. Not because you stood up for MS, but because I don't think you know what you're talking about.
Most of the work on Mozilla is done by Netscape employees. I would guess much more than 3/4 of the Mozilla code is written by AOL/Netscape'rs.
And as such, most of the kudos for mozilla's design and engineering accomplishments goes to the netscape engineer staff. There are a lot of very smart people in that group. I encourage anyone to try following mozilla's development for a while. Track a bug, follow discussions on #mozilla, etc. I don't agree on a lot of what the moz dev team does ( sometimes my opinion is they back-burner linux bugs for windows ), but I have a lot of respect for them. And you should too.
People say "netscape sucks", "mozilla rules" not realizing that mozilla today would be a much smaller project ( read not as many platforms, not as many standards ) if it weren't for the hardwork and dedication to open-source of AOL/Netscape
Does anyone ever... (Score:2, Insightful)
Standards can be made, don't expect that people will ever follow them.
-- AcquaCow
Re:Does anyone ever... (Score:1)
The maintenance factor should be of major importance to businesses... as it is, they have sloppy code that takes years to debug (font tags, inline proprietary javascript, both CSS and styled HTML, sniffer code and so on), and they have to maintain several versions for various browsers. Maintaining one standards-compliant version with style separated from content is so much more economically sane.
Re:Does anyone ever... (Score:1)
This has become a slightly longer rant than I wanted to write (esp. at near 4am), but I suppose my point was that sure, Netscape and IE are both rendering the HTML to standard, but they handle certain objects differently, forcing the coder (me) to adjust the site accordingly to kludge around those slight differences. Standards or not, there are still differences.
If we can come up with one solid independent rendering engine that is both fast and highly portable, use that in all browsers, I think we'd be set.
5 mins to 4 am... it's time for bed.
-- AcquaCow
Re:Does anyone ever... (Score:2)
Most Linux systems nowadays include nsgmls, but that command has so many obscure options and SGML prologues are hard to understand. There needs to be a single command 'html_validate' which runs nsgmls with all the necessary command-line options (and obscure environment variables, and DTD files stored in the right place) to validate an HTML document. If that existed then I'd run it every time before saving my document and I'm sure many others would too. But at the moment setting up nsgmls to do HTML validation (at least on Linux-Mandrake) is horribly complex. (Especially if you want to validate XML as well; you need to set environment variables differently between the two uses.)
Re:Does anyone ever... (Score:1)
You've got to be kidding me. You've never used the W3C validator? I couldn't live without that thing... [w3.org]
er, yes. (Score:3, Informative)
Is a great tool.
If your code is valid HTML then if anyone complains that their X browser doesn't render it properly that's your first point of defense.
Re:Does anyone ever... (Score:1, Interesting)
Re:Does anyone ever... (Score:2)
Yes, I do, all the time.
The current site I'm designing for gets about 35,000 visitors a day, and it's going to be XHTML 1.1 (served as application/xhtml+xml to accepting clients, no less) with a full CSS layout (with the XHTML being semantically rich so it's not required; no DIV/SPAN soup), and hopefully level AAA on the Web Content Accessibility Guidelines 1.0.
I do the same for tiny sites too; the latest being a site for a diving club.
I have noticed a trend towards larger sites redesigning for XHTML and CSS recently; what was the trend for personal sites seems now to be migrating up the hierarchy to larger sites such as Wired [wired.com] and AllTheWeb [alltheweb.com]. I don't expect this trend to reverse.
Re:Does anyone ever... (Score:2)
Yes, used sensibly to denote sections, since HTML provides no better way to mark them up yet.
There's nothing wrong with using DIV and SPAN, it's just when that's all you have that things get questionable.
Compare something like <div class="heading">My Heading</div> to <h2>My Heading</h2>. Now, which do you think has more semantic meaning and will degrade better?
With CSS, both can easily be made to render identically, but the second non-DIV-and-SPAN-soup version degrades much better. Unfortunately a worrying number of people seem to think the former method is what CSS is all about -- the default Movable Type [movabletype.org] templates are a good example of this brain damaged view of HTML
Eh? (Score:5, Funny)
Soon to be followed by the Acronym Formation Policy (FAP)?
Re:Eh? (Score:1, Insightful)
Maybe WOL really was right!
Re:Eh? (Score:1)
Not "Proposed" Recommendation anymore, it's final (Score:3, Informative)
XML-Signature XPath Filter 2.0 is a final W3C Recommendation, not proposed.
-m
Standards (Score:2)
the last hope of the doomed (Score:1)
Sorry... (Score:4, Informative)
One simple example: innerHTML. This 'property' is not part of ANY W3C draft, yet many, many websites use it because both IE and Mozilla (Netscape) support it.
Even though M$ is on the committee, their own browser still has plenty of features that are not defined in XHTML 1.0, DOM (level 2 or 3), CSS or whatever. And of course 99% of all web 'developers' are more than happy to use these features.
Re:Sorry... (Score:2)
look here for more info [developer-x.com]
DOM-2 irrelevant to cross-browser issues (Score:2, Informative)
As long as you do things strictly DOM-1 way, current browsers have been working pretty much predictably for quite some time. I develop sophisticated DHTML and test it in IE, Mozilla and Opera, and I never have a problem as long as I use only DOM methods (which can sometimes be quite limiting, but bearable overall).
A lot of people still do pre-DOM legacy DHTML because they have to make 4.x-compatible sites, but that's another story. DOM-2 may be more featureful, but it doesn't promise making cross-browser development any easier. It can make it harder indeed if not implemented accurately and timely among different browsers. Given a lesser incentive to implement it (DOM-1 is OK for most things), I find it quite possible.
W3C: stop now (Score:3, Interesting)
Of course, some client-side code is useful, but unfortunately, the major contenders have dropped the ball on that one. The W3C has given us JavaScript+DOM+CSS+..., but it's way too complicated for the vanishingly small amount of functionality, and nobody has managed to implement it correctly; in fact, I doubt anybody knows what a correct implementation would even mean. Flash has become ubiquitous, but it just isn't suitable for real GUI programming and is effectively proprietary. And Java could have been a contender, but Sun has chosen to keep it proprietary, and the once small and simple language has become unusably bloated.
But, hey, that means that there is an opportunity for better approaches to client-side programming. Curl might have been a candidate if it weren't for the ridiculous license. But someone outside the W3C will do something decent that catches on sooner or later.
Re:W3C: stop now (Score:1)
Should everyone just copy whatever Microsoft comes up with, because lets face it, they have the largest userbase? Somehow I don't see people here appreciating that.
I mean sure, you can say "wah wah, Microsoft didn't follow the standards, wah wah, Opera doesn't do this yet, this standards system is flawed!" but if there is no reference point for any of these things, how could you possibly expect things to improve?
One thing that's obvious is that these technologies are needed, not just silly ideas implemented by bored programmers, so if they're going to exist, then better an appropriate committee come up with workable drafts than a lone company goes ahead and does what they feel like. (heck that's one of the main reasons MS came up with so much funky spec breaking stuff - call it embrace and extend if you want, but they wanted to do things before the standards were there, which is why we have this mess)
Re:W3C: stop now (Score:4, Interesting)
Everybody is, for practical purposes. Who do you think is dreaming up a lot of the stuff that comes out of the W3C? Look at the authorships of the standards. And if you sit in those meetings, you'll quickly see that Microsoft doesn't often take "no" for an answer.
Microsoft has even told us why they like their standards to be complicated: they believe that if they just make it complicated enough, nobody else but them can implement them. Of course, Microsoft's reasoning is at the level of Wile E. Coyote, with Open Source being the Road Runner, but what can you do.
One thing that's obvious is that these technologies are needed,
We have a problem with creating dynamic web content, but the current crop of W3C standards for addressing that problem isn't working; it has turned into a Rube Goldberg contraption. Someone needs to start from scratch, and the W3C appears to be incapable of doing it.
If we don't have someone like the W3C putting this stuff in writing somewhere, how else are we going to have a hope in hell of browsers talking to each other?
Of course, things need to get written down and standardized. But the way standards are supposed to work is that people try things out in practice, whatever works well survives in the marketplace or among users, people create multiple implementations, then people get together and work out the differences among the implementations, then it all gets written up as a standard, and finally everybody goes back and makes their implementations standards compliant. It's a long, tedious process, but it does result in reasonable standards that real people can actually implement.
What the W3C is often doing is using its position to create completely unproven systems on paper and let the rest of the world figure out how to deal with it. Or, worse, the W3C is used by powerful companies to push through "standards" that haven't stood the test of time and for which only they themselves have a working implementation. If you give that kind of junk the stamp of approval of a standards body, you make things worse, not better.
Re:W3C: stop now (Score:2)
Huh? JavaScript is the Mozilla implementation of ECMAScript, a standard (not W3C) invented by Netscape. The DOM was also a Netscape idea, now standardized. CSS was originally proposed and largely designed by a guy from Opera. There are quite a few implementations out there actually, the idea that W3C technologies are too large to implement is crazy. Look at Mozilla, Amaya, even Konqueror is getting there now.....
The W3C should have stopped with a full specification of HTML. Anything they have been doing beyond that has been doing more harm than good. The web succeeded because HTML was simple.
Yes, and now it's ubiquitous; do you really think we need to keep it simple? Being simple was great when the web was small, it let it grow very quickly. Why should we keep it simple now? Just for the sake of it? I'd rather have power. If that means there are only 3 or 4 quality implementations as opposed to 20, then so be it.
The world is not a simple place, and the things we want to do with the web nowadays aren't simple either. If you want simplicity then feel free to write a web browser that only understands a subset of the standards, they are layered so people can do this. Just bear in mind that it won't be useful for browsing the web, because a lot of people like powerful technologies and use them.
Re:W3C: stop now (Score:2)
Yes, but the W3C gave it its blessing and built lots of other standards on it.
Why should we keep it simple now? Just for the sake of it? I'd rather have power. If that means there are only 3 or 4 quality implementations as opposed to 20, then so be it.
You are confusing complexity with power. The W3C standards are complex, but they aren't powerful. And that's the problem. Despite all the junk coming out of the W3C, it's still basically impossible to do reliable animations, drag-and-drop, document image display, editing, and other commonly desired things in web browsers.
I want to do complex things, but after 10 years, the W3C standards still don't support it.
The world is not a simple place, and the things we want to do with the web nowadays aren't simple either.
Yes, and the W3C fails to meet the needs of people who want to do complex things. All the W3C does is provide people with ever more complex ways of doing simple things. That is not progress.
If you want simplicity then feel free to write a web browser
More likely, there are new web browsers and plugins around the corner that build on HTML/XHTML, but come up with better ways of doing the other stuff. It's harder now than it was 10 years ago, when any kind of bogus idea could succeed, but it's still possible. Perhaps Curl could have succeeded in that area if they had open sourced it. But something else will come along.
Who needs W3C standards (Score:2)
Re:Who needs W3C standards (Score:2, Informative)
Now, if you were talking about SOAP...
It is not only Web development (Score:1, Insightful)
OWL is about information retrieval, and 'XML-Signature XPath Filter' is about document signing.
The DOM stuff is no longer only Dynamic HTML stuff. DOM is important because it is being actively used to manage XML documents, and previous specifications are very clumsy because they are a compromise between earlier browser-specific market standards.
Maybe there is a need to develop some simple DOM stuff from scratch instead of adding levels on top of a compromise approach. And again, as said above, provide a reference implementation to start with.
Vokimon
Ontologies (Score:2)
Re:Ontologies (Score:2)
I'm glad the W3 is there, pushing this, but I hope ontologies aren't just a web fad and can mature to a usable component of 'the semantic web'.
(Of course I also hope to one day get back to school or get a job that pays more than my expenses
Standards *DO* work. (Score:5, Insightful)
MSDN clearly marks out which functions are standard and which version of HTML/DOM they comply with.
Mozilla is almost de-facto compliant because that's the only thing they have to work from and they don't have an agenda like interoperation with MS Office/Front Page.
Standards compliance does work, it's the lazy/inept authors of web pages that are to blame for faulty product resulting from an ad-hoc approach to web page development.
Then again, like the saying goes: "A bad workman always blames his tools..."
- Interfaces
- Object Cloning
- Inner Classes
- Proxies
Inner Classes
An inner class is a class that is defined inside another class. Why would you want to do that? There are four reasons:
An object of an inner class can access the implementation of the object that created it, including data that would otherwise be private.
Inner classes can be hidden from other classes in the same package.
Anonymous inner classes are handy when you want to define callbacks on the fly.
Inner classes are very convenient when you are writing event-driven programs.
You will soon see examples that demonstrate the first three benefits. (For more information on the event model, please turn to Chapter 8.)
NOTE
C++ has nested classes. A nested class is contained inside the scope of the enclosing class. Here is a typical example: a linked list class defines a class to hold the links, and a class to define an iterator position.
class LinkedList
{
public:
   class Iterator // a nested class
   {
   public:
      void insert(int x);
      int erase();
      . . .
   };
   . . .
private:
   class Link // a nested class
   {
   public:
      Link* next;
      int data;
   };
   . . .
};
The nesting is a relationship between classes, not objects. A LinkedList object does not have subobjects of type Iterator or Link.
There are two benefits: name control and access control. Because the name Iterator is nested inside the LinkedList class, it is externally known as LinkedList::Iterator and cannot conflict with another class called Iterator. In Java, this benefit is not as important since Java packages give the same kind of name control. Note that the Link class is in the private part of the LinkedList class. It is completely hidden from all other code. For that reason, it is safe to make its data fields public. They can be accessed by the methods of the LinkedList class (which has a legitimate need to access them), and they are not visible elsewhere. In Java, this kind of control was not possible until inner classes were introduced.
However, Java inner classes have an additional capability that makes them richer and more useful than nested classes in C++: an object of an inner class holds an implicit reference to the outer class object that created it. You will see the details of the Java mechanism later in this chapter.
Only static inner classes do not have this added pointer. They are the Java analog to nested classes in C++.
Using an Inner Class to Access Object State
The syntax for inner classes is somewhat complex. For that reason, we will use a simple but somewhat artificial example to demonstrate the use of inner classes. We will write a program in which a timer controls a bank account. The timer's action listener object adds interest to the account once per second. However, we don't want to use public methods (such as deposit or withdraw) to manipulate the bank balance because anyone could call those public methods to modify the balance for other purposes. Instead, we will use an inner class whose methods can manipulate the bank balance directly.
Here is the outline of the BankAccount class:
class BankAccount
{
   public BankAccount(double initialBalance) { . . . }
   public void start(double rate) { . . . }

   private double balance;

   private class InterestAdder implements ActionListener // an inner class
   {
      . . .
   }
}
Note the InterestAdder class that is located inside the BankAccount class. This does not mean that every BankAccount has an InterestAdder instance field. Of course, we will construct objects of the inner class, but those objects aren't instance fields of the outer class. Instead, they will be local to the methods of the outer class.
The InterestAdder class is a private inner class inside BankAccount. This is a safety mechanism: since only BankAccount methods can generate InterestAdder objects, we don't have to worry about breaking encapsulation. Only inner classes can be private. Regular classes always have either package or public visibility.
The InterestAdder class has a constructor which sets the interest rate that should be applied at each step. Since this inner class implements the ActionListener interface, it also has an actionPerformed method. That method actually increments the account balance. Here is the inner class in more detail:
class BankAccount
{
   public BankAccount(double initialBalance)
   {
      balance = initialBalance;
   }
   . . .
   private double balance;

   private class InterestAdder implements ActionListener
   {
      public InterestAdder(double aRate)
      {
         rate = aRate;
      }

      public void actionPerformed(ActionEvent event)
      {
         . . .
      }

      private double rate;
   }
}
The start method of the BankAccount class constructs an InterestAdder object for the given interest rate, makes it the action listener for a timer, and starts the timer.
public void start(double rate)
{
   ActionListener adder = new InterestAdder(rate);
   Timer t = new Timer(1000, adder);
   t.start();
}
As a result, the actionPerformed method of the InterestAdder class will be called once per second. Now let's look inside this method more closely:
public void actionPerformed(ActionEvent event)
{
   double interest = balance * rate / 100;
   balance += interest;
   NumberFormat formatter = NumberFormat.getCurrencyInstance();
   System.out.println("balance=" + formatter.format(balance));
}
The name rate refers to the instance field of the InterestAdder class, which is not surprising. However, there is no balance field in the InterestAdder class. Instead, balance refers to the field of the BankAccount object that created this InterestAdder. This is quite innovative. Traditionally, a method could refer to the data fields of the object invoking the method. An inner class method gets to access both its own data fields and those of the outer object creating it.
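As a minimal, self-contained sketch of this access rule (the class and field names here are invented for illustration, not taken from the bank account example), an inner class method can read a private field of the outer object that created it:

```java
public class OuterAccessDemo
{
   private int secret = 42; // private to OuterAccessDemo

   private class Reader
   {
      // reads the outer object's private field directly
      int read() { return secret; }
   }

   public int peek() { return new Reader().read(); }

   public static void main(String[] args)
   {
      System.out.println(new OuterAccessDemo().peek()); // prints 42
   }
}
```

No getter for secret exists anywhere; the inner class alone can reach it.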
For this to work, of course, an object of an inner class always gets an implicit reference to the object that created it. (See Figure 6-3.)
Figure 6-3: An inner class object has a reference to an outer class object

Let us make the outer class reference explicit by writing the actionPerformed method as if that reference had the name outer:

public void actionPerformed(ActionEvent event)
{
   double interest = outer.balance * this.rate / 100; // "outer" isn't the actual name
   outer.balance += interest;
   NumberFormat formatter = NumberFormat.getCurrencyInstance();
   System.out.println("balance=" + formatter.format(outer.balance));
}
The outer class reference is set in the constructor. That is, the compiler adds a parameter to the constructor, generating code like this:
public InterestAdder(BankAccount account, double aRate)
{
   outer = account; // automatically generated code
   rate = aRate;
}
Again, please note, outer is not a Java keyword. We just use it to illustrate the mechanism involved in an inner class.
When an InterestAdder object is constructed in the start method, the compiler passes the this reference to the current bank account into the constructor:
ActionListener adder = new InterestAdder(this, rate); // automatically generated code
Example 6-4 shows the complete program that tests the inner class. Have another look at the access control. The Timer object requires an object of some class that implements the ActionListener interface. Had that class been a regular class, then it would have needed to access the bank balance through a public method. As a consequence, the BankAccount class would have to provide those methods to all classes, which it might have been reluctant to do. Using an inner class is an improvement. The InterestAdder inner class is able to access the bank balance, but no other class has the same privilege.
Example 6-4: InnerClassTest.java
import java.awt.event.*;
import java.text.*;
import javax.swing.*;

public class InnerClassTest
{
   public static void main(String[] args)
   {
      BankAccount account = new BankAccount(10000);
      account.start(10);

      // keep program running until user selects "Ok"
      JOptionPane.showMessageDialog(null, "Quit program?");
      System.exit(0);
   }
}

class BankAccount
{
   public BankAccount(double initialBalance)
   {
      balance = initialBalance;
   }

   /**
      Starts a timer that adds interest to the account.
      @param rate the interest rate in percent
   */
   public void start(double rate)
   {
      ActionListener adder = new InterestAdder(rate);
      Timer t = new Timer(1000, adder);
      t.start();
   }

   private double balance;

   /**
      This class adds the interest to the bank account.
      The actionPerformed method is called by the timer.
   */
   private class InterestAdder implements ActionListener
   {
      public InterestAdder(double aRate)
      {
         rate = aRate;
      }

      public void actionPerformed(ActionEvent event)
      {
         // update interest
         double interest = balance * rate / 100;
         balance += interest;

         // print out current balance
         NumberFormat formatter = NumberFormat.getCurrencyInstance();
         System.out.println("balance=" + formatter.format(balance));
      }

      private double rate;
   }
}

If you like, you can write the outer class reference explicitly: the expression OuterClass.this denotes it. For example, you can write the actionPerformed method of the InterestAdder inner class as
public void actionPerformed(ActionEvent event)
{
   double interest = BankAccount.this.balance * this.rate / 100;
   BankAccount.this.balance += interest;
   . . .
}
Conversely, you can write the inner object constructor more explicitly, using the syntax:
outerObject.new InnerClass(construction parameters)
For example,
ActionListener adder = this.new InterestAdder(rate);
Here, the outer class reference of the newly constructed InterestAdder object is set to the this reference of the method that creates the inner class object. This is the most common case. As always, the this. qualifier is redundant. However, it is also possible to set the outer class reference to another object by explicitly naming it. For example, if InterestAdder was a public inner class, you could construct an InterestAdder for any bank account:
BankAccount mySavings = new BankAccount(10000);
BankAccount.InterestAdder adder = mySavings.new InterestAdder(10);
Note that you refer to an inner class as
OuterClass.InnerClass
when it occurs outside the scope of the outer class. For example, if InterestAdder had been a public class, you could have referred to it as BankAccount.InterestAdder elsewhere in your program.
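To see both pieces of syntax together (the OuterClass.InnerClass name used outside the outer class's scope, and the outerObject.new construction), here is a small sketch with invented names:

```java
public class OuterNewDemo
{
   private String name;

   public OuterNewDemo(String n) { name = n; }

   public class Tag // public inner class, visible as OuterNewDemo.Tag
   {
      public String label() { return name + "-tag"; }
   }

   public static void main(String[] args)
   {
      OuterNewDemo outer = new OuterNewDemo("demo");
      // construct an inner object whose outer class reference is "outer"
      OuterNewDemo.Tag t = outer.new Tag();
      System.out.println(t.label()); // prints demo-tag
   }
}
```

The Tag object picks up the name field of whichever OuterNewDemo instance constructed it.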
Are Inner Classes Useful? Are They Actually Necessary? Are They Secure?
Inner classes are a major addition to the language. Java started out with the goal of being simpler than C++. But inner classes are anything but simple. The syntax is complex. (It will get more complex as we study anonymous inner classes later in this chapter.) It is not obvious how inner classes interact with other features of the language, such as access control and security.
Has Java started down the road to ruin that has afflicted so many other languages, by adding a feature that was elegant and interesting rather than needed? We don't think so. Inner classes are translated into regular class files, with $ (dollar sign) characters delimiting outer and inner class names, and the virtual machine does not have any special knowledge about them. For example, the InterestAdder class inside the BankAccount class is translated to a class file BankAccount$InterestAdder.class. To see this at work, try out the following experiment: run the ReflectionTest program of Chapter 5, and give it the class BankAccount$InterestAdder to reflect upon. You will get the following printout:
class BankAccount$InterestAdder
{
   public BankAccount$InterestAdder(BankAccount, double);
   public void actionPerformed(java.awt.event.ActionEvent);
   private double rate;
   private final BankAccount this$0;
}
NOTE
If you use Unix, remember to escape the $ character if you supply the class name on the command line. That is, run the ReflectionTest program as
java ReflectionTest 'BankAccount$InterestAdder'
You can plainly see that the compiler has generated an additional instance field, this$0, for the reference to the outer class. (The name this$0 is synthesized by the compiler; you cannot refer to it in your code.) You can also see the added parameter for the constructor.
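You can reproduce this experiment without the ReflectionTest program. The following sketch (class names invented) prints the declared fields of an inner class; with the javac compiler, the synthesized outer reference shows up as this$0, though that exact name is compiler-generated rather than guaranteed by the language specification:

```java
import java.lang.reflect.Field;

public class SpyDemo
{
   private int balance;

   private class Helper
   {
      int get() { return balance; } // forces Helper to be an inner class
   }

   public static void main(String[] args)
   {
      // list the fields the compiler actually put into the inner class
      for (Field f : SpyDemo.Helper.class.getDeclaredFields())
         System.out.println(f.getName());
   }
}
```

With javac, this prints this$0: Helper declares no fields of its own, so the only field is the synthesized outer reference.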
If the compiler can do this transformation, couldn't you simply program the same mechanism by hand? Let's try it. We would make InterestAdder a regular class, outside the BankAccount class. When constructing an InterestAdder object, we pass it the this reference of the object that is creating it.
class BankAccount
{
   . . .
   public void start(double rate)
   {
      ActionListener adder = new InterestAdder(this, rate);
      Timer t = new Timer(1000, adder);
      t.start();
   }
}

class InterestAdder implements ActionListener
{
   public InterestAdder(BankAccount account, double aRate)
   {
      outer = account;
      rate = aRate;
   }
   . . .
   private BankAccount outer;
   private double rate;
}
Now let us look at the actionPerformed method. It needs to access outer.balance.
public void actionPerformed(ActionEvent event)
{
   double interest = outer.balance * rate / 100; // ERROR
   outer.balance += interest; // ERROR
   . . .
}
Here we run into a problem. The inner class can access the private data of the outer class, but our external InterestAdder class cannot.
Thus, inner classes are genuinely more powerful than regular classes, since they have more access privileges.
You may well wonder how inner classes manage to acquire those added access privileges, since inner classes are translated to regular classes with funny names; the virtual machine knows nothing at all about them. To solve this mystery, let's again use the ReflectionTest program to spy on the BankAccount class:
class BankAccount
{
   public BankAccount(double);
   static double access$000(BankAccount);
   public void start(double);
   static double access$018(BankAccount, double);
   private double balance;
}
Notice the static access$000 and access$018 methods that the compiler added to the outer class. The inner class methods call those methods. For example, the statement
balance += interest
in the actionPerformed method of the InterestAdder class effectively makes the following call:
access$018(outer, access$000(outer) + interest);
Is this a security risk? You bet it is. It is an easy matter for someone else to invoke the access$000 method to read the private balance field or, even worse, to call the access$018 method to set it. The Java language standard reserves $ characters in variable and method names for system usage. However, for those hackers who are familiar with the structure of class files, it is an easy (if tedious) matter to produce a class file with virtual machine instructions to call that method. Of course, such a class file would need to be generated manually (for example, with a hex editor). Because the secret access methods have package visibility, the attack code would need to be placed inside the same package as the class under attack.
To summarize, if an inner class accesses a private data field, then it is possible to access that data field through other classes that are added to the package of the outer class, but to do so requires skill and determination. A programmer cannot accidentally obtain access but must intentionally build or modify a class file for that purpose.
Local Inner Classes
If you look carefully at the code of the BankAccount example, you will find that you need the name of the type InterestAdder only once: when you create an object of that type in the start method.
When you have a situation like this, you can define the class locally in a single method.
public void start(double rate)
{
   class InterestAdder implements ActionListener
   {
      public InterestAdder(double aRate)
      {
         rate = aRate;
      }

      public void actionPerformed(ActionEvent event)
      {
         double interest = balance * rate / 100;
         balance += interest;
         NumberFormat formatter = NumberFormat.getCurrencyInstance();
         System.out.println("balance=" + formatter.format(balance));
      }

      private double rate;
   }

   ActionListener adder = new InterestAdder(rate);
   Timer t = new Timer(1000, adder);
   t.start();
}
Local classes are never declared with an access specifier (that is, public or private). Their scope is always restricted to the block in which they are declared.
Local classes have a great advantage: they are completely hidden from the outside world; not even other code in the BankAccount class can access them. No method except start has any knowledge of the InterestAdder class.
Local classes have another advantage over other inner classes. Not only can they access the fields of their outer classes, they can even access local variables! However, those local variables must be declared final. Here is a typical example.
public void start(final double rate)
{
   class InterestAdder implements ActionListener
   {
      public void actionPerformed(ActionEvent event)
      {
         double interest = balance * rate / 100;
         balance += interest;
         NumberFormat formatter = NumberFormat.getCurrencyInstance();
         System.out.println("balance=" + formatter.format(balance));
      }
   }

   ActionListener adder = new InterestAdder();
   Timer t = new Timer(1000, adder);
   t.start();
}
Note that the InterestAdder class no longer needs to store a rate instance variable. It simply refers to the parameter variable of the method that contains the class definition.
Maybe this should not be so surprising. The line
double interest = balance * rate / 100;
is, after all, ultimately inside the start method, so why shouldn't it have access to the value of the rate variable?
To see why there is a subtle issue here, let's consider the flow of control more closely.
The start method is called.
The object variable adder is initialized via a call to the constructor of the inner class InterestAdder.
The adder reference is passed to the Timer constructor, the timer is started, and the start method exits. At this point, the rate parameter variable of the start method no longer exists.
A second later, the actionPerformed method executes the line double interest = balance * rate / 100;.
For the code in the actionPerformed method to work, the InterestAdder class must have made a copy of the rate field before it went away as a local variable of the start method. That is indeed exactly what happens. In our example, the compiler synthesizes the name BankAccount$1$InterestAdder for the local inner class. If you use the ReflectionTest program again to spy on the BankAccount$1$InterestAdder class, you get the following output:
class BankAccount$1$InterestAdder
{
   BankAccount$1$InterestAdder(BankAccount, double);
   public void actionPerformed(java.awt.event.ActionEvent);
   private final double val$rate;
   private final BankAccount this$0;
}
Note the extra double parameter to the constructor and the val$rate instance variable. When an object is created, the value rate is passed into the constructor and stored in the val$rate field. This sounds like an extraordinary amount of trouble for the implementors of the compiler. The compiler must detect access of local variables, make matching data fields for each one of them, and copy the local variables into the constructor so that the data fields can be initialized as copies of them.
From the programmer's point of view, however, local variable access is quite pleasant. It makes your inner classes simpler by reducing the instance fields that you need to program explicitly.
As we already mentioned, the methods of a local class can refer only to local variables that are declared final. For that reason, the rate parameter was declared final in our example. A local variable that is declared final cannot be modified. Thus, it is guaranteed that the local variable and the copy that is made inside the local class always have the same value.
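The fact that the captured copy outlives the enclosing method call is easy to demonstrate. In this sketch (names invented, and using the modern IntSupplier interface for brevity rather than a timer), the local class object is returned from the method, yet it can still use the captured parameter afterwards:

```java
import java.util.function.IntSupplier;

public class LocalCaptureDemo
{
   static IntSupplier makeAdder(final int rate)
   {
      class Adder implements IntSupplier // a local class
      {
         // uses the captured copy of rate, even after makeAdder returns
         public int getAsInt() { return 100 + rate; }
      }
      return new Adder();
   }

   public static void main(String[] args)
   {
      IntSupplier s = makeAdder(5); // the rate parameter is now gone
      System.out.println(s.getAsInt()); // prints 105
   }
}
```

By the time getAsInt runs, the stack frame of makeAdder no longer exists; the value 5 survives only because the compiler copied it into a field of the Adder object.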
NOTE
You have seen final variables used for constants, such as
public static final double SPEED_LIMIT = 55;
NOTE
The final keyword can be applied to local variables, instance variables, and static variables. In all cases it means the same thing: You can assign to this variable once after it has been created. Afterwards, you cannot change the value; it is final.
However, you don't have to initialize a final variable when you define it. For example, the final parameter variable rate is initialized once after its creation, when the start method is called. (If the method is called multiple times, each call has its own newly created rate parameter.) The val$rate instance variable that you can see in the BankAccount$1$InterestAdder inner class is set once, in the inner class constructor. A final variable that isn't initialized when it is defined is often called a blank final variable.
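A blank final can be sketched in a few lines (a hypothetical example): the variable is assigned exactly once, on whichever branch executes, and the compiler rejects any second assignment:

```java
public class BlankFinalDemo
{
   public static void main(String[] args)
   {
      final int limit; // a blank final: declared but not yet initialized
      if (args.length > 0)
         limit = args.length; // the one permitted assignment on this branch
      else
         limit = 55;          // the one permitted assignment on this branch
      // limit = 60;  // would not compile: limit may already be assigned
      System.out.println(limit); // prints 55 when run with no arguments
   }
}
```

The compiler's definite-assignment analysis verifies that every path assigns limit exactly once before it is read.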
When using local inner classes, you can often go a step further. If you want to make only a single object of this class, you don't even need to give the class a name. Such a class is called an anonymous inner class.
public void start(final double rate)
{
   ActionListener adder = new ActionListener()
      {
         public void actionPerformed(ActionEvent event)
         {
            double interest = balance * rate / 100;
            balance += interest;
            NumberFormat formatter = NumberFormat.getCurrencyInstance();
            System.out.println("balance=" + formatter.format(balance));
         }
      };

   Timer t = new Timer(1000, adder);
   t.start();
}
This is a very cryptic syntax indeed. What it means is:
Create a new object of a class that implements the ActionListener interface, where the required method actionPerformed is the one defined inside the braces { }.
Any parameters used to construct the object are given inside the parentheses ( ) following the supertype name. In general, the syntax is
new SuperType(construction parameters)
{
   inner class methods and data
}
Here, SuperType can be an interface, such as ActionListener; in that case, the inner class implements that interface. Or SuperType can be a class; in that case, the inner class extends that class. An anonymous inner class cannot have constructors, because the name of a constructor must be the same as the name of a class, and the class has no name. Instead, the construction parameters are given to the superclass constructor. In particular, whenever an inner class implements an interface, it cannot have any construction parameters. Nevertheless, you must supply a set of parentheses as in:
new InterfaceType()
{
   methods and data
}
You have to look very carefully to see the difference between the construction of a new object of a class and the construction of an object of an anonymous inner class extending that class. If the closing parenthesis of the construction parameter list is followed by an opening brace, then an anonymous inner class is being defined.
Person queen = new Person("Mary");
   // a Person object
Person count = new Person("Dracula") { . . . };
   // an object of an inner class extending Person
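Running the two constructions side by side makes the difference concrete. In this sketch (a hypothetical Person class with a single method), the second object really belongs to a compiler-generated subclass:

```java
public class AnonymousDemo
{
   static class Person
   {
      private String name;
      Person(String n) { name = n; }
      String describe() { return "Person " + name; }
   }

   public static void main(String[] args)
   {
      Person queen = new Person("Mary"); // a plain Person object
      Person count = new Person("Dracula") // an anonymous subclass object
      {
         String describe() { return "Count " + name; }
      };
      System.out.println(queen.describe()); // prints Person Mary
      System.out.println(count.describe()); // prints Count Dracula
      // the anonymous object's class is not Person itself
      System.out.println(count.getClass() == Person.class); // prints false
   }
}
```

The overridden describe method runs for count, and getClass reveals the synthesized subclass rather than Person.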
Are anonymous inner classes a great idea or are they a great way of writing obfuscated code? Probably a bit of both. When the code for an inner class is short, just a few lines of simple code, then they can save typing time, but it is exactly timesaving features like this that lead you down the slippery slope to "Obfuscated Java Code Contests."
It is a shame that the designers of Java did not try to improve the syntax of anonymous inner classes, since, generally, Java syntax is a great improvement over C++. The designers of the inner class feature could have helped the human reader with a syntax such as:
Person count = new class extends Person("Dracula") { ... }; // not the actual Java syntax
But they didn't. Because many programmers find code with too many anonymous inner classes hard to read, we recommend restraint when using them.
Example 6-5 contains the complete source code for the bank account program with an anonymous inner class. If you compare this program with Example 6-4, you will find that in this case the solution with the anonymous inner class is quite a bit shorter and, hopefully, with a bit of practice, as easy to comprehend.
Example 6-5: AnonymousInnerClassTest.java
import java.awt.event.*;
import java.text.*;
import javax.swing.*;

public class AnonymousInnerClassTest
{
   public static void main(String[] args)
   {
      BankAccount account = new BankAccount(10000);
      account.start(10);

      // keep program running until user selects "Ok"
      JOptionPane.showMessageDialog(null, "Quit program?");
      System.exit(0);
   }
}

class BankAccount
{
   public BankAccount(double initialBalance)
   {
      balance = initialBalance;
   }

   /**
      Starts a timer that adds interest to the account.
      @param rate the interest rate in percent
   */
   public void start(final double rate)
   {
      ActionListener adder = new ActionListener()
         {
            public void actionPerformed(ActionEvent event)
            {
               // update interest
               double interest = balance * rate / 100;
               balance += interest;

               // print out current balance
               NumberFormat formatter = NumberFormat.getCurrencyInstance();
               System.out.println("balance=" + formatter.format(balance));
            }
         };

      Timer t = new Timer(1000, adder);
      t.start();
   }

   private double balance;
}
Static Inner Classes
Occasionally, you may want to use an inner class simply to hide one class inside another, without needing the inner class to have a reference to the outer class object. Consider, for example, the task of computing the minimum and maximum value in an array. Of course, you could write one function to compute the minimum and another function to compute the maximum. When you call both functions, the array is traversed twice. It would be more efficient to traverse the array only once, computing both the minimum and the maximum simultaneously.
double min = d[0];
double max = d[0];
for (int i = 1; i < d.length; i++)
{
   if (min > d[i]) min = d[i];
   if (max < d[i]) max = d[i];
}
However, the function must return two numbers. We can achieve that by defining a class Pair that holds two values:
class Pair
{
   public Pair(double f, double s)
   {
      first = f;
      second = s;
   }

   public double getFirst() { return first; }
   public double getSecond() { return second; }

   private double first;
   private double second;
}
The minmax function can then return an object of type Pair.
class ArrayAlg
{
   public static Pair minmax(double[] d)
   {
      . . .
      return new Pair(min, max);
   }
}
The caller of the function then uses the getFirst and getSecond methods to retrieve the answers. The name Pair is exceedingly common, and another programmer may well have had the same idea. We can solve the potential name clash by making Pair a public inner class inside ArrayAlg; then the class is known to the public as ArrayAlg.Pair. However, unlike the inner classes that we used in previous examples, we do not want a Pair object to have a reference to any other object. That reference can be suppressed by declaring the inner class static.
NOTE
You use a static inner class whenever the inner class does not need to access an outer class object. Some programmers use the term nested class to describe static inner classes.
Example 6-6 contains the complete source code of the ArrayAlg class and the nested Pair class.
Example 6-6: StaticInnerClassTest.java
public class StaticInnerClassTest
{
   public static void main(String[] args)
   {
      double[] d = new double[20];
      for (int i = 0; i < d.length; i++)
         d[i] = 100 * Math.random();
      ArrayAlg.Pair p = ArrayAlg.minmax(d);
      System.out.println("min = " + p.getFirst());
      System.out.println("max = " + p.getSecond());
   }
}

class ArrayAlg
{
   /**
      A pair of floating point numbers
   */
   public static class Pair
   {
      /**
         Constructs a pair from two floating point numbers
         @param f the first number
         @param s the second number
      */
      public Pair(double f, double s)
      {
         first = f;
         second = s;
      }

      /**
         Returns the first number of the pair
         @return the first number
      */
      public double getFirst()
      {
         return first;
      }

      /**
         Returns the second number of the pair
         @return the second number
      */
      public double getSecond()
      {
         return second;
      }

      private double first;
      private double second;
   }

   /**
      Computes both the minimum and the maximum of an array
      @param d an array of floating point numbers
      @return a pair whose first element is the minimum and whose
      second element is the maximum
   */
   public static Pair minmax(double[] d)
   {
      if (d.length == 0) return new Pair(0, 0);
      double min = d[0];
      double max = d[0];
      for (int i = 1; i < d.length; i++)
      {
         if (min > d[i]) min = d[i];
         if (max < d[i]) max = d[i];
      }
      return new Pair(min, max);
   }
}
I’ll make a future post that goes into the technical details of what’s different between the two formats.
-Brian
Hi Brian
Can I have a default of ".xlsx if there is no code and .xlsm if there is some"?
Will there be a way of signing the macros and locking the macros so they can’t be tampered with?
Stephen – if you try to save a file in the .xlsx format and there are macros you will be prompted if you want to use .xlsm instead, which is pretty much the behavior you are asking for.
W Poust – The locking and signing of macros will be done in the same way that it is today in the binary formats.
-Brian
.xls, .xlsx, .xlsm
Aren’t you worried about customer confusion by having 3 main formats?
Hi Brian,
How will the Office 12 XML formats impact the metadata in Office documents? Now, as an IT forensic auditor, we have two ways to proceed: 1) we use dedicated software to visualize metadata items in e.g. Word or Excel files.
2) we use a hex editor and the open source definitions of the Word or Excel file layout and start looking for the right record type and the right offset within that record.
Will there be special tags for metadata?
LK
Ricky, we are definitely concerned about end user confusion from the move to new file formats. We are going to great lengths to make it so that the average end user doesn’t ever need to think about what format they are in. There will be times when they are faced with making a decision about formats, and we plan to make that as clear and simple a decision as possible. We are going to look closely, through usability studies and the Beta programs, at the impact of this change on the end user and see how we can make the impact as small as possible.
Luc, the metadata will be easily identifiable and accessible. It will be in its own XML part within the ZIP package that you can quickly find by accessing the root relationship of the package. The schema for the metadata is pretty simple so the XML files shouldn’t be too difficult to work with. I think these new formats create a whole new world of document interrogation and make possible some very rich solutions that leverage the contents of the documents.
-Brian
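Brian's description suggests a simple way to get at that metadata programmatically. The sketch below is my own illustration, not from the post: it assumes the conventional docProps/core.xml part name used in Office Open XML packages, and the helper name is made up.

```python
import zipfile

def read_core_properties(path):
    """Return the raw XML of the core-properties (metadata) part.

    Office Open XML documents are ZIP packages; the document metadata
    conventionally lives in the docProps/core.xml part.
    """
    with zipfile.ZipFile(path) as pkg:
        with pkg.open("docProps/core.xml") as part:
            return part.read().decode("utf-8")
```

From there, the XML can be handed to any parser to pull out individual properties such as the author or last-modified date.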
Most users don’t know what a macro is, let alone whether their document contains macros, or whether it should contain macros. While it’s true that many of these users will never see the file extensions, way too many will. It’s counter-productive to ask users questions they don’t need to answer.
The processing of the file should be based on its contents, not its name. The operating system can’t handle macro-enabled spreadsheets differently from data-only spreadsheets, since virus writers will just save the file as .xlsm, then rename it to .xlsx.
If the only difference between two XML formats is that one can contain macros and the other doesn’t, then wouldn’t it be better just to use a separate namespace for the macro elements? That way you could guarantee there are no macros by using a validating parser, or strip them out by ignoring elements in the macro namespace.
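The point above, that processing should depend on a file's contents rather than its name, can be sketched in a few lines. This is a hypothetical helper, not from the discussion; it relies on the fact that Office Open XML packages store VBA code in a vbaProject.bin part (for example xl/vbaProject.bin in a workbook), so renaming .xlsm to .xlsx changes nothing about what the package contains.

```python
import zipfile

def contains_vba(path):
    """Report whether an Office Open XML package carries VBA macros.

    Macro code lives in a vbaProject.bin part, so its presence --
    not the file extension -- is what reveals macros.
    """
    with zipfile.ZipFile(path) as pkg:
        return any(name.endswith("vbaProject.bin") for name in pkg.namelist())
```

A mail gateway or desktop shell using a check like this would treat a renamed macro-enabled file exactly like an .xlsm.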
I make extensive use of VBA by writing my own user-defined functions for derivative pricing. These UDFs seem to be little known, and in the current debate on file formats they get tarred with the "macros are bad, poor security" brush, though I'm not sure how true this is for module sheets containing only code for functions.
These are very popular because nearly all students and people that work in investment banks use Excel – but most don’t necessarily want to move up to Visual Studio as a front end – though some might wish to use the automation add-in facility in Excel to call functions written in C# for instance
VSTO is irrelevant for me. I'm much happier writing proper program code that can be called from Excel, and I'm not interested in using a more complicated VSTO language and front end to replicate what I can already do in a spreadsheet.
One thing that I am trying to do though is write my VBA code in such a fashion that it can easily be translated into C# (via the Instant CSharp program)
The macro disabled file formats is bad news for end users in the corporate world. Have no doubt, the high priests of technology in the corporate world will use the tool that Microsoft will be providing to reassert their control and prevent work from getting done.
No doubt Microsoft thinks it can’t be held responsible for corporate stupidity. But Microsoft can be held responsible for who it trusts. As I look at a corporate notebook with wireless built in but with an XP configuration so that wireless can never be used, I think Microsoft decided to trust the wrong folks.
This macroless file format is also very bad news for Microsoft because it seals the replacement of Microsoft Office with open source packages. Corporations will use Microsoft’s file format to eliminate macros. Once macros are gone, the barriers to replacing Office with something else are greatly reduced. With the barrier down, corporations will look at the open source products. Since Microsoft has not been able to compete on price, it will lose that battle.
Given its recent actions, Microsoft cannot count on end users to fervently oppose replacing Office with an open source package. (1) Most users use very few features so they don’t really care. (2) The users who do care push the packages. In ancient times, Microsoft supported those users by adding features that empowered them to get their jobs done without going through the high priests of technology. In the 21st century, Microsoft has not supported those users. Instead Microsoft has focused on adding features that empower the high priests of technology to prevent all end users (including the power users) from doing work. In addition to not supporting power users, Microsoft’s list price for VSTO is $500! Microsoft’s wrong priorities and what feels like price gouging will severely dampen the loyalty and enthusiasm of power users, who are Microsoft’s best (and perhaps only) allies against swapping out Office.
Hi Brian
The other day I tried to open up a simple .xml document in Word 12 and tried to create a macro that would disable some menu items. The VBA environment that comes with it doesn't give handles to the much-talked-about Ribbon! How would I run my previous macros (developed for earlier versions) with Office 12?
regards
kris | https://blogs.msdn.microsoft.com/brian_jones/2005/07/12/office-12-xml-formats-will-support-vba-just-not-the-default-formats/ | CC-MAIN-2016-30 | refinedweb | 1,174 | 66.57 |
If your cluster workload changes, you might want to use a different instance type for your worker machines, or a machine image that includes a more recent patch version of the operating system.
By default, Konvoy groups your worker machines in the worker node pool. If you change properties of these machines and apply the change, the machines may be destroyed and re-created, disrupting their running workloads.

This tutorial describes how to update the properties of worker machines without disrupting your cluster workload. You create a new node pool with up-to-date properties, move your workload from the worker node pool to the new node pool, and then scale down the worker node pool.
Follow these steps:
Use this command to list all node pools, and identify the node pool with worker machines:
konvoy get nodepools
Create a new node pool, called worker2, copying the properties of the worker node pool:
konvoy create nodepool worker2 --from worker
Edit cluster.yaml to change the machine image and other properties of the worker2 node pool as needed. If necessary, update the count.
This is an excerpt of an edited cluster.yaml. Note that, compared to the worker node pool, the worker2 node pool has twice as many nodes, uses a different instance type and a different machine image, and allocates twice as much space for image and container storage.
kind: ClusterProvisioner
apiVersion: konvoy.mesosphere.io/v1beta1
spec:
  nodePools:
  - name: worker
    ...
      imageID: ami-01ed306a12b7d1c96
  - name: worker2
    count: 8
    machine:
      rootVolumeSize: 80
      rootVolumeType: gp2
      imagefsVolumeEnabled: true
      imagefsVolumeSize: 320
      imagefsVolumeType: gp2
      imagefsVolumeDevice: xvdb
      type: p2.xlarge
      imageID: ami-079f731edfe27c29c
Apply the change to your infrastructure:
konvoy up
Move your workload from the machines in the worker pool to the machines in the worker2 pool. For more information on draining, see Safely Drain a Node.
konvoy drain nodepool worker
Verify your workload has been rescheduled and is healthy. To list all Pods that are not Running, use this command:
kubectl get pods --all-namespaces=true --field-selector=status.phase!=Running
Scale down the worker node pool to zero:
konvoy scale nodepool worker --count=0
konvoy up
EDIT: My aim was to run multiple Go HTTP servers at the same time. I was facing some issues accessing the Go HTTP servers running on multiple ports behind an Nginx reverse proxy.
Finally, this is the code that I used to run multiple servers.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mainServer := http.NewServeMux()

	// Create a mux for each sub-domain/virtual server.
	server1 := http.NewServeMux()
	server1.HandleFunc("/", server1func)

	server2 := http.NewServeMux()
	server2.HandleFunc("/", server2func)

	// Run the first server in its own goroutine.
	go func() {
		log.Println("Server started on localhost:9001")
		log.Fatal(http.ListenAndServe("localhost:9001", server1))
	}()

	// Run the second server in its own goroutine.
	go func() {
		log.Println("Server started on localhost:9002")
		log.Fatal(http.ListenAndServe("localhost:9002", server2))
	}()

	// Run the main server in the foreground.
	log.Println("Server started on localhost:9000")
	log.Fatal(http.ListenAndServe("localhost:9000", mainServer))
}

func server1func(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Running First Server")
}

func server2func(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Running Second Server")
}
The classic ping does not work for testing TCP ports, only hosts. I've seen many frameworks provide a "ping" option to test whether the server is alive; maybe this is the source of the confusion.
I like to use netcat:
$ nc localhost 8090 -vvv
nc: connectx to localhost port 8090 (tcp) failed: Connection refused
$ nc localhost 8888 -vvv
found 0 associations
found 1 connections:
     1: flags=82<CONNECTED,PREFERRED>
        outif lo0
        src ::1 port 64550
        dst ::1 port 8888
        rank info not available
        TCP aux info available
Connection to localhost port 8888 [tcp/ddi-tcp-1] succeeded!
You may have to install it with
sudo yum install netcat or
sudo apt-get install netcat (respectively for RPM and DEB based distros). | https://codedump.io/share/1DvzWWWq6n3L/1/how-to-run-multiple-go-lang-http-servers-at-the-same-time-and-test-them-using-command-line | CC-MAIN-2018-22 | refinedweb | 274 | 59.6 |
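If netcat is not available, the same TCP-level check is easy to script yourself. A minimal sketch; the function name and the two-second timeout are my own choices:

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Like netcat, this actually completes a TCP handshake, so it tests the listening service rather than just the host.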
How Can You Defend Your Assets Against Distributed Reflective Attacks via DNS?
There are so many ways for cyber bad guys to get at your system, that there's no way to keep track of them all. Learn some tips for defending against attacks via DNS.
DNS servers are necessary for finding resources on the internet. They are also a source of vulnerabilities and are often poorly defended. The DNS protocol listens on port 53, and this port is, therefore, open in most firewalls. This combination of an open listening service and little security focus makes the protocol interesting to hackers; especially if they want to perform denial-of-service attacks because they can use some of the features of DNS to amplify their attack vectors.
How DNS Works
DNS servers are used on the internet to translate between human-friendly domain names and IP addresses. DNS stands for Domain Name System and is a database of IP addresses. For an overview of how DNS works, see this Microsoft Technet article.
DNS usually receives a recursive name query from a web browser. One specific DNS server can only hold a limited amount of information. When a query is recursive it will query other DNS servers on the internet for the correct address lookup before returning the IP address to the client. The way this works is that when the web browser queries the DNS and the DNS doesn’t have the right information it gives a referral back as the result; which happens to be the address of a DNS server further down the namespace tree.
In the above illustration of a recursive DNS lookup, the resolver queries the DNS server with a request. The DNS server cannot find the requested information in its cache or zone, so it queries the root server. It is then referred to the domain DNS server, which in turn points to the Example domain DNS server.
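From the client's side, all of that recursion is hidden behind a single stub-resolver call. As a small illustration, the standard library can ask the system resolver for a name's addresses (the helper name here is mine):

```python
import socket

def resolve(hostname):
    """Return the set of IP addresses the system resolver finds for a name."""
    infos = socket.getaddrinfo(hostname, None)
    return {info[4][0] for info in infos}
```

The operating system's resolver, and the DNS servers behind it, do the recursive walk described above.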
Attacking Through Recursive Open DNS
Using recursive DNS servers to flood a target with traffic is effective for attackers because the request packet sent to the DNS server is very small compared with the response (hence the term "amplified" attack). All the attacker has to do is spoof the sender address of the DNS request packet and submit the spoofed packet to open recursive DNS servers from a large number of machines under his or her control, and voila: a DoS condition occurs for the target, because the DNS servers will direct all those responses to the spoofed IP. So what you need to perform this attack is:
- A list of open recursive DNS servers.
- A spoofed UDP request packet to the DNS servers.
- A botnet under your control.
The first point of the list is easy enough – go to and search for ‘public dns’ and it will give you a list as an instant answer.
Spoofing the IP can be done using any library that can write IP headers (or you can craft the header manually). Here’s an example using scapy, a Python module for low-level network operations:
from scapy.all import *

# DNS queries travel over UDP, so the spoofed packet uses a UDP layer.
spoofed_packet = IP(src='spoofed_ip', dst='the dns you are trying to reach') \
    / UDP(sport=sourceport, dport=53) / payload
So, then you only need a botnet. You can go ahead and create one by spreading malware to thousands of victims, or you can hire a botnet on the dark web – both are equally illegal and immoral, but the bad guys do this.
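To see why the reflection amplifies, it helps to look at how small a DNS query actually is. The sketch below builds a minimal A-record query by hand with only the standard library; the transaction ID and field choices are arbitrary. A query for example.com comes to 29 bytes on the wire, while a response carrying many records can run to hundreds or thousands of bytes:

```python
import struct

def build_dns_query(domain, qtype=1, txid=0x1234):
    """Build a minimal DNS query packet (qtype 1 = A record, class IN)."""
    # Header: id, flags (RD bit set), QDCOUNT=1, AN/NS/ARCOUNT=0.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in domain.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

query = build_dns_query("example.com")
print(len(query))  # 29
```

Dividing the response size by this request size gives the amplification factor the attacker enjoys.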
Defending Against This Mess
Using an old-fashioned iptables firewall won't do the job, because you cannot simply drop all traffic on port 53 (that is your DNS traffic). The DNS server can be configured to mitigate some of these attacks, but the open public servers are outside of your control. Some of them have rate control, limiting the frequency with which they can be queried for the same target, as well as per source IP.
So what can you do locally to protect against this type of attack?
- Ensure you have sufficient capacity to take peak traffic loads. It is probably infeasible to build capacity for very large DDoS attacks (~ 300 Gbps) but many attacks are much smaller than this and can be absorbed by high bandwidth capacity.
- Filter your traffic – especially unexpected traffic types. Filter out all DNS traffic for all equipment not dependent on sending DNS requests. Filter out IP’s from identified botnets and use a robust threat intelligence solution to obtain information on botnets.
- Use anomaly detection and use dynamic throttling of traffic from name servers. If there are sudden spikes in traffic from unusual resolvers, it may be a sign of a reflective amplification attack.
- For key resources, build in redundancy to redirect traffic when necessary and allowing the service to remain operational. Potentially contract with a very-high-bandwidth provider to act as a buffer against large DDoS floods.
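As a toy illustration of the throttling idea in the last two points, here is a per-source token-bucket limiter. This is entirely my own sketch, not something from the article; a production DNS server would use its built-in rate-limiting features instead:

```python
import time
from collections import defaultdict

class SourceRateLimiter:
    """Toy token-bucket limiter keyed by source IP."""

    def __init__(self, rate, burst):
        self.rate = rate    # tokens added per second
        self.burst = burst  # bucket capacity
        self._buckets = defaultdict(lambda: (burst, time.monotonic()))

    def allow(self, src_ip):
        tokens, last = self._buckets[src_ip]
        now = time.monotonic()
        # Refill the bucket in proportion to elapsed time, capped at burst.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self._buckets[src_ip] = (tokens - 1, now)
            return True
        self._buckets[src_ip] = (tokens, now)
        return False
```

A server front end would call allow() per query and drop or truncate responses for sources that exceed their budget, which blunts a reflective flood without cutting off normal resolvers.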
Published at DZone with permission of Hakon Olsen , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
Hey guys, I am still really new to C++, as I am currently taking my first programming class. I am working on a payment calculator, using Visual Studio 2010. Before I get into this, I want to say this is NOT homework, just practice for me. This is what I have so far.
#include <iostream>
#include <math.h>
using namespace std;

int main()
{
    float rate, priceOfHouse, years, finalPayment;

    const double MONTHLY_INTEREST_RATE = (rate / 100 / 12);
    const float INTIAL_MONTHLY_PAYMENT = (rate / 100 / 12) * priceOfHouse;
    const double LENGTH_OF_LOAN = -years / 12;

    cout << "Enter Interest Rate: ";
    cin >> rate;
    cout << "Enter Price of House: ";
    cin >> priceOfHouse;
    cout << "Length of Loan in terms of Years: ";
    cin >> years;

    finalPayment = INTIAL_MONTHLY_PAYMENT / (1 - ((1 + (MONTHLY_INTEREST_RATE)) exp(LENGTH_OF_LOAN)))
************************************************** ******
The problem that I am stuck on right now is this last line, finalPayment =, the exponent (exp) part. It is telling me: Error: expected a ")"
With the variables plugged in, the formula is supposed to look like so (input numbers are just examples):
((6.5/100/12)*200000 / (1-((1+(6.5/100/12))^(-30/12)))
Not looking for anyone to finish this for me, just a little help with this last line; maybe an example of what something like this (a formula) would look like in C++.
Thanks,
NismoT | https://cboard.cprogramming.com/cplusplus-programming/141344-using-exponent-cplusplus.html | CC-MAIN-2017-39 | refinedweb | 208 | 56.08 |
How to Upgrade a Sencha Touch App to Ext JS 6 Modern Toolkit – Part 2
In part 1 of this blog post series, I discussed the changes in Ext JS 6 Modern Toolkit and showed you how to do a basic mobile upgrade of your Sencha Touch app. In this article, I’ll show you how to do an advanced mobile upgrade.
Advanced Mobile Upgrade
For the advanced mobile upgrade, you will use the MVVM pattern. It will take more time and steps to upgrade this way, but you will have a lot of advantages with the latest framework and all of the new features and classes. Also, you can improve your application performance and code base with the new MVVM architecture.
I’ve cloned this dinmu1 folder to a new folder called dinmu2, so you can see the differences.
Start with Migrating the Views
- In the app/view folder, create the following folder structure (every view gets its own subfolder):
- app/view/main
- app/view/settings
- Move Main.js inside the main subfolder, and SettingsView.js into settings subfolder. (I also renamed SettingsView.js to Settings.js)
- Edit the namespaces for these two views to:
- Dinmu.view.main.Main
- Dinmu.view.settings.Settings
- At this point, you broke the app because the viewport can’t find Main.js and the Main.js view can’t find the Settings view, so you have to fix this:
- In the app.js, you can remove the line:
Ext.Viewport.add(Ext.create('Dinmu.view.Main'));
- Above the
launch() method, you create a new viewport, the new way Ext JS 6 provides, by setting the mainView property to:
mainView: 'Dinmu.view.main.Main',
- Remove the
views: ['main'] from app.js
- Add
'Dinmu.view.main.Main' to the
requires array
- In Main.js, change the
requires for the Settings view to
'Dinmu.view.settings.Settings'
- To confirm that nothing breaks after this fix, you can run a sencha app refresh, and you shouldn’t see any errors.
Migrate the Controllers to View Controllers
- Create the following new file:
app/view/main/MainController.js
- Create the following class definition:
Ext.define('Dinmu.view.main.MainController', {
    extend: 'Ext.app.ViewController',
    alias: 'controller.main',

    //all the VC methods
    init: function() {
        console.log("new VC is initialized");
    }
});
- Wire up the view controller to the main view:
In Main.js, add the following line:
controller: 'main',
- Also add the MainController to the
requires array:
Dinmu.view.main.MainController
- Run another sencha app refresh, and test the app in the browser. You should see a log message that states the wiring of the VC was successful. Note, you don’t need to wire this controller up to the Settings view. Because Settings view is nested inside the main view, it can always access the main controller.
- You can remove the
controllers array from app.js, because you won’t use it anymore.
- Remove the
init method from the view controller and copy over all the methods from
app/controller/Main.js into the new view controller.
- Now comes the tricky part. You won’t use the
refs and
control blocks, so you need to fix these. Instead of the control block, you will create listeners in the view.
There are 5 controls that need to be replaced:
- onCarouselChange – activeitemchange in the main view
- btnBack – tap back button in title bar
- onSettingsBtnTap – tap settings button in settings view
- onToggle – toggle on togglebutton
- onRefresh – executed on tap of the refresh button
In the Main.js view class, you will create the activeitem change listener:
listeners: { 'activeitemchange': 'onCarouselChange' },
On the back button in Main.js, you will create a tap listener:
listeners: { 'tap': 'onBackBtnTap' },
On the settings button in Main.js, you will create a tap listener:
listeners: { 'tap': 'onSettingsBtnTap' },
On the toggle button in Settings.js, you will create a toggle listener:
listeners: { 'change': 'onToggle' },
On the refresh button in Settings.js, you will create a tap listener:
listeners: { 'tap': 'onRefresh' },
- When you run the application in your browser, you will notice various event errors. The references with component queries are broken. You will fix these now.
- Change the
onLaunch method to
init. Note that this will break the application, because
Dinmu.utils.Functions.loadData() uses the Settings store, which is not wired up to a controller anymore. For now, comment out the line with
Dinmu.utils.Functions.loadData().
- Run another sencha app refresh and test the app in the browser. Everything except the refresh button should work. The refresh button requires the store, which is not linked yet.
All the references to
this.getMainView() can be replaced with
this.getView(). Because the view controller now knows about the view, you can fix this one easily. I replaced it in 3 places.
The other view references will get a reference on the component, which you can look up later. In Settings.js, add the following property:
reference: 'settings'.
In the MainController, replace
this.getSettingsView() with
this.lookupReference('settings').
You can fix the
onToggle method like this:
var s = this.lookupReference('settings');
if (!newVal) {
    s.down('field[name="city"]').enable();
    s.down('field[name="country"]').enable();
    s.down('field[name="units"]').enable();
} else {
    s.down('field[name="city"]').disable();
    s.down('field[name="country"]').disable();
    s.down('field[name="units"]').disable();
    s.down('field[name="city"]').reset();
    s.down('field[name="country"]').reset();
}
In the Main.js view, put a reference in the titlebar configuration:
reference: 'titlebar',
Then replace the
onCarouselChange method with:
onCarouselChange: function(carousel, newVal, oldVal) {
    var t = this.lookupReference('titlebar');
    if (newVal.getItemId() == "mainview") {
        t.down('button[action=back]').hide();
        t.down('button[action=settings]').show();
        t.setTitle('Do I need my Umbrella?');
    } else {
        t.down('button[action=back]').show();
        t.down('button[action=settings]').hide();
        t.setTitle('Settings');
    }
},
Link the Store to a View Model
- Create the following new file:
app/view/main/MainModel.js
- Create the following class definition:
Ext.define('Dinmu.view.main.MainModel', {
    extend: 'Ext.app.ViewModel',
    alias: 'viewmodel.main',

    requires: [
    ],

    stores: {
    }
});
- Wire up the view model to the the main view:
In Main.js, add the following line:
viewModel: 'main',
Don’t forget to put the
Dinmu.view.main.MainModel into the
requires array.
- Now, link the Settings store; first add
Dinmu.store.Settings to the
requires array.
- In the Settings store, set an
alias: 'store.settings' in the store class definition.
- In Ext JS 6, Stores don’t automatically set the storeId to the name of the class, therefore set the
storeId to
Settings, so the store manager can find the store via
Ext.getStore('Settings')
After that, add the following store to the stores object (the type points to the settings alias):

'settings': { type: 'settings' },

Enable the Dinmu.utils.Functions.loadData() line, which you commented out before in the MainController. Then run another sencha app refresh and test your app.
At this point, you should have a working app that uses the MVVM pattern.
Other App Improvements
- This application doesn’t use data feeds in the store. However, another big advantage with Ext JS 6 is that you don’t need to code all the model fields in your Model definition. It gets the data directly from the feed. That saves you from typing all the data in the feed, and makes your model definitions a lot smaller.
- Another thing that’s different in Ext JS 6 is the config blocks. In Sencha Touch, you defined everything in the
config block; in Ext JS 6, you only put properties in a config block that need the auto-generation of getters, setters, apply, and update methods. For the Dinmu application this meant that I had to remove most of the config blocks. For most classes, a Sencha Touch-style config block works fine, but you could run into weird problems at some point if you leave them in.
- Promises and Deferreds support. I was always a bit amazed that the way I coded the saving of the settings form just worked. There’s a lot of magic going on in the
sync() method, and in the way it orders the newly created, removed, and edited records. It would have been a lot better if I could have coded it this way:
- Enter the form.
- Check if localstorage contained old settings.
- Remove old records, if any.
- Sync store, and after the sync is complete, add new records.
- Sync store, and after adding, load what’s in the store.
With Ext JS 6, you can do this because it supports promises and deferreds, which allow you to chain methods via the
then() method. Look at how I coded the
removeAllSettings and
addSettings methods. In the
onRefresh method, I chained it. You can compare it with the dinmu1 or touchdinmu files to see how this code differs.
Upgrade the Theme
- You can switch themes by changing the theme property in the app.json file. Out of the box, you can choose between the following themes:
- theme-cupertino (ios theme)
- theme-mountainview (android theme)
- theme-blackberry (blackberry theme)
- theme-windows (windows theme)
- theme-neptune
- theme-triton (default)
Triton Theme
After switching the theme, you will need to run sencha app build.
- The platform switcher in Ext JS is renewed. Instead, you will now use the profiles build block in app.json. To set this up, write in app.json:
- Themes for Ext JS 6 Modern toolkit use the same packages structure as Ext JS did. This is great, because it means that you can extend from your own theme packages, and you can generate custom themes with Sencha Cmd:
"builds": {
    "ios": {
        "toolkit": "modern",
        "theme": "theme-cupertino"
    },
    "android": {
        "toolkit": "modern",
        "theme": "theme-mountainview"
    },
    "windows": {
        "toolkit": "modern",
        "theme": "theme-windows"
    },
    "bb": {
        "toolkit": "modern",
        "theme": "theme-blackberry"
    },
    "default": {
        "toolkit": "modern",
        "theme": "theme-triton"
    }
},
To enable the multiple themes on your development machine, add these lines to the app.json bootstrap block:
"bootstrap": {
    "base": "${app.dir}",
    "microloader": "bootstrap.js",
    "css": "bootstrap.css",
    "manifest": "${build.id}.json" //this is the magic, which generates a manifest file, to load on local.
},
To enable the multiple themes on your production build, add these lines to the app.json
output block:
"output": {
    "base": "${workspace.build.dir}/${build.environment}/${app.name}",
    "appCache": {
        "enable": false
    },
    "manifest": "${build.id}.json",
    "js": "${build.id}/app.js",
    "resources": {
        "path": "${build.id}/resources",
        "shared": "resources"
    }
},
In index.html you write:
Ext.beforeLoad = function (tags) {
    var s = location.search, // the query string (ex "?foo=1&bar")
        profile;
    if (s.match(/\bios\b/) || tags.ios !== 0) {
        profile = 'ios';
    } else if (s.match(/\bandroid\b/) || tags.android !== 0) {
        profile = 'android';
    } else if (s.match(/\bwindows\b/) || tags.windows !== 0) {
        profile = 'windows';
    } else if (s.match(/\bbb\b/) || tags.bb !== 0) {
        profile = 'bb';
    } else {
        profile = 'default';
    }
    Ext.manifest = profile; // this name must match a build profile name
};
You will need to run sencha app refresh and sencha app build, which builds all profiles, to get it up and running.
To generate a custom theme package with Sencha Cmd:

sencha generate theme theme-MyTheme
Even if you don’t plan to create custom theme packages, theming is more advanced. To upgrade an existing theme, you have to put all the variables in the sass/var/ folder.
Take a look at my sass/var/all.scss which I used for the weather app application. The custom Sass / CSS classes will be stored in the sass/src/ folder. For an application (without custom theme packages), you have to map the folder structure of your JS applications. In other words, app/view/main/Main.js has a Sass file in this location: sass/src/view/main/Main.scss.
I could take most of my styling directly from my Sencha Touch application. However, there is no "default" Sencha Touch theme anymore; instead there are the Neptune and Triton themes, which both have different Sass variables and require different DOM.
This means that when you used custom styling for templates (tpls) etc., it won't break in your upgraded app, but when you used custom Sass to override the Sencha Touch theme, you might see differences. The best practice is to manually go through all the views in your browser and check whether the styling is correct.
In the next article in this series, I will show you how to do an advanced universal upgrade. | https://www.leeboonstra.com/developer/tag/mvvm/ | CC-MAIN-2017-34 | refinedweb | 2,043 | 59.3 |
WWW::Scripter - For scripting web sites that have scripts
0.032
There are two basic modes in which you can use WWW::Scripter:
If you only need a single virtual window (which is usually the case), use WWW::Scripter itself, as described below and in WWW::Mechanize.
For multiple windows, start with a window group (see WWW::Scripter::WindowGroup) and fetch the WWW::Scripter object via its
active_window method before proceeding.
At any time you can attach an existing window (WWW::Scripter object) to a window group using the latter's
attach method. You can also
->close a window to detach it from its window group and put it back in single-window mode.
These two modes affect the behaviour of a few methods (
open,
blur,
focus) and hyperlinks and forms with explicit targets.
See WWW::Mechanize for a vast list of methods that this module inherits. (See also the "Notes About WWW::Mechanize Methods", below.)
In addition to those, this module implements the well-known Window interface, providing also a few routines for attaching scripting engines and what-not.
In the descriptions below,
$w refers to the WWW::Scripter object. You can think of it as short for either 'WWW::Scripter' or 'window'.
my $w = new WWW::Scripter %args
The constructor accepts named arguments. There are only two that WWW::Scripter itself deals with directly. The rest are passed on to the superclass. See WWW::Mechanize and LWP::UserAgent for details on what other arguments the constructor accepts.
The two arguments are: have elapsed, returning a number uniquely identifying the time-out. If the first argument is a coderef or an object with
&{} overloading, it will be called as such. Otherwise, it is parsed as a string of JavaScript code. (If the JavaScript plugin is not loaded, it will be ignored.).
Although the W3C DOM specifies that this return
$w (the window itself), for efficiency's sake this returns a separate object which one can use as a hash or array reference to access its sub-frames. (The window object itself cannot be used that way.) The frames object (class WWW::Scripter::Frames) also has a
window as follows:
max_wait
    Number indicating for how many seconds the loop should run before giving up and returning.

min_timers
    Only run until this many timers are left, not until they have all finished.

interval
    Number of seconds to wait before each iteration of the loop. The default is .1.
Some websites have timers running constantly, that are never cleared. For these, you will usually need to set a value for
min_timers (or
max_wait) to avoid an infinite loop.
This returns the window group that owns this window. See "SINGLE VS MULTIPLE WINDOWS", above.
You can also pass an argument to set it, but you should only do so if you know what you are doing, as it does not update the window group's list. Consider using WWW::Scripter::WindowGroup's
attach boolean indicating whether images should be fetched. Some sites use images with special URLs as cookies and refuse to work if those images are not fetched. Most of the time, however, you probably want to leave this off, for speed's sake.
Setting this does not affect any pages that are already loaded. HTML pages are parsed and turned into a DOM tree. It is true by default. You can disable HTML parsing by passing a false value. Of course, if you are using WWW::Scripter to begin with, you won't want to turn this off will you? Nevertheless, this is useful for fetching files behind the scenes when just the file contents are needed..
WWW::Scripter overrides the
_extract_links method that
links,
find_link and
follow_link use behind the scenes, to make it use the HTML DOM tree instead of the source code of the page.
This overridden method tries hard to match WWW::Mechanize as closely as possible, which means it includes link tags, (i)frames, and meta tags with http-equiv set to 'refresh'.
This is significantly different from
$w->document->links, an HTML::DOM method that follows the W3C DOM spec and returns only 'a' and 'area' elements.
To trigger events (and event handlers), use the
trigger_event method of the object on which you want to trigger it. For instance:
$w->trigger_event('resize'); # runs onresize handlers $w->document->links->[0]->trigger_event('mouseover'); $w->current_form->trigger_event('submit'); # same as $w->submit
trigger_event accepts more arguments. See HTML::DOM and HTML::DOM::EventTarget for details.
WWW::Scripter does not implement any event loop, so you have to call
check_timers or
wait_for_timers yourself to trigger any timeouts. If you set up a loop like this,
sleep 1, $w->check_timers while $w->count_timers;
or if you use
wait_for_timers, beware that these may cause an infinite loop if a timeout sets another timeout, or if a timer is registered with
setInterval. You basically have to know what works with the pages you are browsing.
$@.
Plugins are usually under the WWW::Scripter::Plugin:: namespace. If a plugin name has a hyphen (-) in it, the module name will contain a double colon (::). If, when you pass a plugin name to
use_plugin or
plugin, it has a double colon in its name, it will be treated as a fully-qualified module name (possibly) outside the usual plugin namespace. Here are some examples:
Plugin Name Module Name ----------- ----------- Chef WWW::Scripter::Plugin::Chef Man-Page WWW::Scripter::Plugin::Man::Page My::Odd::Plugin My::Odd::Plugin
This module will need to have an
init method, and possibly two more named
options and
clone, respectively:
init), every plugin that has a clone method (as determined by
->can('clone')), will also be cloned. The new clone of the WWW::Scripter object is passed as its argument.
If the plugin needs to record data pertinent to the current page, it can do so by associating them with the document or the request via a field hash. See Hash::Util::FieldHash and Hash::Util::FieldHash::Compat.
See LWP's Handlers feature.
From within LWP's
request_* and
response_* handlers, you can call
WWW::Scripter::abort to abort the request and prevent a new entry from being created in browser history. (The JavaScript plugin does this with javascript: URLs.)
WWW::Scripter will export this function upon request:
use WWW::Scripter qw[ abort ];
or you can call it with a fully qualified name:
WWW::Scripter::abort();.
WWW::Scripter sub-modules: ::Location, ::History and ::Navigator.
See WWW::Mechanize, of which this is a subclass.
See also the following plugins:
And, if you are curious, have a look at the plugin version of WWW::Mechanize and WWW::Mechanize::Plugin::DOM (experimental and now deprecated) that this was originally based on: | http://search.cpan.org/dist/WWW-Scripter/lib/WWW/Scripter.pod | CC-MAIN-2017-30 | refinedweb | 1,116 | 63.19 |
I'm programming this RPG, and I can't get past the first battle. I can't figure out how to get the computer to break the loop when then enemy's health (enhp) is less than 1. I tried using an if statement, but then it didn't loop at all. Could you guys help me out?
#include <iostream> #include <string> using namespace std; int main (void) { int num, random_integer, hp= 100, enhp= 50; while (hp >=1 or enhp >=1) { {srand((unsigned)time(0)); for(int index=0; index<20; index++) {random_integer = (rand()%10)+1; hp = hp-random_integer; if (hp>=1) {cout<<"The enemy does "<<random_integer<<" damage, leaving you with "<<hp<<" health."; cout<<"\n1)Attack!\n 2)I've had enough- run away!"; cin>>num;} if(num ==1) {srand((unsigned)time(0)); for(int index=0; index<20; index++) {random_integer = (rand()%10)+1;} enhp = enhp-random_integer; cout<<"You have done "<<random_integer<<" damage." ;} else cout<<"You have fled"; } system("PAUSE"); return 0; } } } | https://www.daniweb.com/programming/software-development/threads/164131/programming-a-fighting-game | CC-MAIN-2022-33 | refinedweb | 162 | 63.8 |
At the beginning of my Overview of C++11, I show a simple program to compute the most common words in a set of input files. I write the program once using "old" C++ (i.e., standard C++98/03), then again using features from C++11.
In 2009, when I first published the C++11 program (at that time, what became C++11 was still known as C++0x), there was no compiler that could come anywhere near compiling it. Testing the code required replacing standard C++11 library components with similar components available in TR1 or from Boost or Just Software Solutions, and language features like auto, range-based for loops, lambda expressions, and template aliases had to be replaced with typically clumsier C++98/03 constructs that were more or less equivalent in meaning.
This week I tested my simple C++11 sample program with Stephan T. Lavavej's excellent distribution of gcc 4.7 for Windows as well as Microsoft's VC11 beta. gcc 4.7 has lots of support for C++11, but the concurrency API still seems to be largely missing, at least for Windows, so my sample program doesn't get very far with that compiler. [Update 6 April 2012: As noted in the comments below, when invoked in the proper manner on the proper platform, gcc 4.7 compiles and runs my program without modification!]
The situation with the VC11 beta is a lot better. Only two lines have to be changed. The template alias
needs to be replaced by its typedef equivalent:needs to be replaced by its typedef equivalent:
using WordCountMapType = std::unordered_map<std::string, std::size_t>;
And theAnd the
typedef std::unordered_map<std::string, std::size_t> WordCountMapType;
zlength specifier in this call to
printf,
needs to be replaced with its VC++ equivalent,needs to be replaced with its VC++ equivalent,
std::printf(" %-10s%10zu\n", (*it)->first.c_str(), (*it)->second);
I:
Other than that, the demonstration program I wrote three years ago (which, in fairness to compiler writers, was two and a half years before the C++11 standard was ratified) compiles cleanly with VC11.Other than that, the demonstration program I wrote three years ago (which, in fairness to compiler writers, was two and a half years before the C++11 standard was ratified) compiles cleanly with VC11.
std::printf(" %-10s%10Iu\n", (*it)->first.c_str(), (*it)->second);
If you have access to a compiler that compiles my program without modification, please let me know! The program itself is below. You can see a more colorful version of it, along with some commentary, and an example invocation and the corresponding output, on slides 13-15 of the free sample of my C++11 training materials.
Scott
#include <cstdio> #include <iostream> #include <iterator> #include <string> #include <fstream> #include <algorithm> #include <vector> #include <unordered_map> #include <future> using WordCountMapType = std::unordered_map<std::string, std::size_t>; WordCountMapType wordsInFile(const char * const fileName) // for each word { // in file, return std::ifstream file(fileName); // # of WordCountMapType wordCounts; // occurrences for (std::string word; file >> word; ) { ++wordCounts[word]; } return wordCounts; } template<typename MapIt> // print n most void showCommonWords(MapIt begin, MapIt end, const std::size_t n) // common words { // in [begin, end) // typedef std::vector<MapIt> TempContainerType; // typedef typename TempContainerType::iterator IterType; std::vector<MapIt> wordIters; wordIters.reserve(std::distance(begin, end)); for (auto i = begin; i != end; ++i) wordIters.push_back(i); auto sortedRangeEnd = wordIters.begin() + n; std::partial_sort(wordIters.begin(), sortedRangeEnd, wordIters.end(), [](MapIt it1, MapIt it2){ return it1->second > it2->second; }); for (auto it = wordIters.cbegin(); it != sortedRangeEnd; ++it) { std::printf(" %-10s%10zu\n", (*it)->first.c_str(), (*it)->second); } } int main(int argc, const char** argv) // take list of file names on command line, { // print 20 most common words within; // process files concurrently std::vector<std::future<WordCountMapType>> futures; for (int argNum = 1; argNum < argc; ++argNum) { futures.push_back(std::async([=]{ return wordsInFile(argv[argNum]); })); } WordCountMapType wordCounts; for (auto& f : futures) { const auto wordCountInfoForFile = f.get(); // move map returned by wordsInFile for (const auto& wordInfo : wordCountInfoForFile) { wordCounts[wordInfo.first] += wordInfo.second; } } std::cout << wordCounts.size() << " words found. Most common:\n" ; const std::size_t maxWordsToShow = 20; showCommonWords(wordCounts.begin(), wordCounts.end(), std::min(wordCounts.size(), maxWordsToShow)); }
25 comments:
It compile with gcc 4.8, though doesnt seem to run:
g++-4.8 --version
g++-4.8 (GCC) 4.8.0 20120311 (experimental)
g++-4.8 --std=c++11 prog.cpp
./a.out prog.cpp
terminate called after throwing an instance of 'std::system_error'
what(): Unknown error 18446744073709551615
Aborted
I did not investigated much.
With clang it fail compiling in GNU libstd++ headers (lots of errors seem related to atomics support).
clang++ --version
clang version 3.1 (trunk 150359)
Target: x86_64-unknown-linux-gnu
Thread model: posix
fx
So it compiles with gcc 4.8 under Linux. Cool! Did you happen to try gcc 4.7 as well? Unfortunately, I have access only to Windows, so I can't check myself.
If you get a chance to get some more information about the runtime failure, I'd be interested to know the details. It seems to work correctly for me under Windows (once I make the changes needed to get it to compile).
Scott
Threading support works if you compile gcc-4.7 on Cygwin and chose thread=posix.
Paul
Can you check to see if the program I posted compiles under the conditions you describe? If so, can you let me know if it seems to run properly?
Thanks,
Scott
Sure,
can you repost your code between pre tags or some other tag that will preserve the original format ? I'll check your code on a Mac with gcc-4.7 and on Cygwin (Windows 7).
Paul
I've used the following easy way to test things with Linux compiler in Windows: (1) install VirtualBox, it's free, and (2) install Debian in a virtual box. The only problem I had with that was lack of built-in support for Norwegian keyboard. I found a config file on the net that fixed that, and then even copy and paste between Windows and the Lunix box worked (that is, works)! Cheers, - Alf
@Anonymous: the code is between pre tags, and copying it from the blog post (either from my blog site or from the entry's version in Google Reader) using Firefox lets me paste it into a text editor with formatting intact. Are you not able to do the same thing?
Scott
It compiled on Linux with GCC 4.7.0 with only one warning:
warning: ISO C++ does not support the ‘z’ gnu_printf length modifier [-Wformat]
I've used -Wall -Wextra -pedantic -std=c++11.
When it comes to running it, the situation changes. It works fine without any arguments, but throws an "Unknown error -1" std::system_exception if I pass some file name to it.
It turns out one has to pass -pthread to g++ and it starts working. I hope it will be on by default when compiling in c++11 mode in the future versions of gcc.
To sum up, your example works perfectly without any code modifications on GCC 4.7.0.
@Scott
Sorry about that, apparently when I press "Show Original Post" when I'm in the comment zone, I'm redirected to a version of your page without proper formatting.
Using a direct link to your blog works perfectly:
So, I've tested your code on Mac OSX Lion and Cygwin (under Windows 7) with a custom compiled gcc-4.7.0 and it works perfectly. Here is a link to some screenshots (I'll keep them for a few days in my Dropbox folder), feel free to use them as you wish:
Paul
I've revised the title of this blog entry and updated the content to reflect the fact that, on the proper platform and with the proper command line options, gcc 4.7 accepts my sample program. I'm very excited about this, and I thank Wojciech and Anonymous for letting me know about it. C++11, at least for the simple demonstration I wrote three years ago, is finally here!
Scott
It compiles on Mac with the macports g++-4.7:
g++-mp-4.7 (GCC) 4.7.0 20120225 (experimental). It runs and seems to produce the correct output.
As Wojciech Cierpucha realized, pthread is not automatically linked, and nothing warn about any symbol missing.
Here the test with gcc-4.8 again, no errors, no warning, and seem to run fine:
g++-4.8 --version
g++-4.8 (GCC) 4.8.0 20120311 (experimental)
g++-4.8 --std=c++11 -lpthread prog.cpp
./a.out prog.cpp
155 words found. Most common:
// 13
#include 9
{ 8
} 8
for 7
= 7
const 5
return 4
WordCountMapType 4
wordCounts; 3
words 3
of 2
: 2
<< 2
std::vector 2
most 2
it 2
in 2
common 2
!= 2
Regards,
fx
Adding -Wall -pedantic print this warning though:
prog.cpp:41:5: warning: ISO C++ does not support the ‘z’ gnu_printf length modifier [-Wformat]
I guess its just a matter of detail ;)
fx
Regarding gcc's diagnostic, "warning: ISO C++ does not support the ‘z’ gnu_printf length modifier," this is incorrect as of C++11. C++11 relies on C99 for the specification for printf formatting strings, and "z" is part of C99 (in 17.6.9.1/7).
Scott
Yep its a matter of details compared to the first time I played with gcc support for c++0x! ;)
gcc 4.6 is perfectly fine as well. Needs the using changed to a typedef as MSVC11Beta apparently.
Output below. I would like to ask a sneaky wee question if possible (I bought the c++11 overview, very good, no enough grovelling). Could you tell me if
1: Will std::async launch automatically in async mode or do we need to make sure by passing a launch to it (std::launch::async) ?
2: Are there any thread pool policies compilers should be expected to follow, i.e. do I need to worry about thread pools/efficient reuse etc.?
(I did read where you say we need to care about cleaning up thread local storage objects and that's OK)
~/ggcov-0.8.3 $ gcc --version
gcc (Ubuntu/Linaro 4.6.3-1ubuntu4) 4.6.3
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE
g++ -Wall -Wextra -Weffc++ -std=c++0x -pthread d.cc
Output
155 words found. Most common:
// 13
#include 9
{ 8
} 8
for 7
= 6
const 5
return 4
WordCountMapType 3
typedef 3
words 3
wordCounts; 3
i 2
word; 2
in 2
MapIt 2
it 2
std::vector 2
of 2
auto 2
@David:
1. By default, async may choose whether to run its function synchronously or asynchronously, the idea being to give it the flexibility to avoid oversubscription. If you want to guarantee that the function passed to std::async will run asychronously, you need to specify a launch policy of std::launch::async.
2. The standard gives no guarantees about thread pools or efficient use of threads or scheduling fairness, etc. All that is considered QoI (Quality of Implementation) stuff. It's reasonable to assume that once implementers have the basic functionality under control, they will turn their attention to QoI issues. Bartosz Milewski's blog post from last October is worth reading in this regard.
Scott
@Scott
Thanks very much Scott for answering my cheekily posted questions. I am now on packaged_task and looking at opportunities there. The videos by Bartosz are very much worth a watch for anyone interested in c++11 concurrency support, plus of course Anthony Williams blog.
Can I just add great books Scott!! all our developers get at least Effective c++ and more Effective c++ on their desk when starting with us. They are excellent, I keenly await the c++11 version, although I think this will take a load of time to find all the nuances there to keep to your high standards.
it compiles with -Wall -Wextra and runs correctly with gcc-4.7.0 (built from source) on os/x 10.7.3, both 32 and 64.
Compiles and works fine on my MacBook Pro OSX Lion using gcc-4.7.0 :)
>: g++ --version
g++ (GCC) 4.7.0
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
>: g++ -std=c++11 myers.cpp -o myers
>: ./myers myers.cpp
155 words found. Most common:
// 13
#include 9
{ 8
} 8
for 7
= 7
const 5
return 4
WordCountMapType 4
wordCounts; 3
words 3
of 2
: 2
<< 2
std::vector 2
most 2
it 2
in 2
common 2
!= 2
AS I remember it compiling gcc 470 on my mac was hard because apple ship LLVM based gcc with Xcode 4. I'll build 4.7.1 and see if it still works.
BTW, rather than testing with gcc under windows - which I have found to be a pain, there are some great free-to-use virtualisation programs like virtualbox.org.
Virtualisation lets you install linux for just such demos and tests even though you have a windows computer.
1) Install virtualbox from virtualbox.org (< 5 mins with high-speed corporate internet)
2) Install Ubuntu from ubuntu.com (< 30 mins)
3) sudo apt-get install g++ (< 5 mins) - for the latest g++ that version of ubuntu has
3a) follow the build instructions for gcc and apt-get install any prerequisites for the latest full gcc/g++ version (3 hours)
You will then have a Linux desktop ready to compile using a full version of g++ in a window on your windows machine (pretty cool - you really ought to try this technology out).
It compiled with clang (svn3.2) and libc++ (and probably has for a long time; I don't see any C++11 features that haven't been in clang for a long time now).
It also appears to run fine unless I build using -fcatch-undefined-behavior. Running the program built with that flag indicates that there's some kind of undefined behavior in the program. The output doesn't give any hints as to the cause though. I understand some work is being done to improve that so maybe more info will be available soon.
GCC printf warning bug should be fixed as of 4.8: | http://scottmeyers.blogspot.com/2012/04/c11-is-almost-here-for-real.html | CC-MAIN-2019-18 | refinedweb | 2,414 | 64.61 |
No matter what kind this lethal arsenal and you will generally achieve more closing sales within HOURS of skim reading the Guide -- and we have a 90 day 100% money-back guarantee to back that up.
These fresh and modern comebacks and rebuttals are BETTER and DIFFERENT than the common stuff that's been available for ages. You'll convert more leads into sales in no time! What you'll gain from this Guide are new, unique, and original TRUST-BUILDING, FAITH-GENERATING conversational comeback arguments intended for educated clients and today's complex business world. Continue reading for the FREE sample scripts.
Visualize yourself for a moment, you're positively breezing through a sales call for the reason that you are totally 100% sure that you will win any objection a customer could possibly throw at you ... If you could feel like this with all your calls, it should be worthwhile deciding for yourself if this Guide is really as powerful as everyone says it is, don't you think? ... prove it for yourself.
This guide is 49 pages long (over 15,000 words) and it comprises of 152 clever and effective Comebacks and Rebuttals, 14 Influential Closing Power Statements, , 38 sales questions to isolate the objection, 6 budget data-mining questions, and 4 free Bonus Sections that are a huge value in their own right. This is an instant download ...
If you refuse to help yourself, and FAIL you learn just two new things from this Guide, then you would probably close more sales and make more money,.
business to business telemarketing |
cold call cowboy |
cold calling alternative |
cold calling estate real |
cold calling sales script |
cold calling success |
cold calling training |
free cold calling script |
free sales script |
free selling technique |
handle objection |
import sales lead |
insurance sales lead |
negotiation technique |
negotiation tip |
real estate negotiation |
response handling |
sales force training |
sales prospecting technique |
sales techniques |
sales training course uk |
sales training manual |
selling tip |
spin sales training |
telemarketing lead generation | http://www.acceleratedsoftware.net/sales-techniques/handling-difficult-people.html | crawl-003 | refinedweb | 334 | 52.63 |
Technical Support
On-Line Manuals
RL-ARM User's Guide (MDK v4)
#include <rtl.h>
void os_sys_init_user (
void (*task)(void), /* Task to start */
U8 priority, /* Task priority (1-254) */
void* stack, /* Task stack */
U16 size); /* Stack size */
The os_sys_init_user function initializes and starts the
Real-Time eXecutive (RTX) kernel. Use this function when you
must specify a large stack for the starting task.
The task argument points to the task function stack argument points to a memory block reserved for
the stack to use for the task. The size argument
specifies the size of the stack in bytes.
The os_sys_init_user function is in the RL-RTX library. The
prototype is defined in rtl.h.
Note
The os_sys_init_user function does not return. Program
execution continues with the task identified by the task
argument.
os_sys_init, os_sys_init_prio
#include <rtl.h>
static U64 stk1[400/8]; /* 400-byte stack */
void main (void) {
os_sys_init_user (task1, 10, &stk1, sizeof(stk1));
while. | https://www.keil.com/support/man/docs/rlarm/rlarm_os_sys_init_user.htm | CC-MAIN-2020-34 | refinedweb | 155 | 66.94 |
This is a sample chapter from Learning IPython for Interactive Computing and Data Visualization, second edition.
If you don't know Python, read this section to learn the fundamentals. Python is a very accessible language and is even taught to school children. If you have ever programmed, it will only take you a few minutes to learn the basics.
Open a new notebook and type the following in the first cell:
print("Hello world!")
Hello world!
TIP (Prompt string): Note that the convention chosen in this book is to show Python code (also called the
input) prefixed with
In [x]:(which shouldn't be typed). This is the standard IPython prompt. Here, you should just type
print("Hello world!")and then press
Shift-
Enter.
Congratulations! You are now a Python programmer.
Let's use Python as a calculator.
2 * 2
4
Here,
2 * 2 is an expression statement. This operation is performed, the result is returned, and IPython displays it in the notebook cell's output.
TIP (Division): In Python 3,
3 / 2returns
1.5(floating-point division), whereas it returns
1in Python 2 (integer division). This can be source of errors when porting Python 2 code to Python 3. It is recommended to always use the explicit
3.0 / 2.0for floating-point division (by using floating-point numbers) and
3 // 2for:
a = 2
And here is how to use an existing variable:
a * 3
6
Several variables can be defined at once (this is called unpacking):
a, b = 2, 6
There are different types of variables. Here, we have used a number (more precisely, an integer). Other important types include floating-point numbers to represent real numbers, strings to represent text, and booleans to represent
True/False values. Here are a few examples:
somefloat = 3.1415 sometext = 'pi is about' # You can also use double quotes. print(sometext, somefloat) # Display several variables.
pi is about 3.1415
Note how we used the
# character to write comments. Whereas Python discards the comments completely, adding comments in the code is important when the code is to be read by other humans (including yourself in the future).
String escaping refers to the ability to insert special characters in a string. For example, how can you insert
' and
", given that these characters are used to delimit a string in Python code? The backslash
\ is the go-to escape character in Python (and in many other languages too). Here are a few examples:
print("Hello \"world\"") print("A list:\n* item 1\n* item 2") print("C:\\path\\on\\windows") print(r"C:\path\on\windows")
Hello "world" A list: * item 1 * item 2 C:\path\on\windows C:\path\on\windows
A list contains a sequence of items. You can concisely instruct Python to perform repeated actions on the elements of a list. Let's first create a list of numbers:
items = [1, 3, 0, 4, 1]
Note the syntax we used to create the list: square brackets
[], and commas
, to separate the items.
The built-in function
len() returns the number of elements in a list:
len(items)
5
INFO (Built-in functions): Python comes with a set of built-in functions, including
print(),
len(),
max(), functional routines like
filter()and
map(), and container-related routines like
all(),
any(),
range()and
sorted(). You will find the full list of built-in functions at.
Now, let's compute the sum of all elements in the list. Python provides a built-in function for this:
sum(items)
9
We can also access individual elements in the list, using the following syntax:
items[0]
1
items[-1]:
items[1] = 9 items
[1, 9, 0, 4, 1]
We can access sublists with the following syntax:
items[1:3]
:
my_tuple = (1, 2, 3) my_tuple[1]
2
Dictionaries contain key-value pairs. They are extremely useful and common:
my_dict = {'a': 1, 'b': 2, 'c': 3} print('a:', my_dict['a'])
a: 1
print(my_dict.keys())
dict_keys(['c', 'a', 'b'])
There is no notion of order in a dictionary. However, the native collections module provides an
OrderedDict structure that keeps the insertion order (see).
Sets, like mathematical sets, contain distinct elements:
my_set = set([1, 2, 3, 2, 1]) my_set
{1, 2, 3}
INFO (Mutable and immutable objects):.
We can run through all elements of a list using a
for loop:
for item in items: print(item)
1 9 0 4 1
There are several things to note here:
for item in itemssyntax means that a temporary variable named
itemis created at every iteration. This variable contains the value of every item in the list, one at a time.
:at the end of the
forstatement. Forgetting it will lead to a syntax error!
print(item)will be executed for all items in the list.
Python supports a concise syntax to perform a given operation on all elements of a list:
squares = [item * item for item in items] squares
[1, 81, 0, 16, 1]
This is called a list comprehension. A new list is created here; it contains the squares of all numbers in the list. This concise syntax leads to highly readable and Pythonic code. as
\t), or by inserting a number of spaces (typically, four). It is recommended to use spaces instead of tab characters. Your text editor should be configured such that the Tabular.
Sometimes, you need to perform different operations on your data depending on some condition. For example, let's display all even numbers in our list:
for item in items: if item % 2 == 0: print(item)
0 4
Again, here are several things to note:
ifstatement is followed by a boolean expression.
aand
bare two integers, the modulo operand
a % breturns the remainder from the division of
aby
b. Here,
item % 2is 0 for even numbers, and 1 for odd numbers.
==to avoid confusion with the assignment operator
=that we use when we create variables.
forloop, the
ifstatement ends with a colon
:.
ifstatement. It is indented. Indentation is cumulative: since this
ifis inside a
forloop, there are eight spaces before the
print(item)statement.
Python supports a concise syntax to select all elements in a list that satisfy certain properties. Here is how to create a sublist with only even numbers:
even = [item for item in items if item % 2 == 0] even
[0, 4]
This is also a form of list comprehension.
Code is typically organized into functions. A function encapsulates part of your code. Functions allow you to reuse bits of functionality without copy-pasting the code. Here is a function that tells whether an integer number is even or not:
def is_even(number): """Return whether an integer is even or not.""" return number % 2 == 0
There are several things to note here:
defkeyword.
defcomes the function name. A general convention in Python is to only use lowercase characters, and separate words with an underscore
_. A function name generally starts with a verb.
number.
:at the end of the
defstatement).
""". This is a particular form of comment that explains what the function does. It is not mandatory, but it is strongly recommended to write docstrings for the functions exposed to the user.
returnkeyword:
is_even(3)
False
is_even(4)
True
Here, 3 and 4 are successively passed as arguments to the function.
A Python function can accept an arbitrary number of arguments, called positional arguments. It can also accept optional named arguments, called keyword arguments. Here is an example:
def remainder(number, divisor=2): return number % divisor
The second argument of this function,
divisor, is optional. If it is not provided by the caller, it will default to the number 2, as show here:
remainder(5)
1
There are two equivalent ways of specifying a keyword argument when calling a function:
remainder(5, 3)
2
remainder(5, divisor=3):
def f(*args, **kwargs): print("Positional arguments:", args) print("Keyword arguments:", kwargs)
f(1, 2, c=3, d=4)
Positional arguments: (1, 2) Keyword arguments: {'c': 3, 'd': 4}
Inside the function,
args is a tuple containing positional arguments, and
kwargs is a dictionary containing keyword arguments.
When passing a parameter to a Python function, a reference to the object is actually passed (passage by assignment):
Here is an example:
my_list = [1, 2] def add(some_list, value): some_list.append(value) add(my_list, 3) my_list
[1, 2, 3]
The function
add():
Let's discuss about an example:
def divide(a, b): return a / b
divide(1, 0)
--------------------------------------------------------- ZeroDivisionError Traceback (most recent call last) <ipython-input-2-b77ebb6ac6f6> in <module>() ----> 1 divide(1, 0) <ipython-input-1-5c74f9fd7706> and the exception's type and message. The stack trace shows all functions.
We will see later in this chapter how to debug such errors interactively in IPython and in the Jupyter Notebook. Knowing how to navigate up and down in the stack trace is critical when debugging complex Python code.
Object-oriented programming (or
'hello' is an instance of the built-in
str type (string). The
type() function returns the type of an object, as shown here:
type('hello')
str
There are native types, like
str or
int (integer), and custom types, also called classes, that can be created by the user.
In IPython, you can discover the attributes and methods of any object with the dot syntax and tab completion. For example, typing
'hello'.u and pressing Tab automatically shows us the existence of the
upper() method:
'hello'.upper()
'HELLO'
Here,
upper() is a method available to all
str objects; it returns an uppercase copy of a string.
A useful string method is
format(). This simple and convenient templating system lets you generate strings dynamically:
'Hello {0:s}!'.format('Python')
'Hello .
Python is a multi-paradigm language; it notably supports imperative, object-oriented, and functional programming models. Python functions are objects and can be handled like other objects. In particular, they can be passed as arguments to other functions (also called higher-order functions). This the essence of functional programming.
Decorators provide a convenient syntax construct to define higher-order functions. Here is an example using the
is_even() function from the previous Functions section:
def show_output(func): def wrapped(*args, **kwargs): output = func(*args, **kwargs) print("The result is:", output) return wrapped
The
show_output() function transforms an arbitrary function
func() to a new function, named
wrapped(), that displays the result of the function:
f = show_output(is_even) f(3)
The result is: False
Equivalently, this higher-order function can also be used with a decorator:
@show_output def square(x): return x * x
square(3)
The result is: 9
You can find more information about Python decorators at and at. "Hello" (without parentheses) works in Python 2 but not in Python 3, while
print("Hello") works in both Python 2 and Python 3.
There are several non-mutually exclusive options to write portable code that works with both versions:
Here are a few references:
You now know the fundamentals of Python, the bare minimum that you will need in this book. As you can imagine, there is much more to say about Python.
There are a few further basic concepts that are often useful and that we cannot cover here, unfortunately. You are highly encouraged to have a look at them in the references given at the end of this section:
- range and enumerate
- pass, break, and continue, to be used in loops
Here are some slightly more advanced concepts that you might find useful if you want to strengthen your Python skills:
- with statements for safely handling contexts
- the pickle module for persisting Python objects on disk and exchanging them across a network
Finally, here are a few references: | http://nbviewer.jupyter.org/github/ipython-books/minibook-2nd-code/blob/master/chapter1/14-python.ipynb | CC-MAIN-2017-43 | refinedweb | 1,927 | 61.87 |
Memory-backed storage that implements the Web Storage API, making it a drop-in replacement for
localStorage and
sessionStorage in environments where these are not available.
Project website
For Node
npm install --save memorystorage
For browsers
memorystorage can be used directly from CDN, from a local script file, or from a module loader.
This is by far the easiest method and gives a good performance boost. Use this if you are in doubt.
<script src=""></script>
Download memorystorage.min.js, place it in a folder
lib in the root of your website and include it like this:
<script src="lib/memorystorage.min.js"></script>
Memorystorage implements the Universal Module Pattern and as such, is available to be consumed from Node modules as well as via an AMD loader such as RequireJS.
var MemoryStorage = require('memorystorage');
// here, the MemoryStorage function is available
var myStorage = new MemoryStorage('my-app');
require.config({ paths: { 'memorystorage': '' } });
import MemoryStorage from 'memorystorage'
// here, the MemoryStorage function is available
const myStorage = new MemoryStorage('my-app');
var store = MemoryStorage('my-store');
var global = MemoryStorage();
Instances of
MemoryStorage expose an immutable
id property that is set to
the id the store was created with:
alert(store.id);  // alerts 'my-store'
alert(global.id); // alerts 'global'
Here is some code to print all the keys and values in the
store object that does not limit itself
to the Web Storage API:
var keys = Object.keys(store);
for (var i = 0; i < keys.length; i++) {
  var key = keys[i];
  var value = store[key];
  console.info(key + ': ' + value);
}
Here is the same code, rewritten to stay within the API:
for (var i = 0; i < store.length; i++) {
  var key = store.key(i);
  var value = store.getItem(key);
  console.info(key + ': ' + value);
}
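Under the hood, a memory-backed store exposing this API can be as simple as a plain object plus a handful of methods. The following is a rough sketch of the idea, not the library's actual implementation:

```javascript
// Minimal memory-backed object exposing the core Web Storage API surface
// (getItem, setItem, removeItem, clear, key, length).
function SimpleMemoryStorage() {
  var items = {};
  return {
    getItem: function (key) { return key in items ? items[key] : null; },
    setItem: function (key, value) { items[key] = String(value); },
    removeItem: function (key) { delete items[key]; },
    clear: function () { items = {}; },
    key: function (i) { return Object.keys(items)[i] || null; },
    get length() { return Object.keys(items).length; }
  };
}

var s = SimpleMemoryStorage();
s.setItem('greeting', 'hello');
console.log(s.getItem('greeting')); // 'hello'
console.log(s.length);              // 1
```

The real library adds per-id stores, event support, and the property-style access shown above, but the storage itself is just an in-memory object.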
MemoryStorage is type-agnostic; it doesn't care about the type of data you store. If you want to remain within the Web Storage API, you should only read and write strings; however, you can store other types just as well:
store.myObject = {my: 'object'};
alert(store.myObject.my); // alerts 'object'

var tree = {
  nested: {
    objects: {
      working: 'Sure!'
    }
  }
};
store.setItem('tree', tree);
alert(store.tree.nested.objects.working); // alerts 'Sure!'
I'd like to draw your attention to the people that contributed to this project with bug reports, documentation, pull requests or other forms of support.
©2016 by Stijn de Witt and contributors. Some rights reserved.
Licensed under the Creative Commons Attribution 4.0 International (CC-BY-4.0) Open Source license. | https://openbase.com/js/memorystorage | CC-MAIN-2021-39 | refinedweb | 412 | 50.63 |
Carousel
Carousel allows you to add user-configurable rotating banners to any section of a Plone site.
Project Description
- Introduction
- Compatibility
- Installation
- Using Carousel
- Detailed overview and tests
- Customizing Carousel
- Carousel Events
- Tests
- Making a release
- Support
- Change history
- Contributors
Introduction
Carousel is a tool for featuring a rotating set of banner images in any section of your Plone site. Features:
- Different sets of banners can be used in different sections of the site.
- Banners can link to another page in the site, or an external URL.
- Carousel provides options to customize the appearance of the banner as well as the length and type of transition.
- An optional pager provides navigation among the banners.
- Transition effects are implemented using the jQuery javascript library which is included with Plone, so they are pretty lightweight.
- Banners do not rotate while the mouse cursor is hovering over the Carousel.
- Banner and pager templates can be registered to customize the appearance of the Carousel.
- Carousel implements jQuery events, allowing for integration with custom javascripts.
- Carousel images can be lazily loaded, to conserve user and server bandwidth when the full carousel cycle is not shown
- Carousel images can be made to appear in random order
Compatibility
Carousel requires Plone 3.0 or greater, mainly because it renders itself in a viewlet.
Installation
Add Products.Carousel to your buildout's list of eggs, and re-run buildout. If you get version conflicts while running buildout, you can use a known good version set to find versions compatible with plone.app.z3cform:
Start Zope, go to Site Setup -> Add-on Products in your Plone site and install the Carousel product.
Using Carousel
A detailed guide to using Carousel is available.
Customizing Carousel
It is possible to customize presentation of the Carousel by registering custom templates for the banner and pager. To simplify the registration of Carousel templates and their associated menu items, Carousel includes special ZCML directives. To begin, define the Carousel XML namespace in your product's configure.zcml:
xmlns:carousel=""
Then load the ZCML for Carousel:
<include package="Products.Carousel" />
Finally, register your templates:
<carousel:banner ... />
<carousel:pager ... />
Both the banner and pager directives can also accept a layer attribute to restrict the availability of the template to a particular browser layer.
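For illustration only (the attribute names below are assumptions based on typical Zope-style directives; check Carousel's own meta.zcml for the actual schema), a layer-restricted banner registration might look like:

```xml
<carousel:banner
    name="my-banner"
    template="banner-custom.pt"
    title="My Custom Banner"
    layer=".interfaces.IMyThemeLayer"
    />
```

With a layer attribute, the template only appears as an option when that browser layer is active, for example when a matching theme is installed.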
To make the development of banner and pager templates less repetitive, Carousel includes macros in the banner-base and pager-base templates. See banner-default.pt and pager-titles.pt for examples of how to use these macros.
Carousel Events
Carousel triggers jQuery events at key points in its operation, making it possible to integrate Carousel with other interactive elements on the page. These events are triggered on the Carousel container element:
- afterAnimate
- Triggered immediately after animation completes. It passes as parameters the Carousel object, the index of the previous banner and the index of the current banner.
- beforeAnimate
- Triggered immediately before animation begins. It passes as parameters the Carousel object, the index of the current banner and the index of the banner that will be active at the end of the animation.
- pause
- Triggered when animation is paused, such as when the user mouses over the Carousel. It passes as its parameter the Carousel object.
- play
- Triggered when animation begins or resumes. It passes as its parameter the Carousel object.
The Carousel object, which is passed as the first optional parameter to event handlers, is a Javascript object that encapsulates the current state of the Carousel. See carousel.js for details of the Carousel object.
To bind a callback to one of the Carousel events, select the Carousel container element and call the jQuery bind method on it:
(function ($) {
  $('.carousel').bind('afterAnimate', function (event, carousel, old_index, new_index) {
    console.log(carousel);
    console.log(old_index);
    console.log(new_index);
  });
})(jQuery);
Making a release
Releases are done with zest.releaser.
Example:
# Install zest.releaser in a venv and activate that venv
fullrelease
Support
- Use stackoverflow.com for usage and development-related questions
- File bugs and patches at Github project
Change history
2.2.1 (2013-03-15)
- Fixed error on hover. [kroman0]
2.2 (2012-12-12)
- Browser view support for showing and hiding carousel viewlet added. [taito]
- Lazily load carousel images [miohtama]
- Don't display <img alt=""> text on the carousel images, as it leads to confusion with partially loaded carousel images [miohtama]
- Added checkbox to enable lazy loading carousel images. [kroman0]
- A checkbox for randomizing the order of shown pictures [miohtama]
- Added Plone 4.3 compatibility [davilima6]
2.1 (2011-09-06)
- Updated i18n helper script to reflect current translations. [yomatters]
- Added Finnish translation. [datakurre]
- Allow users with the Site Administrator role to add Carousel banners by default. [davisagli]
- Added collective.googleanalytics tracking plugin for tracking banner clicks. [yomatters]
- Added Dutch translations. [jladage]
2.1b3 (2011-01-19)
- Changed default banner to use the image URL first and then fall back to the uploaded image. [yomatters]
- Made Carousel respect folder order of banners. [yomatters]
- Fixed animation logic for sliding Carousel. [yomatters]
- Made link URL optional. [yomatters]
- Added the ability to enter an external image URL instead of uploading an image. [yomatters]
- Fixed permission bug that affected unpublished Carousel folders. [yomatters]
- Fixed Carousel banner lookup so that banner view permissions are respected. [yomatters]
- Fixed a bug that affected folders containing an item with the ID 'carousel' that was not a Carousel folder. [yomatters]
2.1b2 (2010-12-08)
- Fixed javascript error on Plone 3. [yomatters]
2.1b1 (2010-12-06)
- Fixed known good versions set link. [yomatters]
- Made instructions for adding and modifying banners more prominent. [yomatters]
- Added option for setting the ID of the Carousel. [yomatters]
2.0 (2010-11-19)
- Split plugin into functions, making it easier to override parts of the Carousel behavior. [yomatters]
2.0b1 (2010-09-30)
- Added slide as a possible transition type. [yomatters]
- Refactored javascript as a jQuery plugin that triggers jQuery events on transitions. [yomatters]
- Added settings to customize the appearance of the banner and pager and the length and type of transition. [yomatters]
- Added an optional pager for navigation among banners. [yomatters]
- Replaced description field on Carousel banners with a rich-text body field. [yomatters]
- Remove the browser layer to help with use in Plone 2.5. [davisagli]
1.1 (2010-03-26)
- In Plone 4, add viewlet to the abovecontent viewlet manager by default, to avoid weird styles. [davisagli]
- Added Spanish translation. [tzicatl]
- Only show published banners in the Carousel, even for users who have permission to see others. [davisagli]
1.0 (2009-03-31)
- Changed behavior of text links to swap banner on mouseover. [davisagli]
1.0b3 (2009-02-07)
- Add 'Carousel Banner' to types not searched. [davisagli]
- Locate carousel folder correctly on containers used as default pages (e.g. a Topic) [davisagli]
- Apply proper security declarations to the getSize and tag methods of the banner type so that the view works okay when customized TTW. [davisagli]
- Only display the carousel on default view; not any of the other tabs. [davisagli]
- Fix viewlet removal on uninstallation. [davisagli]
- Fix duplicate entries in quick installer. [davisagli]
1.0b2 (2009-02-04)
- Declare dependency of our custom GS import step on the viewlets step. [davisagli]
- Separate the (globally-registered) template from the (locally-registered) viewlet, so that the former can be customized using portal_view_customizations. [davisagli]
- Added banner description to the template. Changed the 'carousel-title' class to 'carousel-button' so I could split out 'carousel-title' and 'carousel-description'. [davisagli]
- Handle non-structural folders correctly. [davisagli]
1.0b1 (2009-02-03)
- Initial release. [davisagli]
Contributors
- David Glick [davisagli], Groundwire, Author
- Matt Yoder [yomatters], Groundwire
- Noe Nieto [tzicatl], NNieto CS, Spanish translations
- Taito Horiuchi [taito]
- Mikko Ohtamaa [miohtama]
Current Release
Products.Carousel 2.2.1
Released Mar 15, 2013
Get Carousel for all platforms
- Products.Carousel-2.2.1.zip
- If you are using Plone 3.2 or higher, you probably want to install this product with buildout. See our tutorial on installing add-on products with buildout for more information. | https://plone.org/products/carousel | CC-MAIN-2015-48 | refinedweb | 1,322 | 58.08 |
Working with Axis2: Making a Java Class into a Service
How to Exclude Some of a POJO's Methods
As you know, by default Axis2 exposes all public methods; however, there are instances where you need to exclude some of them, or do not need to expose all of them. So now, let's look at how to control this behavior. Say you need to expose only the "echo" and "add" methods and you do not want to expose the "update" method; you can do that by adding the following entry to services.xml. It should be noted here that you cannot exclude operations when you use the one-line deployment mechanism; however, even when you deploy a service programmatically, you can exclude operations.
<excludeOperations>
    <operation>update</operation>
</excludeOperations>
Add the above entry to services.xml and redeploy the service. Then, when you click on ?wsdl, you would see only two methods, and you will not see the "update" operation in the WSDL.
What Type of Bean Can You Write?
- Need to have getter and setter methods.
- Need to have default constructor.
- Cannot have properties that start with upper-case letters. As an example, it is not allowed to have a property such as "private String Age" in a bean but you can have "private String age."
- A bean's properties can be some other bean, primitive types, any kind of object arrays, DataHandlers, and the like.
Now, write a simple JavaBean and try to use it inside the service class. Your bean should look like the following.
public class Address {
    private String street;
    private String number;

    public String getStreet() {
        return street;
    }

    public void setStreet(String street) {
        this.street = street;
    }

    public String getNumber() {
        return number;
    }

    public void setNumber(String number) {
        this.number = number;
    }
}
Now, you can change your service implementation class and use Address bean as follows;
package sample;

public class SampleService {
    public String echo(String value) {
        return value;
    }

    public int add(int a, int b) {
        return a + b;
    }

    public void update(int c) {
    }

    public Address get(String name) {
        Address address = new Address();
        address.setNumber("Number");
        address.setStreet("Street");
        return address;
    }
}
Compile the code again, create another service archive file, and redeploy the service. Then, look at the WSDL file carefully; you will see new operations as well as a new schema element for Address in the types section.
Does It Have Support for Object Arrays?
It is possible to write POJO applications with object arrays, and you can have an object array as a field of a bean or as a method parameter or return type in a Service class, as shown below. It should be noted here that the object array can be of any kind.
public Address[] getAddress(String [] names){ }
Or it is possible to have something like this.
public Address[] getAddress(String [] names , Address[] address , int [] values){ }
How to Write a POJO with Binary Support
You can write your POJO to accept or return binary data, and there you can use either byte[] or DataHandler. Whichever you use, depending on the Axis2 configuration, Axis2 will serialize and de-serialize the data as Base64 or MTOM. To have binary support, you can write your Service class as shown below.
Sending binary data
public DataHandler getImage(String fileName) {
    File file = new File(fileName);
    DataHandler dh = new DataHandler(new FileDataSource(file));
    return dh;
}

public byte[] getImage(String fileName) {
    // Logic of creating byte array
    return byteArray;
}
Page 2 of 3
| http://www.developer.com/services/article.php/10928_3726461_2/Working-with-Axis2-Making-a-Java-Class-into-a-Service.htm | CC-MAIN-2015-22 | refinedweb | 562 | 52.8 |
Hi all,
I am trying to implement my routes in Camel XML (vs the Java DSL).
Can anyone help me convert the following DSL to XML?
/**
* Define the route from "A" where applications puts
* their HL7 messages.<br>
* First : the HL7 message is transformed to XML.<br>
* second : message is routed to both "B" and "C".
*/
from("activemq:A")
.process(new HL7toXML())
.to("activemq:B", "activemq:C");
I have no problem with the from and to clauses, but I don't know how to
express the process clause in XML (the documentation is not verbose on the
subject :confused:).
HL7toXML is a class implementing the Processor interface like the following:
public class HL7toXML implements Processor {
    public void process(Exchange exchange) throws Exception {
        String hl7Message = exchange.getIn().getBody(String.class);
        String xmlMessage = HL7TransformerUtil.hl7ToXml(hl7Message);
        exchange.getOut().setBody(xmlMessage, String.class);
    }
}
Thanks in advance.
PJ.
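For reference, a common way to express this in Camel's Spring XML is to register the processor as a bean and point a <process> element at it. A sketch along these lines (the bean id is illustrative, and the camelContext namespace URI varies with the Camel version):

```xml
<bean id="hl7ToXml" class="HL7toXML"/>

<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="activemq:A"/>
    <process ref="hl7ToXml"/>
    <to uri="activemq:B"/>
    <to uri="activemq:C"/>
  </route>
</camelContext>
```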
--
View this message in context:
Sent from the Camel - Users mailing list archive at Nabble.com. | http://mail-archives.apache.org/mod_mbox/camel-users/200710.mbox/%3C13021883.post@talk.nabble.com%3E | CC-MAIN-2014-52 | refinedweb | 159 | 57.47 |
Opened 7 years ago
Closed 7 years ago
#14640 closed defect (fixed)
Refactor the plot_expose function into a method
Description (last modified by )
This is just a needless slap in the face of OOP ;-)
Also, a better name should be picked. How about
plot().describe()
Finally, sort by zorder and then alphabetically for doctesting sanity.
Attachments (2)
Change History (22)
comment:1 Changed 7 years ago by
comment:2 Changed 7 years ago by
Changed 7 years ago by
comment:3 Changed 7 years ago by
- Cc nthiery tscrim added
- Status changed from new to needs_review
comment:4 Changed 7 years ago by
The patch looks good to me, but how about a more specific (descriptive :P) name such as
text_description()?
Best,
Travis
comment:5 Changed 7 years ago by
Active is better than the passive verb form. And of course it's text; did you expect that it is going to read it out to you through the speaker? ;-)
comment:6 Changed 7 years ago by
- Reviewers set to Travis Scrimshaw
- Status changed from needs_review to positive_review
That would be awesome if it did. I'm just thinking when I see this on tab completion that it is somewhat vague (but that's what the doc is for). Anyhow, I can't think of a better name, so I'm setting this to positive review. Nicolas, if you have any issues with the patch, feel free to set this back.
comment:7 Changed 7 years ago by
Hi Volker,
+1 for the change: I was just lazy at this point about touching yet another thing outside of root systems and risking creating a conflict, and also launching a discussion about what would be the right output, etc. Thanks for fixing my laziness!
As for the name, why is active better? Isn't the convention to use a
noun describing the result for methods whose main purpose is to return
something, and a verb describing the action for methods whose main
purpose is to change
self
? At this point I would lean for
"description".
Oh, and we might want to check if e.g. matplotlib does not have already a convention for this. Related things are the x3d_str and mtl_str methods in 3d plots.
Anyway, just throwing ideas in the air. Please proceed as you see fit!
comment:8 Changed 7 years ago by
I'd say noun if you return an object, otherwise verb form. E.g.:

- foo.normalization() returns a number
- foo.normalize() modifies foo if necessary but doesn't return anything.

The former makes sense to chain together in English, as in foo.normalization().numerator(); the latter doesn't. Similarly, list.sort() vs. sorted(list), etc.
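The list.sort() / sorted(list) distinction can be seen directly in a few lines (plain Python, nothing Sage-specific):

```python
# sorted() (noun-like) returns a new list and leaves the input alone;
# list.sort() (verb-like) mutates in place and returns None.
nums = [3, 1, 2]
result = sorted(nums)
print(result)    # [1, 2, 3]
print(nums)      # [3, 1, 2]  (unchanged)
returned = nums.sort()
print(nums)      # [1, 2, 3]
print(returned)  # None
```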
comment:9 follow-up: ↓ 10 Changed 7 years ago by
And this method doesn't return anything, it just prints to the screen.
comment:10 in reply to: ↑ 9 Changed 7 years ago by
And this method doesn't return anything, it just prints to the screen.
Oh it does? Yikes! And I wrote this? Oops!
Ok, that explains the confusion; we agree on the naming conventions :-)
I'd rather have it return a string and use the idiom:
print ....plot().description()
so that we could reuse the result elsewhere in the code.
Thanks!
comment:11 Changed 7 years ago by
- Milestone changed from sage-5.10 to sage-5.11
Changed 7 years ago by
comment:12 Changed 7 years ago by
- Status changed from positive_review to needs_work
I've uploaded a review patch which changes describe(), which prints to the screen, into description(), which returns a string, as per Nicolas' request. Needs a review.
comment:13 Changed 7 years ago by
- Status changed from needs_work to needs_review
comment:14 Changed 7 years ago by
- Status changed from needs_review to positive_review
Thanks, looks good to me!
comment:15 follow-up: ↓ 19 Changed 7 years ago by
Shouldn't the
plot_expose function have been deprecated along with this change?
comment:16 Changed 7 years ago by
Was never imported into the global namespace, so it does not need deprecation.
comment:17 Changed 7 years ago by
Oh. I missed that. Seems combinat specific, but it is good to see it in the plot code.
comment:18 Changed 7 years ago by
Actually, this is quite a useful feature. Perhaps it should be documented somewhere in the plot code. Probably in the Graphics class of sage/plot/graphics.py. Need not be done in this ticket.
comment:19 in reply to: ↑ 15 Changed 7 years ago by
comment:20 Changed 7 years ago by
- Merged in set to sage-5.11.beta0
- Resolution set to fixed
- Status changed from positive_review to closed
Initial patch | https://trac.sagemath.org/ticket/14640 | CC-MAIN-2019-51 | refinedweb | 778 | 72.87 |
On 21/06/2015 at 01:13, xxxxxxxx wrote:
Hello,
I'm trying to make an Xpresso setup; as a beginner in Python I am having trouble setting the axis to the center of an object. (For example, by default if we convert a text object into an editable spline, its alignment stays left. I want its axis to be set to the center automatically, because other calculations are being made assuming it is in the center.)
As I understand it, SetMg should take a matrix value, but I could not convert my vector data to a matrix.
And of course it should only run once. (Sorry if these questions are silly, as I said I'm a beginner :)
Here is my code (I'm using it as a Python node in Xpresso):
import c4d

def setAxis(obj):
    size = obj.GetRad() * 2
    pos = obj.GetMp() + obj.GetAbsPos()
    for i, point in enumerate(obj.GetAllPoints()):
        obj.SetPoint(i, point * pos)
    obj.Message(c4d.MSG_UPDATE)
    obj.SetRelPos(pos)  # ????
    obj.SetMg(pos)  # ??? Axis should be in center of an object but how???

def main():
    global Source
    if Source.CheckType(c4d.Opoint):
        setAxis(Source)
    doc.EndUndo()
    c4d.EventAdd()
Hi, I'm trying to get this bit of code to work so that a train goes up the track and leaves, then a train comes down the track and leaves, then a train comes up the track, and so on. Only one train is allowed onto the track at any time. My output starts off as this:
Train 1 entered track, going DOWN
Train 1 left track, going DOWN
Train 7 entered track, going UP
Train 7 left track, going UP
Train 3 entered track, going DOWN
Train 3 left track, going DOWN
Train 6 entered track, going UP
Train 6 left track, going UP
which is right, but then it turns into
Train 8 left track, going UP
Train 5 left track, going UP
Train 4 left track, going DOWN
Train 2 left track, going DOWN
I have no idea why this is happening; if someone could show me where I have gone wrong and why, I would be grateful.
public class UpDownTrack extends OpenTrack {

    // Add the allowedDirection attribute required
    private TrainDirection allowedDir;

    // Add a constructor to set the initial state of an UpDownTrack
    public UpDownTrack() {
        allowedDir = DOWN;
    }

    // only allows one train on the track
    public synchronized void useTrack(TrainDirection trainDir, int id) {
        try {
            enterTrack(trainDir, id);
            traverse();
            exitTrack(trainDir, id);
        } catch (Exception e) {
            System.out.println("Error" + e);
        }
    }

    // Only allows a train to leave in the opposite direction
    public synchronized void enterTrack(TrainDirection trainDir, int id) {
        if (allowedDir != trainDir) {
            try {
                wait();
            } catch (Exception e) {
                System.out.println(e);
            }
        } else {
            System.out.println("Train " + id + " entered track, going " + trainDir);
        }
    }

    // Tells us train has left the track and changes the direction
    public synchronized void exitTrack(TrainDirection trainDir, int id) {
        System.out.println(" Train " + id + " left track, going " + trainDir);
        allowedDir = (allowedDir == DOWN) ? UP : DOWN;
        notifyAll();
    }

    public boolean unsafeToEnter(TrainDirection trainDir) {
        return false; //always safe!?
    }
}
Thanks | https://www.daniweb.com/programming/software-development/threads/363791/train-simulator-threading | CC-MAIN-2017-13 | refinedweb | 309 | 50.6 |
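A likely culprit, based only on the code as posted and not a verified fix: enterTrack only prints and proceeds in the else branch. A train that has to wait() wakes up when notifyAll() fires, skips the print, and enters the track without re-checking allowedDir, which matches the output where "entered" messages disappear and several trains run at once. The standard pattern is to re-check the condition in a while loop, for example:

```java
public synchronized void enterTrack(TrainDirection trainDir, int id) {
    // Re-check the condition after every wakeup: notifyAll() wakes all
    // waiting trains, but only those matching allowedDir may proceed.
    while (allowedDir != trainDir) {
        try {
            wait();
        } catch (InterruptedException e) {
            System.out.println(e);
        }
    }
    System.out.println("Train " + id + " entered track, going " + trainDir);
}
```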
Ionic 5/Angular enableProdMode()
Ionic 5 supports the latest version of Angular, the popular Google client side framework.
In development phase, Angular works on development mode which has assertions and various necessary framework checks turned on.
After serving your Ionic 5 app using the browser, if you look on the console you are going to find an Angular message telling you that Angular is running on development mode and that you need to enable the production mode using the
enableProdMode() function. There is a good reason for this so if you just enable Angular production mode you are going to:
- Have a good boost on performance and speed of your Ionic 5 app: The device ready event will fire much sooner.
- Reduce the app size by half.
How to Enable Angular Production Mode in your Ionic 5 App?
To enable Angular production mode in Ionic 5, we use the
enableProdMode() function. Here is a detailed example on how to do it.
First of all, open the
src/app/main.ts file.
Next, import
enableProdMode from Angular core:
import { enableProdMode } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app.module';
Next, call the
enableProdMode() function before bootstrapping your module:
enableProdMode();

platformBrowserDynamic().bootstrapModule(AppModule);
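In practice, with an Angular CLI project layout (the environments/environment path is the CLI convention and may differ in your project), production mode is usually enabled conditionally, so development builds keep the framework checks:

```typescript
import { enableProdMode } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';

import { AppModule } from './app/app.module';
import { environment } from './environments/environment';

// Enable production mode only for production builds; development
// builds keep Angular's assertions and extra checks active.
if (environment.production) {
  enableProdMode();
}

platformBrowserDynamic().bootstrapModule(AppModule)
  .catch(err => console.error(err));
```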
Conclusion
So that is all for this short tip tutorial where I just showed you how to enable Angular production mode in Ionic 5 when you are ready to build and publish your mobile app to increase performance and speed, and reduce app size. | https://www.techiediaries.com/ionic2-enableprodmode/ | CC-MAIN-2020-34 | refinedweb | 244 | 51.48 |
Table of Contents
by
D.C. Bean
Consultant Masterfisherman
January 1991
RAF/87/008/DR/63
Bibliographic reference:
Bean, D.C., (1991), Gillnetting Trials for Deepwater Red Snappers on the Seychelles
Plateau, FAO/UNDP
RAF/87/008/DR/63/E, 15 pp.
CONTENTS
1. BACKGROUND
3. SEA TRIPS - METHODOLOGY
4. RESULTS
5. DISCUSSION
6. RECOMMENDATIONS
7. CONCLUSION
8. COSTING
9. ACKNOWLEDGEMENTS
The consultancy involved fishing trials with gillnets targeting deepwater snappers in order to establish the feasibility of this method. This resource is perceived to have the potential to permit economic operation of larger vessels in the artisanal fishery. The stocks targeted are lightly exploited at present, and are all varieties in demand for export by the Seychelles Marketing Board (SMB).
A total of 16 days at sea spread over three trips was achieved during the consultancy, using the vessel Etelis, belonging to the Seychelles Fishing Authority (SFA). Forty three sets of the nets were made, all in different locations scattered around the edge of the shelf of the Seychelles Plateau.
The most difficult conditions were encountered during the first trip to the Southeast, in the area of the Constant Bank, because of the strong currents. Less current was encountered during the second trip to the Northeast edge of the plateau and the third trip to locations in the Southwest, and a better understanding of the problems involved improved performance. Throughout the period, the weather was excellent, with only light sea breezes and very little swell.
A total of 3 964 kg of fish and 5 626 kg of shark were caught, giving catches per unit effort for the three trips of 18.7, 23.2 and 30.3 kg/net/set for fish, and 15.1, 38.9 and 50.5 kg/net/set for sharks.
These figures indicate an improving catch rate for each successive trip, probably due to a better understanding of local conditions. The final trip was made during the spring tides and to an area alleged to be overfished by the local fishermen. Contrary to predictions, it seems that deepwater gillnetting can be operated during spring tides, and that the handline catch in a specific area may not actually be indicative of the stock present but rather of the catchability of that stock.
Later sets were generally more successful at targeting the horizon at the top of the drop-off (60-100 m) where the two dominant species of red snapper were Pristipomoides filamentosus and Aphareus rutilans. These two snappers together made up the bulk of the fish catch (60-70%).
The tremendous variation in the results achieved from different sets makes them very susceptible to misinterpretation. At one end of the scale, the catch was spectacular, with catch rates as high as anything the consultant had previously encountered. At the other end of the scale, there were a number of very poor results, mainly due to technical difficulties in being able to secure the gear to the target area because of the strong local currents and activities of very large sharks.
Economic projections show the highly positive potential for this fishery for 12 m boats and a slightly less encouraging result for 21 m boats.
However, before recommendations can be made to the SFA that they commit large sums of money into equipping their new and larger vessels for gillnetting, it is considered necessary to continue the research, attacking the problems from three different standpoints:
- to continue to identify the best fishing locations,
- to modify the gear to obtain better holding abilities in current and
- to lighten the work load on board by having better gear and fish handling procedures.
It will be necessary to purchase and rig further gear of different specifications to those on site to a design more suitable to the local conditions. Shallower nets with heavier footropes are thought to be one way of achieving more accurate positioning of the nets on the sea bed and getting them to stay there for the duration of the set, thus reducing the number of unproductive sets.
The Masterfisherman was contracted by the FAO to act as consultant to the Seychelles Fishing Authority (SFA) in the field of deepwater gillnetting. The expert had previously extensively and successfully applied the technique in the areas of continental slope in several countries surrounding the Red Sea.
For the duration of the consultancy, he worked with the Research Department of the SFA using their well equipped 18 m FRV Etelis. Funding for the consultancy came from UNDP. Fishing gear used during the sea trials was provided by the SFA, funded from Japanese aid sources.
In an effort to increase the catch from the artisanal sector of the Seychelles from the present 4 392 metric tonnes to the estimated 7 000 metric tonnes of total sustainable yield, it was agreed that additional effort should be devoted to exploiting the deeper water species from the rim of the Seychelles Plateau. These species, primarily Pristipomoides and Etelis spp, were considered to be under least pressure from the traditional fishery and could offer opportunities for expansion of the local fleet, in particular the larger vessels capable of fishing at long range.
Presently, the SFA is in the process of building two 22 m and a number of 12 m vessels. The development of a fishing method capable of giving these and other large vessels within the artisanal sector a viable operation is required. Furthermore, this would be an opportunity to greatly increase the supply of fish to the Seychelles Marketing Board's (SMB) extensive and under-utilised shore facility.
The SMB is the sole exporter of the artisanal catch and has an important role in the island economy. The target species of deepwater gillnetting are all suitable for export.
3.1 Equipment
Net hauler
The FRV Etelis was equipped with a hydraulically powered surface gillnet hauler. The drum surface of this hauler was smooth and lacked sufficient friction to allow nets to be hauled from deep water. The consultant supervised the conversion of the drum so as to make it grip monofilament nets. This was done in the SFA workshop by local staff. By the second trip, following a few modifications, a very satisfactory net hauler was produced, lacking only a line hauler attachment which could be purchased later.
The conversion consisted of welding 12 mm steel bars radiating from the middle of the drum, fixed on their outer ends to the extended wings of the hauler (see photographs). These bars were staggered alternately on each side, overlapped the centerline by two centimeters, producing a herringbone pattern on the drum. This caused the net to travel a zigzag path as it passed over the drum, producing enough friction for the net to be gripped tightly. Even when being hauled from 200 m depth, a single man could operate the hauler. For safety and to strengthen the drum, 6 mm steel plate webs were cut to fit under the bars and welded in place.
Preparation of gear
Five deepwater gillnets were rigged in the first three days by the consultant and members of the FRV Etelis crew. The design is given in the diagrams in Annex I. The gear components were on site at the time of the consultant's arrival.
3.2 Methodology
The FRV Etelis with its normal crew of 6 persons was additionally staffed by a biologist and assistant and, on the second trip by a Mauritian gear technologist. On the third trip, David Ardill, Project Manager and Michael Sanders, Senior Fishery Development Specialist of the SWIOP accompanied the vessel.
The trips were of 3 to 7 days duration and each was in a different location on the rim of the Seychelles Plateau.
Ground Survey
On arriving at the rim of the Plateau, the position was checked with the satellite navigator and the steepness of the slope determined using the three echo sounders on board. A track on a particular chosen contour was then surveyed in the depth range between 60 and 200 m.
Where the slope was gentle enough, gear was targeted to lie diagonally down the slope from one limit to the other of the selected range. In practice, it was found that targeting 200 m was particularly hazardous as the currents and steepness of the drop-off at that depth frequently caused the gear to carry off into much deeper and unfishable waters.
Shooting the nets
Having determined a suitable track on which to target the nets, the gear was shot off the stern of the boat from the up-current end of the location. The speed of shooting was kept to a minimum to allow the crew to tie matchbox sized pieces of bait (bonito) to the net at roughly five meter intervals.
Anchoring the nets
As it became more and more apparent during the course of the trials that anchoring the nets securely was a major factor in procuring a catch, more effort was made to achieve this objective. In later sets, not only were 50 kg irons and grapnels used at the ends of the gear, but intermediate 20 kg irons were used between individual nets and the bridles/veering lines shortened to only 50 m.
Handling the nets
Starting with the up-current end, the gear was hauled with the converted hauler. The skipper kept the boat up on the gear from his key observation position on the flying bridge.
Fish were taken from the nets by two or three men immediately behind the hauler before the cleaned net was pulled astern, and spread, ready for baiting and shooting by three other crew members.
Biologists monitored the catch at all times during the hauling procedure, identifying the species and sex of each fish and taking its length and weight.
Fasteners and sharks
On the occasions when the net fouled the bottom, the hauler was stopped once it had reached the vertical position and a short piece of 20 mm rope was secured to the net just above the water line, the other end of which was tied to a cleat or towing position on the boat. Usually a little forward motion on the engine would release the fastener and hauling by the net drum would proceed. This method was necessary to prevent unnecessary strain on the bearings of the hauler.
Sharks of 100 kg and over were treated in a similar way, the thick rope being secured either to the head or the tail at the water line. The weight of the fish was then taken on the thick rope, thus minimising net damage. Very large sharks involved the use of a second thick rope and the capstan. Sharks smaller than 100 kg were systematically eased over the top of the hauler, with one crewman on the rail helping the fins around the net guides.
Handling of fish
After extraction from the net, the fish were placed in a slurry of ice and seawater contained in a 1-ton insulated container on deck. Once or twice a day, this container was emptied and the chilled fish were gutted, gilled and scrubbed clean of blood particles by the crew members, after which they were layered systematically by species in the fish hold. Each layer was well separated by crushed ice. The top layer of ice was covered with empty ice bags. Any additional space was also covered by unused bags before the hatches were securely put back in place (see Discussion).
Handling of sharks
Only the fins of the sharks were retained after the species and biological data had been collected by the biologists. Some large jaws were kept and some smaller specimens were kept by the crew for salting and taking home.
Duration of sets
A soak time1 of two and a half to three hours was generally allowed for each set. This time was extended to six or more hours during the midnight to dawn period when the crew were catching up on their rest. At no time was any fish found to have deteriorated in any way due to being in the nets too long. All the grouper and most of the Pristipomoides were still alive on hauling.
1 "Soak time" refers to the time interval between the end of setting of the net and the beginning of hauling.
Aphareus rutilans were rarely alive on hauling due to their large size and the "gilling" position of the net bars. The 150 mm mesh was not wide enough to allow the gills to pass through the mesh and the fish would have died immediately on entangling in the net due to the fact that its gills would have been unable to open. Even so, no deterioration in quality was noticeable.
Repairs and modifications
Running repairs to the nets were done aboard the FRV Etelis as the nets were pulled back and spread ready for the next shot. The crew overhauled the nets thoroughly between trips and repaired all the damage using the 'lacing' method. Most damage to the nets was caused by sharks.
Modifications to the gear were also carried out on board when thought necessary. These mainly entailed adding additional anchors and weight to the footrope, but also involved shortening the veering lines to achieve more directional positioning of the nets in the shallower sets.
4.1 Fishing results
A total of 16 days at sea spread over three trips was achieved during the consultancy, using the SFA vessel Etelis. The detailed fishing results are presented in Table 1. Forty-three different sets of the nets were made in different locations on the edge of the shelf.
The most difficult conditions were encountered because of the strong currents during the first trip to the Southeast in the area of the Constant Bank. Less current was encountered during the second trip to the Northeast edge of the Plateau and the third trip to locations Southwest, and a better understanding of the problems resulted in improved catches. The weather was excellent throughout the period, with only light sea breezes and very little swell.
A total of 3 964 kg of fish and 5 626 kg of shark were caught, giving catches per unit effort (CPUE) for the three trips of 18.7, 23.2 and 30.3 kg/net/set for fish, and 15.1, 38.9 and 50.5 kg/net/set for sharks2. The catch per day on each trip was 157, 293 and 335 kg for fish and 137, 504 and 714 kg for shark. The catch per day, however, is not strictly comparable in that varying quantities of gear were used, with fewer nets in the second cruise and a second fleet of nets in the third.
2 This gave CPUE values of 7.69, 6.78 and 5.46 kg/net/hour for fish, and 5.8, 8.93 and 7.52 kg/net/hour for sharks. The apparent drop in the catch rates presented in this manner was due to long soak times in the last cruise when nets were left overnight.
Although sufficient data were not available to establish this statistically, a tendency was noted for the catch of fish to drop and that of sharks to increase with soak times in excess of three hours, particularly in the night sets. The presence of sharks in the net was almost invariably associated with tears in the netting and with partially eaten fish. It thus seems likely that the above finding was due to sharks eating the fish meshed in the nets. Further confirmation comes from the fact that, even in long sets, many fish were still alive: it is presumed that the fish netted early in the set had been eaten by sharks.
Again, although detailed records were not kept of catch for each individual net, a tendency was noted for the catch to be concentrated in the initial and terminal nets in a set. Furthermore, the nets in the centre of a set frequently contained more rubble than those in the ends. These findings were interpreted to mean that the centre of the net tended to be swept along the bottom by the current (and large sharks). The addition of more intermediate weights and anchors in later sets improved these features. A reduction of the depth of the net may also help, hopefully with no drop in catch, as fish were consistently meshed in proximity to the bait attached to the nets. Many sets were wasted at the beginning of the trials whilst trying to position nets at depths of 150-200 m. These negative results adversely affected the catch per unit effort in the earlier days.
Later sets were generally more successful at targeting the horizon at the top of the drop-off (60-100 m) where the two dominant species of snapper were Pristipomoides filamentosus and Aphareus rutilans. These two snappers together made up the bulk of the fish catch (60-70%).
Aphareus rutilans, usually at an insignificant level in the handline catch, represented a much bigger proportion of the red snappers caught in the gillnetting operations (up to 30% by weight).
Sets made in the shallower end of the range in proximity to coral areas provided good catches. The species diversity was very high; however, many of the fish were not readily exportable species. This, as well as the perceived competition with traditional gears, would provide arguments against specifically targeting these areas.
The tremendous variation in the results achieved from different sets makes them very susceptible to misinterpretation. At one end of the scale, the catch was spectacular, with catch rates as high as anything the consultant had previously encountered, showing beyond doubt that the method worked in the Seychelles. At the other end of the scale, there were a number of very poor results, mainly due to technical difficulties in being able to secure the gear to the target area because of the strong local currents and activities of very large sharks.
With shallower nets, more intermediate anchors and better knowledge of the fishing grounds, it is likely that the average catch rates achieved can be improved upon.
4.2 Economic evaluations
Inputs
Economic projections were made using a spreadsheet model (Ardill, pers. comm.). The initial catch hypotheses represented a catch rate of 630 kg per day, with 20 nets being set daily3 (Table 2). These catch rates were used for both the 21 m and 12 m boats studied, as were costs of gear replacement and crew.
3 These catch rates were achieved in the third cruise (Table 1).
Eight different scenarios were examined. These are shown below:
The value of the sharks is not taken into account, as there is no market at this time in Seychelles for fresh or salted shark (the fins can be sold, but could be considered a "crew bonus"). If a market is found, the economic performance of the fishery will certainly improve. Extra costs involved would include a crew member specifically for dressing down the sharks on board as well as, presumably, the cost of salt.
The prices taken are those offered by SMB for large snappers. Smaller snappers and groupers in particular fetch a higher price, and this will also improve economic performance if substantial numbers of smaller fish are caught4.
4 Increased catches of smaller fish can be determined partially by the choice of zones and depths fished and may also be one of the effects of a sustained fishing effort on these stocks.
Based on the experience of the consultancy, it was assumed that the netting would have to be replaced after 20 days fishing, with the ropes, floats, leadlines, etc. being replaced at longer intervals. Crew salaries were also kept the same in all simulations: the work is unrelenting, and a high salary level would be necessary to motivate the crews.
In the simulations on the 21 m boats, the assumed costs for the hull and machinery were lower than the real costs for these vessels. This is justified by the fact that these vessels were constructed for other uses, and this occasioned cost overruns in the construction and equipment fitted which should not be reflected in a simulation.
In the case of the 12 m boats, two simulations (VI and VII) are based on the actual costs of the hulls (SR 700,000), while simulation VIII is based on the subsidised price at which SFA sells these hulls to fishermen (SR 300,000).
Results
In the baseline simulation, the 21 m boat does not cover operating costs from the catch value (Simulation I). If the catch rates are increased by 25% (which is not unreasonable as gear is improved and fishing zones better known), operating costs are covered, but the debt service keeps the cash flow negative (Simulation II). With a catch increased by 50%, the cash flow becomes positive and a project internal rate of return (IRR) of 11% is achieved (Simulation III). The project cash flow is negative as a result of the replacement of the machinery after the 10th year: this could be compensated by a salvage value or by a longer service life.
Simulation IV assumes a catch increase of 25% and a price increase of 10% (as from a higher catch of smaller fish). Although the IRR is weakly positive, cash flows are negative. Positive economic results are again achieved with catch and price increases of 25% each (Simulation V). The IRR of 14% is satisfactory, although lower than the rate which could be obtained from bonds, and an annual profit of SR 100,000 is obtained.
Comparatively, the economic performance of the 12 m boats is far superior, with positive results being obtained even in the base case (Simulation VI, IRR = 15%). The IRR of 40% in the case of a catch increase of 25% is a high return (Simulation VII). Not surprisingly, a comparable return is achieved in the base case for the subsidised hull (Simulation VIII).
5.1 Current
Good catches were obtained in each of the shelf areas sampled when the nets remained in the area selected (in relation to the depth and - sometimes - fish echoes located on the echo-sounder). The overriding factor in correct positioning of the nets was always found to be the current. Too often, the nets when hauled were at depths quite different from those where they were shot three hours previously.
The conclusions drawn from these sets were that the gear had "walked" along the bottom due to substantial bottom currents; 60 kg end irons polished on their underside confirmed this. The use of home-made grapnels did little to help and it is thought that the only realistic solution is to use proper fisherman type or "Danforth" anchors in conjunction with an end iron.
Furthermore, the use of 80-mesh-deep nets exacerbated the problem, and the benefit of a high net with good entangling properties for big fish was in the main lost due to it being swept down by the current.
To compensate for the adverse effects of current, much heavier footropes were constructed during the trials. These helped in securing the nets to the bottom and in raising the catch level. However, they had an adverse effect on the deck handling and shooting procedures, with additional lead weights and stones, double footropes and intermediate anchors all slowing down the overhauling. At the risk of losing the odd expensive fisherman's anchor, future trials should be conducted with these and also with shallower nets, e.g., 45 meshes deep, plus heavier leaded footrope. No lead weights should be used anywhere in the gear as they catch in the hauler ribs.
5.2 Fishing location - Target areas
At the present stage of investigation, it appears that the most easily fished sites are where the current flows onto the Plateau rather than off it. At the time of the trials, these areas were to the South or Southwest, but this may vary for different times of the year. Moon phase seemed to have little effect.
The most productive sets were hauled from depths of 60-100 m and were conspicuously better where there were some abrupt changes in contours. Typically, there were "wave platforms" on the West side of the Plateau, one at 42 m and another at 62 m, the latter marking the beginning of the continental slope (drop-off).
At both the first drop-off and the second main drop-off, fish were found to be abundant, the larger red snappers being more concentrated on the deeper rim. Both these areas and the interim plateau at 60-70 m offer good sites for future trials. As the operators become more confident with the improved gear, trials could be made down the main slope into deeper water targeting Etelis spp.
5.3 Fish handling
The present procedure for handling fish on board FRV Etelis needs to be reviewed in view of the different nature of the fishing operation and the demanding work load on the crew. Ideally, the fish after extraction from the nets should be plunged into a slurry of ice and seawater and remain there for up to six hours, by which time their temperature has been brought right down. The whole fish should then be iced down in the hold with a layer of ice between each fish layer. The smaller snappers and groupers which are valuable should be boxed and iced, top and bottom, before stowage in the fish hold.
These innovations would have to be in agreement with SMB, but records and experience show that quality fish are better preserved in this manner.
5.4 Net damage
As in any gillnetting operation, net damage is unavoidable and a necessary component of operational expenses. During the trials, one fleet of nets was fished heavily, with hauls every 3 or 4 hours and a total of 38 sets. In this period, the nets were mended twice onshore and running repairs were done from time to time on the boat. Even so, the sheet netting was nearing the end of its life and will soon need replacing. In the economic evaluations, a net life of twenty fishing days has been used.
A decrease in damage would probably be experienced with a shallower net and a large decrease in damage could be expected if night shots were avoided, due to the hyper-activity of sharks during this period.
5.5 Training
The crew of FRV Etelis and members of the Research Department were fully trained in the method as it was demonstrated. The skipper of the research vessel was fully aware of the adverse effects of current and gained considerable knowledge during the trips on correctly positioning the nets. Once the problems of positioning the nets are sorted out during the next research period, the transfer of the technology to other skippers would be relatively simple.
5.6 Sharks
Five tonnes of sharks were discarded during the demonstrations and under any normal gillnetting operation would be utilised at least as a salted product. Thought should be given to this aspect if gillnetting is to be adopted by the large vessels at a later date.
5.7 Pricing
Pricing is not the concern of this consultancy but it should be pointed out that if projections are to be made to determine the economics of operating a large Seychelles vessel as a gillnetter, it is rather unfair, on one hand, to base the expenses such as fuel, replacement nets, etc., on world prices plus import tax and, on the other hand, compute the gross revenue on the prices paid for fish by the monopolistic fish buyer (SMB), which are substantially below world levels.
5.8 Economic evaluations
These show clearly that even with the catch rates achieved in experimental fishing, a 12 m boat provides a good economic return. While this is not the case for the 21 m boats, the hypotheses needed to provide a profitable operation are not unrealistic and the results are likely to be far superior to any other fishery tested to date in the Seychelles.
6.1 Research
It is recommended that a further period of research be conducted under the supervision of a masterfisherman, as soon as the components for the modified gear are on site in Seychelles. To facilitate the extended research, it is urgent that the necessary finance be secured and backers contacted immediately in order to keep to the time framework. The shallow nets and leaded rope required will take at least three months to reach Victoria, as they should be shipped.
6.2 Improved gear
As a direct outcome of the research to date, it is recommended that the following gear be available for extended research and commercial trials. Enough gear could be imported by sea freight to cover the very basic needs of a trial period of three months' commercial fishing. In order to have nets available at the earliest for the FRV Etelis, it might be advantageous to have twenty percent air freighted.
Total input needed
a) 50 nets
1.5 x 12 ply x 45 meshes deep x 150 mm stretched mesh x 200 m long. Double, Selv. Top and Bottom. Colour: grey or blue;
b) 25 coils of 12 mm leaded rope approx. 20 kg/100 m;
c) 75 coils of 8 mm polypropylene rope;
d) 1 500 Deep water floats buoyancy approx. 120 g egg-shaped and with 10 mm hole. Operating depth 400 m;
e) 50 kg braided nylon setting twine equiv. to 210 d/45;
f) 50 kg 1.5 x 12 ply mending twine;
g) 15 coils 14 mm polypropylene rope, each 220 m;
h) 15 coils 12 mm polypropylene rope, each 220 m;
i) 60 8-inch diameter plastic trawl floats;
j) 10 40-inch circumference inflatable buoys;
k) 20 fishermen's or Danforth anchors, 20 kg.
Approximately US $ 20 000 CIF Mahé Island.
6.3 Anchors
It is recommended that all future gillnetting trials be carried out using commercial anchors of minimum weight of 20 kg. This is now considered necessary to overcome the "walking" of the gear. The anchors could possibly be locally made and would be operated in conjunction with a "tripping" device plus an end-iron of scrap metal.
6.4 Speeding up the overhauling and turn around time of the sets
It is recommended that the following procedures be adopted on board FRV Etelis to ensure rapid turn around of gear and therefore allow both more gear to be worked and a lighter workload for the crew:
a) work shallower nets;
b) use anchors;
c) use only lead line or leaded rope - no lead weights;
d) pull the nets over a 'net bar' (steel pipe arranged horizontally 2 m above the deck) to allow small objects to fall out of the net and also spread the net better;
e) overhaul nets to a position forward of cabin and opposite to hauler, thus allowing crew to move from overhauling to clearing as the job requires. The nets would then be shot over the portside;
f) haul dahn lines with line hauler. A line hauler can be fitted onto the same shaft as the existing net hauler and mounted at the inner end. This should be procured;
g) speed up fish handling by first putting all fish into an ice slurry tank on deck and later, after shooting and a delay of some hours, put the whole fish in layers in ice in the fish hold, the smaller and more valuable fish being boxed and iced. This would avoid the lengthy gutting and washing period and also lead to a better quality product ashore.
6.5 Fishing location
It is recommended that extended research takes place initially on the Western side of the Plateau, at least during the period of Northeasterly onshore drift of the Equatorial Counter Current, in order that, to some extent, the operators are protected from having their gear swept into deep water and possibly lost. Once anchoring and securing the nets effectively had been systematically achieved, the research area could be extended to the rest of the Plateau rim.
6.6 Utilisation of shark
In view of the shark bycatch provided by a gillnetting operation, it is recommended that a review of shark utilisation be made by SMB and/or other interested bodies. Ideally, a gillnetting operation should aim at covering its gear and maintenance costs from its shark bycatch.
The main conclusion drawn from the work by the consultant is that there is both an under-exploited stock of deepwater snappers (ref. biological data collected during trials and numbers of very large fish) and the possibility to commercially harvest that stock using deepwater gillnets.
The second conclusion is that, in order to determine the correct gear and mode of operation exactly before any large and possibly inappropriate investment is made, a further period of research should be conducted with FRV Etelis using the recommended improved gear and procedures.
8.1 Cost of one standard net fishing length 100 m
Note: As per quotation Namnet-Korea, Dec. 6/90
Import tax not included
8.2 Costs of buoy ropes and bridles
Ropes required for fishing areas close to or down the drop off allowing for current and drift are as follows for each end of a fleet of nets.
NB: Import tax not included
Normally nets would be fished in fleets of 6 nets and have a dahn line at each end. Therefore, it follows that the cost of buoy ropes, etc., for a fleet of nets is 123 x 2 = US $ 246. Thus, the cost per net including the dahn line is one fifth of US $ 246, i.e., US $ 49.20, making the overall costs per net in the water 288.24 + 49.20 = US $ 337.44.
8.3 Estimated investment costs to start gillnet operation in 12 m vessel
These are CIF prices Mahé but without import tax.
Many thanks to the following persons for their help and cooperation.
The Crew of FRV "Etelis"
And all the administrative support staff
Table 1: Catch Data
Results of the 1st gillnet trials (26/11/1990 - 2/12/1990)
Results of the 2nd gillnet trials (7/12/1990 - 12/12/1990)
Results of the 3rd gillnet trials (15/12/1990 - 18/12/1990)
Table 2: Simulations for a gillnetter working in Seychelles (service life 15 years)
Results of Simulations
Figure 1: Design of Deepwater Gillnet
Figure 2: Design of Dahn Lines for Deepwater Gillnetting | http://www.fao.org/docrep/field/366407.htm | CC-MAIN-2016-30 | refinedweb | 5,739 | 56.89 |
So I have the following snippet (and a good reason behind it):
#include <iostream>

volatile const float A = 10;

int main()
{
    volatile const float* ptr = &A;
    float* safePtr = const_cast<float*>(ptr);
    *safePtr = 20;
    std::cout << A << std::endl;
    return 0;
}
Under G++ v8.2.0 (from the MinGW suite), this program compiles fine and outputs 20 as expected. Under VS2019, it compiles, but throws a runtime exception:
Exception thrown at 0x00007FF7AB961478 in Sandbox.exe: 0xC0000005: Access violation writing location 0x00007FF7AB969C58.
Is there a way to make VS2019 behave the same way G++ does? And how to do it with CMake?
All standard references below refer to N4659: March 2017 post-Kona working draft/C++17 DIS.
As governed by [dcl.type.cv]/4, your program has undefined behaviour
Except that any class member declared mutable can be modified, any attempt to modify a const object during its lifetime results in undefined behavior. [ Example:
// ...
const int* ciq = new const int (3);   // initialized as required
int* iq = const_cast<int*>(ciq);      // cast required
*iq = 4;                              // undefined: modifies a const object
and as such, demons may fly out of your nose and any kind of analysis of your program beyond this point, including comparison of the behaviour for different compilers, will be a fruitless exercise.
Your question is interesting: it would seem that const_cast should allow changing an underlying const object, and that would be nice indeed, but unfortunately no, const objects cannot be safely changed by any means, even though it may appear to work. const_cast only removes const-ness from the access path (a pointer or reference); it does not make it legal to modify an object that was itself defined const.
You should not try to make this work by ignoring the problem. My guess is that you are running the program in VS in debug mode, so it catches the error where g++ doesn't; but if you run your program through a debugger you'll likely see the same problem, though that's not guaranteed, as per the nature of undefined behavior.
The way to go is to fix the code, not to ignore the problem.
As pointed out, you cannot legally modify const objects...
But you can have a const reference to a non-const object, so you might use the following:
#include <iostream>

const float& A = *([]() { static float a = 10; return &a; }());
// const float& A = []() -> float& { static float a = 10; return a; }();

int main()
{
    float* safePtr = const_cast<float*>(&A);
    *safePtr = 20;
    std::cout << A << std::endl;
}
(No need for volatile either.)
Suppose a man is standing in the first cell, or top left corner, of an "a × b" matrix. He can move only either right or down. The person wants to reach his destination, which is the last cell of the matrix, the bottom right corner.
If there are some hurdles along the way, how many unique paths can the person take? Help him reach the destination by finding the number of unique paths.
A hurdle is marked as 1 and an empty space as 0 in the matrix.
Example
Input:
[
[0,0,0],
[0,1,0],
[0,0,0]
]
Output:
2
Explanation:
Because only one hurdle is present, in the middle of the whole matrix, which allows only two unique paths:
- Down →Down →Right →Right
- Right →Right →Down →Down
In the above image, the blue arrow shows path 1 and the red arrow shows path 2.
Algorithm
Check if the first cell, that is array[0][0], contains 1; if so, return 0 unique paths, as the person cannot move forward from there.
If array[0][0] does not contain the value 1, then initialize array[0][0] = 1.
Now, iterate over the first row of the array; if a cell contains 1, the cell has a hurdle, so set its value to 0. Otherwise set its value to that of the previous cell, that is array[0][j] = array[0][j-1].
Now, iterate over the first column of the array; if a cell contains 1, the cell has a hurdle, so set its value to 0. Otherwise set its value to that of the cell above, that is array[i][0] = array[i-1][0].
- Now, iterate over the whole matrix starting from the cell array[1][1].
Check if the cell doesn’t contain any hurdle; if so, do array[i][j] = array[i-1][j] + array[i][j-1].
- If a cell contains a hurdle, then set the value of the cell to 0 to make sure it doesn’t repeat in any other path.
Explanation
So the main idea for solving this question is: if a cell does not contain a hurdle, then the number of ways of reaching it is the number of ways of reaching the cell above it plus the number of ways of reaching the cell to its left.
We implemented a helper function getVal to assist us. getVal takes two arguments, performs a check, and returns a value; it solves the following problem:
In the first row, if a cell contains a hurdle it sets the value to 0; otherwise it sets the value of that cell to that of the previous cell, that is array[0][j] = array[0][j-1].
- If in a first row, a cell contains a hurdle it sets the value to 0 else it will set the value of that cell as that of the previous cell that is array[i][j]=array[i-1][j].
Example
So let us take an example as Array={{0,0,0},{0,1,0},{0,0,0}};
An array is passed into findUniquePath
Array={{0,0,0},{0,1,0},{0,0,0}};
So its first cell that is array[0][0] is not equal to 1.
So set, array[0][0]=1;
Iterating over column,
i=1
if array[1][0]==0 and array[0][0]=1 is true
so array[1][0]=1;
i=2;
if array[2][0]==0 and array[1][0]=1 is true
so array[2][0]=1;
Now iterating over row,
i=1
if array[0][1]==0 and array[0][0]=1 is true
so array[0][1]=1;
i=2;
if array[0][2]==0 and array[0][1]=1 is true
so array[0][2]=1;
Now starting from array[1][1]
i=1,j=1
if array[1][1]==0 is false
so array[1][1]=0
i=1,j=2
if array[1][2]==0 is true
array[i][j] = array[i – 1][j] + array[i][j – 1];
so array[1][2]=array[0][2]+array[0][1];
array[1][2]=1+1=2;
i=2,j=1
if array[2][1]==0 is true
so array[2][1]=0
i=2,j=2
if array[2][2]==0 is true
array[i][j] = array[i – 1][j] + array[i][j – 1];
so array[2][2]=array[1][2]+array[2][1];
array[2][2]=2+0;
array[2][2]=2
That is we are going to return the value array[2][2] which means 2 is our required output.
Implementation
C++ program for Unique Paths II
#include<iostream> using namespace std; int getVal(int x, int y) { if(x==0 && y==1) { return 1; } else { return 0; } } int findUniquePath(int arr[3][3]) { if (arr[0][0] == 1) { return 0; } arr[0][0] = 1; for (int i = 1; i < 3; i++) { arr[i][0] = getVal(arr[i][0],arr[i-1][0]); } for (int i = 1; i < 3; i++) { arr[0][i] = getVal(arr[0][i],arr[0][i-1]); } for (int i = 1; i < 3; i++) { for (int j = 1; j < 3; j++) { if (arr[i][j] == 0) { arr[i][j] = arr[i - 1][j] + arr[i][j - 1]; } else { arr[i][j] = 0; } } } return arr[2][2]; } int main() { int inputValue[3][3]= {{0,0,0}, {0,1,0}, {0,0,0}}; findUniquePath(inputValue); cout<<findUniquePath(inputValue);; return 0; }
2
Java program for Unique Paths II
class uniquePath2 { public static int getVal(int x, int y) { if (x == 0 && y == 1) { return 1; } else { return 0; } } public static int findUniquePath(int array[][]) { if (array[0][0] == 1) { return 0; } array[0][0] = 1; for (int i = 1; i<array.length; i++) { array[i][0] = getVal(array[i][0], array[i - 1][0]); } for (int i = 1; i<array[0].length; i++) { array[0][i] = getVal(array[0][i], array[0][i - 1]); } for (int i = 1; i<array.length; i++) { for (int j = 1; j<array[i].length; j++) { if (array[i][j] == 0) { array[i][j] = array[i - 1][j] + array[i][j - 1]; } else { array[i][j] = 0; } } } int row = array.length; int col = array[0].length; return array[row - 1][col - 1]; } public static void main(String[] args) { int inputValue[][] = { { 0, 0, 0 }, { 0, 1, 0 }, { 0, 0, 0 } }; System.out.println(findUniquePath(inputValue)); } }
2
Complexity Analysis for Unique Paths II
Time Complexity
O(a × b) where a and b is the number of rows and columns in the matrix given to us as each cell is processed only once.
Space Complexity
O(1) as no extra space is being utilized since we are using dynamic programming. | https://www.tutorialcup.com/interview/matrix/unique-paths-ii.htm | CC-MAIN-2021-49 | refinedweb | 1,181 | 59.16 |
Root relative path plugin for Lektor
Version: 0.2.1
Author: Atsushi Suga
This plugin returns root-relative-path list from top page to current page as below.
[(toppage_url, toppage_name), ...(parent_url, parent_name), (url, name)]
Add
lektor-root-relative-path to your project from the command line:
lektor plugins add lektor-root-relative-path
See the Lektor documentation for more instructions on installing plugins.
Set these option in
configs/root-relative-path.ini:
Optional. Name of top page inidicated in the navication. Default is 'Top Page'
navi_top_page_name = 'Top Page'
Insert the following line in the template (e.g. layout.html) which you would like to show navigation.
{% for i in this._path | root_relative_path_list %} >><a href="{{i[0]}}">{{i[1]}}</a> {% endfor %}
Then, navigation is shown as below in case the page 'blog/first-post/'
>>Top Page >>blog >>first-post
If you do not want to show current page in the navigation, modify template as below.
{% for i in this._path | root_relative_path_list %} {% if not loop.last %} >><a href="{{i[0]}}">{{i[1]}}</a> {% endif %} {% endfor %}
Then, navigation is shown as below.
>>Top Page >>blog | https://www.getlektor.com/plugins/lektor-root-relative-path/ | CC-MAIN-2019-09 | refinedweb | 182 | 51.34 |
With a real-time operating system, going into low-power mode is easy. Continuing the recent ChibiOS example, here is a powerUse.ino sketch which illustrates the mechanism:
#include <ChibiOS_AVR.h> #include <JeeLib.h> const bool LOWPOWER = true; // set to true to enable low-power sleeping // must be defined in case we're using the watchdog for low-power waiting ISR(WDT_vect) { Sleepy::watchdogEvent(); } static WORKING_AREA(waThread1, 50); void Thread1 () { while (true) chThdSleepMilliseconds(1000); } void setup () { rf12_initialize(1, RF12_868MHZ); rf12_sleep(RF12_SLEEP); chBegin(mainThread); } void mainThread () { chThdCreateStatic(waThread1, sizeof (waThread1), NORMALPRIO + 2, (tfunc_t) Thread1, 0); while (true) loop(); } void loop () { if (LOWPOWER) Sleepy::loseSomeTime(16); // minimum watchdog granularity is 16 ms else delay(16); }
There’s a separate thread which runs at slightly higher priority than the main thread (NORMALPRIO + 2), but is idle most of the time, and there’s the main thread, which in this case takes the role of the idling thread.
When LOWPOWER is set to
false, this sketch runs at full power all the time, drawing about 9 mA. With LOWPOWER set to
true, the power consumption drops dramatically, with just an occasional short blip – as seen in this current-consumption scope capture:
Once every 16..17 ms, the watchdog wakes the ATmega out of its power-down mode, and a brief amount of activity takes place. As you can see, most of these “blips” take just 18 µs, with a few excursions to 24 and 30 µs. I’ve left the setup running for over 15 minutes with the scope background persistence turned on, and there are no other glitches – ever. Those 6 µs extensions are probably the milliseconds clock timer.
For real-world uses, the idea is that you put all your own code in threads, such as
Thread1() above, and call
chThdSleepMilliseconds() to wait and re-schedule as needed. There can be a number of these threads, each with their own timing. The lowest-priority thread (the main thread in the example above) then goes into a low-power sleep mode – briefly and repeatedly, thus “soaking” up all unused µC processor cycles in the most energy-efficient manner, yet able to re-activate pending threads quickly.
What I don’t quite understand yet in the above scope capture is the repetition frequency of these pulses. Many pulses are 17 µs apart, i.e. the time
Sleepy::loseSomeTime() goes to sleep, but there are also more frequent pulses, spread only 4..9 ms apart at times. I can only guess that this has something to do with the ChibiOS scheduler. That’s the thing with an RTOS: reasoning about the repetitive behavior of such code becomes a lot trickier.
Still… not bad: just a little code on idle and we get low-power behaviour almost for free! | http://jeelabs.org/tag/rtos/ | CC-MAIN-2014-42 | refinedweb | 463 | 59.33 |
XForms are an application of XML [XML 1.0], and have been designed for use within other XML vocabularies, in particular XHTML [XHTML 1.0]. This chapter discusses some of>
xmlns = namespace-identifier - Optional standard XML attribute for identifying an XML namespace. It is often useful to include this standard attribute at this point.
id = xsd:ID - Optional unique identifier used to refer to this particular
xformelement.
For example:
The
model element is used to define the XForms Model. The content of the
XForms Model may be defined inline or obtained from a external URI.
model>
id = xsd:ID - Optional unique identifier.
xlink:href = xsd:anyURI - Optional link to an externally defined XForms Model.
The
instance element is used to define initial instance data.
The instance data may be defined inline or obtained from a external URI.>
id = xsd:ID - Optional unique identifier.
xlink:href = xsd:anyURI - Required destination for submitted instance data.
method = xsd:string - Optional indicator to provide details on the submit protocol. With HTTP, the default is "
The
bind element represents a connection between the different
parts of XForms.
bind>
id = xsd:ID - Required unique identifier.
ref = XForms binding expression - A link to an externally defined XForms Model.
Additional details are found in the chapter 9 Binding.
xlink:type Models. | http://www.w3.org/TR/2001/WD-xforms-20010608/slice10.html | CC-MAIN-2015-11 | refinedweb | 213 | 51.75 |
I've been trying to convince myself that that objects of the same type have access to each others private data members. I wrote some code that I thought would help me better understand what is going on, but now I am getting an error from XCODE7 (just 1), that says that I am using the undeclared identifier "combination."
If someone could help me understand where I have gone awry with my code, I would love to learn.
My code should simply print false, if running correctly.
#include <iostream>
using std::cout;
using std::endl;
class Shared {
public:
bool combination(Shared& a, Shared& b);
private:
int useless{ 0 };
int limitless{ 1 };
};
bool Shared::combination(Shared& a,Shared& b){
return (a.useless > b.limitless);
}
int main() {
Shared sharedObj1;
Shared sharedObj2;
cout << combination(sharedObj1, sharedObj2) << endl;
return 0;
}
combination is a member function of the class
Shared. Therefore, it can only be called on an instance of
Shared. When you are calling
combination, you are not specifying which object you are calling it one:
cout << combination(sharedObj1, sharedObj2) << endl; ^^^ Instance?
The compiler complains because it thinks you want to call a function called
combination, but there is none.
So, you'll have to specify an instance:
cout << sharedObj1.combination(sharedObj1, sharedObj2) << endl;
In this case however, it doesn't matter on which instance it is being called on, so you should make
combination static, so you can do
cout << Shared::combination(sharedObj1, sharedObj2) << endl; | https://codedump.io/share/o0jgfnKcgIAD/1/do-functions-need-to-be-declared-anywhere-else-besides-in-the-class-and-before-the-program | CC-MAIN-2018-13 | refinedweb | 242 | 50.67 |
UCLA, Cisco & More Launch Consortium To Replace TCP/IP
alphadogg writes about the launch of a consortium, led by UCLA and joined by Cisco and others, to develop Named Data Networking as a possible successor to TCP/IP; research funding has poured $13.5 million into it since 2010.
Great idea at the concept stage. (Score:5, Insightful)
Just don't expect anyone to early adopt except the usual hypebots and yahoos. We can't even get rid of IPv4 and you want to replace TCP entirely.
Re: (Score:3, Insightful)
Yeah. And replace UNIX, too. You know? Like Plan 9 and Windows NT.
I ain't holdin' my breath.
Re: (Score:3)
BTW, how hard will it be to transform Linux's kernel structure into something that is equivalent to Plan-9?
not very. [glendix.org]... [wikipedia.org]... [wikipedia.org]
Re: (Score:2)
He said fun for programmers. He didn't say useful for users.
Re:Great idea at the concept stage. (Score:5, Insightful)
This. There's likely trillions of dollars invested in IPv4 that is going to be around for decades. Consider the Internet like highways and train track widths - we're stuck with it for a very long time.
Re: (Score:3)
Umm, the "Internet of things" doesn't NEED "modern Internet speeds". Does your fridge or your sprinkler system or whatever need high speed? No, it just "needs" (for people who want that functionality), some kind of comparatively dirt slow communication path.
That's not an argument FOR IPv4 directly, just that your "modern Internet speeds" argument directly doesn't necessarily justify throwing away decades' worth of hardware that is providing people functionality.
Re: (Score:2)
Neither my fridge nor my sprinkler system - especially my sprinkler system - needs any kind of connectivity whatsoever except to spy on me and bombard me with ads where ever I go, both of which do require high speed.
Re: (Score:2)
That's why I specifically said "for people who want that functionality".
I can see wanting your sprinkler system online -- to change it from your couch.. or heck, even from somewhere else (not everyone has automatic rain sensors).
The common "fridge keeps track of what you have in it" idea would be great if it ALSO coordinated with the local grocery store ads that week..
Re: (Score:3)
'ipv4 hardware' (huh? what IS that, btw? does this imply that ipv6 is not in 'hardware'? how strange to describe things)
Not sure what he was on about but, yeah, IPv4 is always in ASIC on big gear and part of the slow IPv6 adoption curve is that there is a lot of big expensive gear deployed with IPv4 in ASIC and IPv6 is only done on the anemic CPU.
We're probably 2 of 5 years into the required replacement cycle, but it is significant. One of the wrinkles with the recent Cisco "Internet is too big" bug was th
Re: (Score:2)
No, that makes too much sense.
We need super long addresses so we can assign IPs to grains of sand, and we need to use colons everywhere and a shitty fucking collapsing scheme for writing this shit down because the addresses are unintelligible.
Re: (Score:2)
Ignoring the fact that many of the places around the world are growing like mad, and fibre is being put up everywhere, even your local usage would have increased many times over.
10 years ago how many youtube videos were you watching? The provisioned bandwidth may have been the same but the utilisation would have increased, I guarantee it. Also 10 years ago I'd wager you were maybe one of only a handful of people with DSL in the street? I'm willing to bet now that every house has it and multiple 4G connectio
Re: (Score:2)
10 years ago youtube didn't exist. It's 8 years old.
Re: (Score:2)
This. There's likely trillions of dollars invested in IPv4 that is going to be around for decades. Consider the Internet like highways and train track widths - we're stuck with it for a very long time.
I'm probably missing the point, but isn't NDN just a way to do content-addressable lookup of data? And if so, why would we need to throw out IPv4 in order to use it? We already have lots of examples of that running over IPv4 (e.g. BitTorrent, or Akamai, or even Google-searches if you squint).
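(A toy sketch of the content-addressable idea mentioned above, in Python; the class and names are invented for illustration. The point is that a blob's "address" is a digest of its bytes, so any node holding a matching copy can answer, and the reply is self-verifying.)

```python
import hashlib

class ContentStore:
    """Toy content-addressable store: data is named by its SHA-256 digest."""
    def __init__(self):
        self._blobs = {}

    def put(self, data):
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data
        return key  # the "address" is derived from the content itself

    def get(self, key):
        data = self._blobs.get(key)
        # Self-verifying: any node's answer can be checked against the name.
        if data is not None and hashlib.sha256(data).hexdigest() != key:
            return None
        return data

store = ContentStore()
key = store.put(b"hello, world")
print(store.get(key))  # b'hello, world'
```

This is roughly what BitTorrent does with info-hashes, which is why the commenter's point stands: the idea already runs fine on top of IPv4.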
Re: (Score:2)
It's not like TCP/IP is the only protocol to have existed, there have been several that people have heard of and quite a few that most people don't know about. Even the OSI model itself was originally intended to be an implementation, rather than an abstraction, but DARPANET was so successful and r
Re:Great idea at the concept stage. (Score:4, Insightful)
Actually, the very reason we are stuck with IPv4 right now is due to a consortium just like this deciding that "pure v6, no NAT" is the way to go, which is effectively what is stifling deployment and adoption.
Carriers are now trying to figure out how to segment what they have for customers (/64 is the smallest routable subnet), and finding IPAM solutions to manage such large network sizes.
Truth be told, if they had gone with some form of v6 NAT, deployment could have been at least 25% done by now.
You can still restrict the BGP routes to /64, but once it hits inside the carrier network, they can route smaller subnets internally to hit their customers' public v6 addresses.
Couple that with translation tech like Cisco's AFT, and we would be much further along than we already are.
Re:Great idea at the concept stage. (Score:4, Insightful)
You can do that with IPv6 anyway, and without even bothering with NAT. Home devices can be assigned addresses in a local range, and will not be accessible from outside any more than if they were NATted, since IPs in such ranges are explicitly designed by the protocol spec to not be routable. As long as your cable modem adheres to the spec, there is no danger of accessing it from the outside any more than if it were behind a NAT.
Of course, in practice, I expect some kind of NAT solution will be in fairly wide use even in IPv6 anyways, since there will be no lack of use cases where you do not want your device to generally have a globally visible IP and be visible to the outside, but you may still have occasion to want to make requests of services in the outside world, using a local proxy to route the responses to those requests directly to your local IP, even though you do not have a global IP, much like NAT currently operates. This can also be solved by utilizing a global IP and configuring a firewall to block inbound traffic to that IP unless it is in response to a specific request by that device, but this is generally less convenient to configure properly than using a NAT-like arrangement.
Notwithstanding, at least with IPv6, the number of IPs is large enough that every device that anyone might ever want to have its own IP actually can, instead of only satisfying about 70 or 80% of users, like IPv4 does.
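The claim about non-routable ranges is easy to check with Python's stdlib ipaddress module (the sample addresses below are made up): unique-local (fc00::/7) and link-local (fe80::/10) IPv6 addresses are flagged as private/link-local rather than global.

```python
import ipaddress

# Illustrative addresses: a unique-local (ULA) and a link-local address,
# both defined by the protocol spec as not globally routable.
ula = ipaddress.IPv6Address("fd12:3456:789a::1")   # fc00::/7 unique-local
ll = ipaddress.IPv6Address("fe80::1")              # fe80::/10 link-local

print("ULA:        private =", ula.is_private, " global =", ula.is_global)
print("link-local: link_local =", ll.is_link_local, " global =", ll.is_global)
```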
Re: (Score:2)
Is it wrong that I don't want my home devices to be reachable from the outside unsolicited?
Use a stateful firewall? NAT is not a firewall.
Just because something has a globally unique IP address doesn't mean that it's globally reachable.
Re: (Score:3)
NAT is much simpler to use than setting up a firewall. And why would I want my personal network to use public IP addresses anyway?
For SOHO environments NAT is the perfect tool.
Re: (Score:3)
TCP supports 64k packets.
1500 bytes is the Ethernet MTU.
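Those two numbers meet through IP fragmentation; a back-of-envelope sketch (assuming a minimal 20-byte IPv4 header and no options):

```python
import math

MAX_IP_PACKET = 2**16 - 1   # IPv4 Total Length is a 16-bit field: 65535 bytes
ETHERNET_MTU = 1500         # typical Ethernet payload limit
IP_HEADER = 20              # minimal IPv4 header, no options

per_fragment = ETHERNET_MTU - IP_HEADER   # payload per Ethernet-sized fragment
max_payload = MAX_IP_PACKET - IP_HEADER   # payload of a maximum-size IP packet

# Fragments needed to carry one maximum-size IP packet over Ethernet
# (1480 is a multiple of 8, as fragment offsets require):
fragments = math.ceil(max_payload / per_fragment)
print(per_fragment, max_payload, fragments)  # 1480 65515 45
```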
Re:Great idea at the concept stage. (Score:5, Insightful)
You know some kind of ill-conceived "content protection" is going to be built into this protocol.
Not a chance (Score:2, Insightful)
Despite a few decades of research, TCP/IP is still the best thing we know for the task at hand. Yes, it is admittedly not really good at it, but all known alternatives are worse. This is more likely some kind of publicity stunt or serves some entirely different purpose.
Re:Not a chance (Score:4, Insightful)
Despite decades of research the horse and cart are still the best thing we know for the task at hand. Yes, it's admittedly not really good, but all the known alternatives are worse. This is more likely some kind of publicity stunt or serves some entirely different purpose.
Your statement as shown can be applied to the internal combustion engine, or any other technology. Rejecting any change out of hand without consideration is incredibly sad, if not dangerous to our species' future prospects. Yes, it's important to take everything with a grain of salt, but everything should be at least considered. It only takes one successful change to have a dramatic impact and improve the lives of many.
This goes for all technology, not just this specific problem.
Re: (Score:2)
I never said you had to accept ideas, just consider them.
Re: (Score:2)
All these ideas have been considered and continue to be considered. What do you think scientific publishing is? A joke? There is NOTHING THERE at this time. No candidate. New protocols are considered good if they are not too much worse than TCP/IP in general applications. Truth be told, most serious researchers have left that field though, as there is nothing to be gained and everything obvious (after a few years of research) has been discounted.
Really, stop talking trash. You have no clue about the st
Re: (Score:2)
All these ideas have been considered
Ahh I see now. There's no such thing as a new idea? Even if the old system has problems? Everything that can ever be invented has been invented.
I have to be honest I didn't read past that first sentence. I can only imagine the rest of your post follows this completely retarded preposition.
Re: (Score:2)
You probably also believe that they will eventually discover the philosopher's stone, as they may just not have considered the right idea so far.
Really, this is science. There are border conditions for what is possible and there are no fundamental breakthroughs out of the blue. But there is another good word for people like you: "sucker".
Re: (Score:2)
Oh my god this made me laugh.
A sucker is someone who believes something without evidence. What I am is someone who poopoos an idea because I believe we've already figured out the best way of doing something. Trust me we haven't, and we never will. If we had time machines I would suggest going and talking to people with kerosene lamps and tell them one day that they will be able to light their houses through this magical (they will think it is) thing called electricity.
Will we find the philosopher's stone? N
Re: (Score:2)
So you would call me, following the research in that area for 25 years now, "without consideration"? That is pretty dumb. For the SPECIFIC PROBLEM at hand, there is currently no better solution, despite constant research effort for a few decades. That is why it will not be replaced anytime soon.
I really hate mindless "progress fanatics" like you. No clue at all, insulting attitude and zero to contribute. Moron.
Re: (Score:2)
That depends, did you actually say you follow the research in the area for 25 years? Did you also look at the proposal in detail and make an assessment? Nope? Didn't think so!
Dammit Jim I'm a progress fanatic not a mind reader.
By the way the definition for progress is "development towards an improved or more advanced condition.".
Based on this I personally think that everyone should be a progress fanatic, and it will be sad when all the researchers turn into middle managers and naysayers and the world will sto
Re: (Score:2)
Which is why I call you a "progress fanatic", "clueless" and a "moron". Thanks for confirming my assessment.
And "progress fanatic" I will gladly put on a t-shit and wear proudly.
The other two labels you have no basis for other than hot-headed hatred of what I have said. Looks like several people agree with me too.
This was fun.
Re: (Score:2)
Your statement as shown can be applied to the internal combustion engine, or any other technology. Rejecting any change out of hand without consideration is incredibly sad
There are only so many hours in a day... ignoring/rejecting silliness out of ignorance is often a practical necessity.
Yes it's important to take everything with a grain of salt, but everything should be at least considered.
"Everything"
...sort of...includes magic unicorns and assorted demon things observed while trip-pin' on mushr00ms...
See also trusted Internets, motor/generator free energy machines and application of ternary logic to prevent IPv4 exhaustion.
It only takes one successful change to have a dramatic impact and improve the lives of many.
Well paying out that $25k to play is sure to improve the life of someone.
Re: (Score:2)
That's the wonderful thing about our world. Not everyone needs to be an expert in everything. But if you proclaim to be then ignoring/rejecting silliness out of ignorance....
Hang on this doesn't compute. If you're ignorant how do you know it's silly again?
I'm not saying everyone needs to check everything about everything. Just that the experts consider the solution.
On the other hand the parent is rejecting new ideas out of hand because it would be changing TCP/IP. That's not examining if a solution is silly
Re: (Score:2)
It's hard for me to see this as a significant improvement. It might make caching somewhat easier, I guess, by pushing the caching mechanism down to the routing layer.
How else is this an improvement? It seems like every problem they are trying to solve has been solved, and more elegantly, as long as you can see the beauty in the multi-layer stack. If you
Re: (Score:2)
Oh I agree it's probably not much of an improvement with technical merits. I was merely calling out the parent's attitude which appears to be that we should abandon all efforts to improve TCP/IP because we haven't had any luck in the past decade.
That's not how science works.
As for technical merits I don't think this standard has much that would warrant the incredible expense of implementing it.
Re: (Score:3)
Why should content protection be part of the Internet standard? Why do my devices (routers, computers, etc.) have to have built in DRM which will end up getting cracked, or at least possibly exploited from offshore?
This also is going to be met with a lot of suspicion. Who keeps the keys, gets to keep content locked, owns the license servers, and is able to come in via backdoors mandated as part of the protocol? The UN? Give me a break. China? Sure, we can trust them all right, provided we give them 51%
Re: Not a chance (Score:4, Insightful)
TCP/IP has the singular advantage that it is deeply entrenched, runs on a vast number of devices from supercomputers right down to single-chip computers. Is it perfect? Absolutely not, but it's a proven technology.
I'm sure in the fullness of time it will be replaced, or at least subsumed into some better protocol, and maybe this initiative will be the one that produces its successor... or not. I think TCP/IP is going to be with us for a very long time.
Re: (Score:2)
I agree. The best thing we can do at this time is careful tweaks in congestion control, buffering and error handling, but that is it. Also, if you have reasonable over-provisioning (i.e. >= 200% of what you use), TCP/IP even works pretty well for real-time applications. That is one of the factors that keeps it alive, over-provisioning is a far easier solution to its problems than changing the network, especially as bandwidth is only getting cheaper while the bandwidth actually needed for most application
Re: (Score:3)
But no one actually does that. In practice, even people writing low-level code encapsulate their send/receive in a function or a method, at which point SCTP doesn't give any real advantages. The idea of channels is kind of cool, but for it to be really useful, they would need guaranteed bandwidth (or once again, encapsulating your network code
Re: (Score:2)
The advantage of SCTP is that it is not a retarded implementation of go back N. Which means it can operate efficiently at high speeds on unreliable networks. Also the channels could be easily and automatically used with HTTP to replace the inefficient pipelining. With TCP something like SPDY had to reimplement channels on a higher level.
Re: (Score:3)
The advantage of SCTP is that it is not a retarded implementation of go back N.
SCTP has all the same limitations as TCP at the SCTP stream level.
Which means it can operate efficiently at high speeds on unreliable networks. Also the channels could be easily and automatically used with HTTP to replace the inefficient pipelining. With TCP something like SPDY had to reimplement channels on a higher level.
This is semantically identical to opening multiple TCP sessions - one for each stream. If you were to lower round trip cost of subsequent session setup in TCP to zero (e.g. fast open extensions) then you essentially have the useful advantage of SCTP without SCTP.
The only benefit SCTP has is multipath failover baked in, and you can't even use the extra paths concurrently; they only exist as a contingency.
Re: (Score:3)
We do not know whether there is a better solution, but currently we do not have one, despite decades of research. What would you do, start breaking things?
Re: Not a chance (Score:4, Interesting)
NDN looks like a scheme to tag data and change networks from "addressing a particular node" to "addressing data". This is like changing the Post Office such that a person addresses a particular letter sent to them, rather than having a house number where letters get delivered.
Computer addresses with DNS on top make sense: it's easy to subdivide and route, and name translation allows humans to interact with it. NDN looks like it's trying to make the names the addresses, and make the URIs the names, and make the routers act as caches, and hope it all works; but then how do I address a *computer*? How do I ask for anything other than HTTP?
NDN looks like p2pwww stuff I designed back in 2004, except trying to implement as a network protocol on the routers, rather than an application protocol on the nodes. Even then, I specified digital signatures, encryption, and network namespace isolation: you could have an ICANNWeb which signed certificates for each name (i.e. Microsoft) and, on ICANNWeb, you would put out a message (P2P) for Microsoft://www/windowsxp/support.aspx and get back responses for (have|know|home)--node has a copy recent as per [date], node knows who has a copy recent as per [date], node knows the home is [address]--and select from there. Each resource would be digitally signed with generation date stamp and expiration date stamp, and a new generation date stamp overrides an earlier expiration date stamp.
In short: you'd get on a Gnutella-like network, perform a search, and be told where the resource is. Data was such that you could identify newer, identical, and expired resources. Your node could say, "0-3 hops", then "4-6 hops", incrementally crawling the network; or "3 hops past first response, limit 10". Usually if a node knows another node has a copy, that other node also knows several (it got its copy somehow--by its own request). If a node locates nodes with multiple versions, it provides outdated nodes with provable evidence that they're outdated, so they can drop their caches and learn some other node has a more up-to-date copy. Likewise, when those nodes are queried, they will then re-query the nodes they know have copies, and update them: an update doesn't trigger this cycle--too much traffic.
That's application-level. A locatable, self-caching network which encapsulates all resources in digital signatures and allows for namespaces. It sounds like that's what they're trying to accomplish, but in the transport layer.
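The versioning rule described above ("a new generation date stamp overrides an earlier expiration date stamp") can be sketched in a few lines; the field names here are my own, not from any actual p2pwww spec:

```python
from dataclasses import dataclass

@dataclass
class CachedResource:
    name: str
    generated: int   # generation timestamp (e.g. Unix seconds)
    expires: int     # expiration timestamp

def fresher(a, b):
    """Which copy survives when two nodes compare caches: the newer
    generation wins, even if the older copy has not yet expired."""
    return a if a.generated >= b.generated else b

old = CachedResource("Microsoft://www/windowsxp/support.aspx", generated=1000, expires=9999)
new = CachedResource("Microsoft://www/windowsxp/support.aspx", generated=2000, expires=3000)
print(fresher(old, new) is new)  # True: newer generation overrides later expiry
```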
Mass media takeover and destruction of 'net (Score:5, Insightful)
This is basically designed to bring the old big-media, broadcast ways to the internet, and hence to destroy it, allowing for mass reproduction of centrally created corporate content, where independent voices are locked out. The protocol is designed for that: mass distribution of corporate-created, centrally distributed content to ignorant, consumption-only masses who are treated with disdain and as objects of manipulation by the elite. This is to bring back big media and the stranglehold they had for so many years on the information the public has access to.
With the IPv6 transition needed, it's time to focus on that rather than on this plan to destroy the internet and turn it into the digital equivalent of 100 channels of centrally produced, elite-controlled, one-way cable television programming designed to psychologically manipulate and control a feeble and dim-witted public.
No thanks, and get your #%#% hands off my internet.
Now I know why Tsinghua is involved (Score:5, Insightful)
I was puzzled by the involvement of Tsinghua University of China in this thing.
After reading your comment, it starts to make sense.
The Chinese Communist Party needs to regain control of the Internet (at least inside China), which explains why they endorse this new scheme so much.
Re:Mass media takeover and destruction of 'net (Score:4, Interesting)
I get what you're saying, but I don't get how NDN is supposed to replace TCP/IP. Sure, it replaces many things done with UDP, and it even can do some things better than TCP, but it's not going to be replacing IPvX any time soon, just as TCP and UDP and ICMP etc. can happily co-exist.
What I find interesting is that there's been an implementation of NDN/IP for YEARS -- it's called Freenet [freenetproject.org]. Something tells me that the sponsoring groups wouldn't like to see this particular implementation be the first thing to try out their new network layer however....
Re: (Score:2)
and it even can do some things better than TCP
Like what? I've been trying to figure that out, I can't see anything.
Re:Mass media takeover and destruction of 'net (Score:5, Interesting)
I don't think we're going to stop the progression you are describing. The method by which it is achieved may not be the one being discussed by UCLA and Cisco, but it's clear now that what slashdotters call "the Internet" is doomed and has been since all of those rebellions in northern africa/mideast a couple years ago. What most end-users call "the Internet" is just getting started, but certainly the application of it is as a control and monitoring system against dissent rather than a catalyst promoting freedom of information. The point where we have some hope of rallying the population to activism is the point where content providers and governments try to do things like completely disallow offline storage media. But not before then, because the population just plain doesn't understand what they have or what is at stake.
Re: (Score:2)
Different layers (Score:5, Insightful)
They are also funding a study to replace roads with run-flat tires. Oh, right, different layers.
Corporate Inertia (Score:4, Insightful)
Youtube video by Van Jacobson, from 2006 on this (Score:5, Interesting)
There is a talk on youtube from 2006 by Van Jacobson that describes this idea before it was called named data networking. It is really neat, and I am surprised that it has taken so long for somebody to actually try to implement it.
Prior Art? (Score:2)
That's funny (Score:2)
A bunch of broke folks saddled with student loans are looking to replace UCLA and Cisco; but they didn't bother to announce it.
Oh joy, stateful routers... (Score:2)
From the architecture page [named-data.net]:
Great, NAT-like state in every router...
Re: (Score:2)
And who controls the names and how much does it cost to be a data producer?
Baby steps (Score:3)
First, IPv6. If you can handle simple things like that, then we'll let you play with the important stuff.
Oh yeah. Flying cars too.
Only viable as a replacement for a subset of uses (Score:2, Interesting)
All the internet is NOT "give me data named thus." For example, this "NDN" doesn't seem to support logging in to a particular computer, you know, so that you can administer it. It doesn't seem to support sending a file to a particular printer. Maybe it might make an interesting overlay on IP, replacing existing content distribution techniques, like Akamai, but I'm not seeing it replace IP.
-- david newall
Re: (Score:2)
For example, this "NDN" doesn't seem to support logging in to a particular computer, you know, so that you can administer it. It doesn't seem to support sending a file to a particular printer.
How about, giving your printer a particular name, and giving your computer a particular name? I'm pretty sure they've thought about that particular problem.
Just in time! (Score:3, Funny)
Yeah, that's gonna work (Score:2)
We can't even get TCP/IP v6 off the ground, and they want to try this?
Come on guys - it's 2014 not 1994 (Score:2)
So, tell us what we really want to know? (Score:3, Insightful)
How is this going to harm the everyday Internet user? I imagine at the very least it will make it more difficult for two random internet users to connect to each other, because all connections will probably have to be approved by Verisign or some other shit like that.
Remember folks, the age of innovation is over. We are now in the age of control and oppression. Everything "new" is invented for one purpose and only one purpose - to control you more effectively.
I don't see this as so horrible (Score:5, Informative)?
Re: (Score:2)
But why can the two protocols not run on top of the same Layer 2 infrastructure?
Because once they do get it rolled out, only "terrorists" (properly pronounced 'tarrists') will be using IPv4 or IPv6.
Re: (Score:2)?
Or use, you know, like multicast or something...?
Re: (Score:2)
Multicast is fine when every reciever wants the same thing at the same time. Good for broadcasting live events. Not very good for things like youtube, where millions of people will want to watch a video but very few of them simutainously, and those that do may want to pause it at any moment and resume playback hours later.
This is BAD. Very very BAD. (Score:5, Interesting)
In a nutshell, this is applying DRM to all of your connection attempts. You will only be able to make connections that are "authorized" by TPTB.
No more free and open networking.
Multicast + caching? (Score:2)
As I read the descriptions of NDN, I can't quite see what the difference between NDN and ip multicast is.
If the problem is inefficient use of resources due to over replication, didn't multicast solve that? Add caching boxes, and hey! You just invented IPTV!
SMTP (Score:2)
As long as you're replacing the "DNA" of the Internet, wouldn't replacing SMTP be a better thing to start with? (To prevent spam, or at least untraceable spam?)
Re: (Score:2)
The major flaw is any new bandwagon is going to have the spammers climbing aboard as early adopters. Any barriers to entry are going to be more difficult for the general public to negotiate than the spammers, since the spammers have the means to bot, buy or mule their way around them.
With so much distributed malware around, as well as various other means, the spammers can send from trusted addresse
Re: (Score:2)
I think we need that form of why a suggestion to stop spam is not new and is not going to be a silver bullet.
Please, no. That form has rejected far too many good solutions. It's issue is that it insists that we remove spam without changing how email works and what we use it for, as if we can expect something to change even though we refuse to change it. I recall one suggestion got that form as a reply with nothing but the "it won't work for mailing lists" box checked. Is it really too much to tell people running mailing lists to find some other means to do what they do, if it will eliminate spam for everyone e
Re: (Score:2)
Personally, I think XMPP has the problem solved well enough. Their general architecture is superior to email in terms of verifying that you really know where a message came from, so if you receive spam from user@example.com,
XMPP is embarrassingly similar to email it only seems less spammy because nobody uses it.
...and because each server knows the contact list of its users, it has a good clue about whether that message is spam even before doing any content analysis
Reputation analysis by more voodoo algorithms which assume server is big enough to develop any meaningful clue and not misinterpret results. I'm sick of algorithms... email at the very least used to be reliable...now it is anyone's guess whether a message will be silently dropped for no human understandable reason.
because there's no culture of "spam is an unavoidable problem" in XMPP, nor is there even a culture of "bulk messaging must be allowed" and so no one can even claim ignorance about what their users are doing.
More like a culture of denial. XMPP does NOT meaningfully address spam in any way that matters.
but for now it seems the spammers don't even care about XMPP, probably because email isn't just low-hanging fruit, it's fruit that has fallen from the tree and has been rotting on the ground for years.
Keep on d
Re: (Score:2)
Yes, but also mainly due to my next point
Obviously your strawman example would no do such a thing if it was really that good because it would have been adopted and forced upon those with mailing lists. Let's please keep this an honest discussion without hysterical bullshit that insults the intelligence of the reader.
As for your suggestion, it appe
Magnet Links (Score:4, Interesting)
Since every single goddamned one of you has used magnet links, you should be comfortable with the idea of requesting objects rather than discussions with particular hosts. Taking this idea and running with it is NDN. It's an excellent network research subject.
It facilitates caching, multipathing... with some more work perhaps network coding to get close to the min-cut bound. Bittorrent is super successful because it's all about the content. Let's give a similar protocol a chance at changing the net.
Another False Technology Headline (Score:2)
If Slashdot editors can't even get the technology headlines correct, how is it better than Reddit, Fark, or any other news aggregator site?
Damn you guys have fallen far.
A Likely Story.... (Score:2) i
Re: (Score:2)
That's the main selling point. It gives routers a lot more information about what they are routing, allowing them to enforce usage rules. Things like 'only redistribute content signed by those who paid to use our new content distribution system' or 'Do not distribute media from Netflix tagged as licensed for distribution in the US only.'
There's the core of a good idea. CAN is a great idea - power savings, bandwidth savings, faster internet, more reliable, hosting costs slashed. But this starts off with CAN
The reason the government wants this... (Score:3, Informative)
For those who don't see why this is bad, consider this:
In order to route/cache by data, the data must be visible to the routing nodes; in essence, you would no longer be able to use end-to-end encryption. You could still have point-to-point (eg: encryption for wireless connections), but everything would be visible to routing nodes, by necessity. This means no more hiding communications from the government (who taps all the backbone routers), no TOR routing, no protection from MTM attacks, by design. You get the promise of more efficiency, at the cost of your privacy/freedom... and guess what, you'll get neither in this case, too.
Re: (Score:2)
Slight correction: It does include protection from MITM attacks: There's a hash for the content that the endpoint verifies. So it does prevent spoofing content, so long as the endpoint has the correct address. It does't stop your ISP from monitoring exactly what you are getting though - it makes that a whole lot easier, as there's no way the requests could be encrypted.
Like BitTorrent, but lower level. (Score:2)
I need to read more about this. At first glance, it's kind of like BitTorrent, but at a lower level in the protocol stack. Or like Universal Resource Identifiers (remember those?) at a higher level. The general idea seems to be to make cacheing easier at the expense of making everything else more complex.
overlay (Score:2)
It looks like this would be more likely to be an overlay to TCP/IP than to replace it, with the idea of 'protected' content distribution being a driver.
Of course, as with any other content distribution mechanism, there will no doubt be ways to copy it once it reaches your living room (or wherever).
This looks terrible. (Score:5, Interesting)
It looks like they started out with Content Addressible Networking, which is a great idea. Massive bandwidth savings, improved resilience, faster performance, power savings, everything you could want. But then rather than try to impliment CAN properly alongside conventional networking they went for some ridiculous micro-caching thing, over-complicated intermediate nodes that enforce usage rules, some form of insane public-key versioning system validated by intermediate nodes and generally ended up with a monstrosity.
CAN is a great idea. NDN is a terrible implimentation of CAN. The main selling points include having DRM capability built into the network itsself, so if you try to download something not authorised for your country the ISP router can detect and block it. A simple distributed cache would achieve the same benefits with a much simpler design.
There's the core of a great idea in there, burried deep in the heap of over-engineered complexity that appears designed not to bring benefits to performance but rather to allow ISPs to readily decide exactly what content they wish to allow to be distributed and by whome. This thing is designed to allow the network devices to transcode video in real time to a lower bitrate - putting that kind of intelligence in the network is insane!
Re: (Score:3)
Don't worry the NSA and GCHQ interests are being covered by China. | http://tech.slashdot.org/story/14/09/04/2156232/ucla-cisco-more-launch-consortium-to-replace-tcpip?utm_source=rss1.0mainlinkanon&utm_medium=feed | CC-MAIN-2015-40 | refinedweb | 6,081 | 70.33 |
The Convention Plugin is bundled with Struts since 2.1 and replaces the Codebehind Plugin and Zero Config plugins. It provides the following features:
The Convention Plugin should require no configuration to use. Many of the conventions can be controlled using configuration properties and many of the classes can be extended or overridden.
In order to use the Convention plugin, you first need to add the JAR file to the
WEB-INF/lib directory of your application or include the dependency in your project's Maven POM file.
Where X.X.X is the current version of Struts 2. Please remember that the Convention Plugin is available from version 2.1.6.
See this page for the required changes and tips.
If you are using REST with the Convention plugin, make sure you set these constants in struts.xml:
Now that the Convention plugin has been added to your application, let's start with a very simple example. This example will use an actionless result that is identified by the URL. By default, the Convention plugin assumes that all of the results are stored in WEB-INF/content. This can be changed by setting the property
struts.convention.result.path in the Struts properties file to the new location. Don't worry about trailing slashes, the Convention plugin handles this for you. Here is our hello world JSP:
If you start Tomcat (or whichever J2EE container you are using) and type in (assuming that your context path is "
/", ie. starting application from Eclipse) into your browser you should get this result:
This illustrates that the Convention plugin will find results even when no action exists and it is all based on the URL passed to Struts.
Let's expand on this example and add a code behind class. In order to do this we need to ensure that the Convention plugin is able to find our action classes. By default, the Convention plugin will find all action classes that implement
com.opensymphony.xwork2.Action or whose name ends with the word Action in specific packages.
These packages are located by the Convention plugin using a search methodology. First the Convention plugin finds packages named
struts,
struts2,
action or
actions. Any packages that match those names are considered the root packages for the Convention plugin. Next, the plugin looks at all of the classes in those packages as well as sub-packages and determines if the classes implement
com.opensymphony.xwork2.Action or if their name ends with Action (i.e. FooAction). Here's an example of a few classes that the Convention plugin will find::
Next, the plugin determines the URL of the resource using the class name. It first removes the word Action from the end of the class name and then converts camel case names to dashes. In our example the full URLs would be:
You can tell the Convention plugin to ignore certain packages using the property
struts.convention.exclude.packages. You can also tell the plugin to use different strings to locate root packages using the property
struts.convention.package.locators. Finally, you can tell the plugin to search specific root packages using the property
struts.convention.action.packages.
Here is our code behind action class:
If you compile this class and place it into your application in the WEB-INF/classes, the Convention plugin will find the class and map the URL /hello-world to it. Next, we need to update our JSP to print out the message we setup in the action class. Here is the new JSP:
If start up the application server and open up in our browser, we should get this result:.
Building on our example from above, let's say we want to provide a different result if the result code from our action is the String
zero rather than
success. First, we update the action class to return different result codes:
Next, we add a new JSP to the application named
WEB-INF/content/hello-world-zero.jsp. Notice that the first part of the file name is the same as the URL of the action and the last part of the name is the result code. This is the convention that the plugin uses to determine which results to render. Here is our new JSP:
Now, if you compile the action and restart the application, based on the current time, you'll either see the result from
WEB-INF/content/hello-world.jsp or
WEB-INF/content/hello-world-zero.jsp.
The result type is based on the extension of the file. The supported extensions are: jsp,ftl,vm,html,html. Examples of Action and Result to Template mapping:
It is possible to define multiple names for the same result:
Such functionality was added in Struts 2.5
If one action returns the name of another action in the same package, they will be chained together, if the first action doesn't have any result defined for that code. In the following example:
The "foo" action will be executed, because no result is found, the Convention plugin tries to find an action named "foo-bar" on the same package where "foo" is defined. If such an action is found, it will be invoked using the "chain" result.
Actions are placed on a custom XWork package which prevents conflicts. The name of this package is based on the Java package the action is defined in, the namespace part of the URL for the action and the parent XWork package for the action. The parent XWork package is determined based on the property named
struts.convention.default.parent.package(defaults to
convention-default), which is a custom XWork package that extends
struts-default.
Therefore the naming for XWork packages used by the Convention plugin are in the form:
Using our example from above, the XWork package for our action would be:
The Convention plugin uses a number of different annotations to override the default conventions that are used to map actions to URLs and locate results. In addition, you can modify the parent XWork package that actions are configured with.
The Convention plugin allows action classes to change the URL that they are mapped to using the Action annotation. This annotation can also be used inside the Actions annotation to allow multiple URLs to map to a single action class. This annotation must be defined on action methods like this:
Our action class will now map to the URL
/different/url rather than
/hello-world. If no
@Result (see next section) is specified, then the namespace of the action will be used as the path to the result, on our last example it would be
/WEB-INF/content/different/url.jsp.
A single method within an action class can also map to multiple URLs using the Actions annotation like this:.
There are circumstances when this is desired, like when using Dynamic Method Invocation. If an
execute method is defined in the class, then it will be used for the action mapping, otherwise the method to be used will be determined when a request is made (by Dynamic Method Invocation for example)
Interceptors can be specified at the method level, using the Action annotation or at the class level using the
InterceptorRefs annotation. Interceptors specified at the class level will be applied to all actions defined on that class. In the following example:
The following interceptors will be applied to "action1":
interceptor-1, all interceptors from
defaultStack,
validation.
All interceptors from
defaultStack will be applied to "action2".
The Convention plugin allows action classes to define different results for an action. Results fall into two categories, global and local. Global results are shared across all actions defined within the action class. These results are defined as annotations on the action class. Local results apply only to the action method they are defined on. Here is an example of the different types of result annotations:
Parameters can be passed to results using the params attribute. The value of this attribute is a string array with an even number of elements in the form {"key0", "value0, "key1", "value1" ... "keyN", "valueN"}. For example:
From 2.1.7 on, global results (defined on the class level) defined using annotations will be inherited. Child classes can override the inherited result(s) by redefining it. Also, results defined at the method level take precedence (overwrite), over results with the same name at the action level.
The namespace annotation allows the namespace for action classes to be changed instead of using the convention of the Java package name. This annotation can be placed on an action class or within the package-info.java class that allows annotations to be placed on Java packages. When this annotation is put on an action class, it applies to all actions defined in the class, that are not fully qualified action URLs. When this annotation is place in the package-info.java file, it changes the default namespace for all actions defined in the Java package. Here is an example of the annotation on an action class:
In this example, the action will respond to two different URLs
/different/url and
/custom/url.
Here is an example of using this annotation in the package-info.java file:
This changes the default namespace for all actions defined in the package
com.example.actions. This annotation however doesn't apply to sub-packages.
The ResultPath annotation allows applications to change the location where results are stored. This annotation can be placed on an action class and also in the package-info.java file. Here is an example of using this annotation:
The result for this class will be located in
WEB-INF/jsps rather than the default of
WEB-INF/content.
The ParentPackage annotation allows applications to define different parent Struts package for specific action classes or Java packages. Here is an example of using the annotation on an action class:
To apply this annotation to all actions in a package (and subpackages), add it to package-info.java. An alternative to this annotation is to set
struts.convention.default.parent.package in XML.
This annotation can be used to define exception mappings to actions. See the exception mapping documentation for more details. These mappings can be applied to the class level, in which case they will be applied to all actions defined on that class:
The parameters defined by
params are passed to the result. Exception mappings can also be applied to the action level:
By default the Convention plugin will not scan jar files for actions. For a jar to be scanned, its URL needs to match at least one of the regular expressions in
struts.convention.action.includeJars. In this example
myjar1.jar and
myjar2.jar will be scanned:
Note that the regular expression will be evaluated against the URL of the jar, and not the file name, the jar URL can contain a path to the jar file and a trailing "!/".
The Convention plugin can automatically reload configuration changes, made in classes the contain actions, without restarting the container. This is a similar behavior to the automatic xml configuration reloading. To enable this feature, add this to your
struts.xml file:
This feature is experimental and has not been tested on all container, and it is strongly advised not to use it in production environments.
When using this plugin with JBoss, you need to set the following constants:
You can also check the JBoss 5 page for more details.
When using this plugin with Jetty in embedded mode, you need to set the following constants:
/orders/view.actionis not mapping to any action class. Check the namespace and the name of the action.
successresult for it. Check that the result file exists, like
/WEB-INF/content/orders/view-success.jsp.
struts.convention.action.includeJarsis matching jar URLs from external jars.
struts.convention.default.parent.package) passing the name of the package that defines the interceptor, or 2) Create a package in XML that extends the package that defines the interceptor, and use @ParentPackage(or
struts.convention.default.parent.package) to point to it.
The Convention plugin can be extended in the same fashion that Struts does. The following beans are defined by default:
To plugin a different implementation for one of these classes, implement the interface, define a bean for it, and set the appropriate constant's value with the name of the new bean, for example:
Add a constant element to your struts config file to change the value of a configuration setting, like: | https://cwiki.apache.org/confluence/plugins/viewsource/viewpagesrc.action?pageId=105613 | CC-MAIN-2018-34 | refinedweb | 2,094 | 53.81 |
Re-sizing the Image using Interpolation
When we have an image, it is always taken as a
2D matrix. The size of the image is nothing but the dimension of the matrix.
Credits of Cover Image - Photo by Kai Krog Halse on Unplash
In Python when we read the image, the size can be easily found by the
.shape method. In order to find the shape, we should first read the image and obtain the matrix.
Let's first implement for a
2D matrix or array and then replicate the same for images.
Finding Size of a Matrix
First, we are creating a matrix using the module
NumPy. The code for the same can be seen below.
import random import numpy as np mat = np.array([ [random.randint(10, 100) for i in range(5)] for j in range(5) ]) print(mat) [[ 30 91 12 44 52] [ 37 72 19 100 77] [ 94 77 60 48 64] [ 65 26 59 52 40] [ 37 58 13 74 36]] >>>
We will use the method
.shape to find out the dimension of the matrix. Clearly, the matrix has
5 rows and
5 columns.
5, 5) >>>dimension = mat.shape print(dimension) (
If we plot the same, we will get the matrix image as below.
import matplotlib.pyplot as plt plt.figure(figsize=(12, 6)) <Figure size 1200x600 with 0 Axes> mat_plot = plt.imshow(mat) plt.colorbar(mat_plot) <matplotlib.colorbar.Colorbar object at 0x0AF04418> plt.show() >>>
Size Manipulation of the Matrix
We can increase the size by using the interpolation technique. In mathematics, interpolation is a type of estimation to construct new data points within the range of a discrete set of known data points.
Let's create a function that will resize the given matrix and returns a new matrix with the new shape.
def resize_image(image_matrix, nh, nw): if len(image_matrix.shape) == 3: oh, ow, _ = image_matrix.shape else: oh, ow = image_matrix.shape re_image_matrix = np.array([ np.array([image_matrix[(oh*h // nh)][(ow*w // nw)] for w in range(nw)]) for h in range(nh) ]) return re_image_matrix
Credits of the above code - NumPy scaling the Image.
Function Breakdown
We are passing three arguments -
image_matrix→ Basically any matrix that we want to change the dimension.
nh→ New height (goes row-wise).
nw→ New width (goes column-wise).
In the function, we are taking old height (
oh) and old width (
ow) according to the length of the image matrix's shape. The shape of the image varies for the colored image and grayscale image.
Also, we have 2
for loops for 2 levels (row-wise and column-wise) and performing an integer division for each iterative in the range of new height and new width. This will decide the index for which the element is extracted from the matrix.
Let's increase the matrix size where the new matrix should have
8 rows and
8 columns.
8, nw=8) print(re_mat) [[ 30 30 91 91 12 44 44 52] [ 30 30 91 91 12 44 44 52] [ 37 37 72 72 19 100 100 77] [ 37 37 72 72 19 100 100 77] [ 94 94 77 77 60 48 48 64] [ 65 65 26 26 59 52 52 40] [ 65 65 26 26 59 52 52 40] [ 37 37 58 58 13 74 74 36]] >>>re_mat = resize_image(image_matrix=mat, nh=
We can clearly see, the new matrix
re_mat is bigger than the original matrix
mat. The dimension of the
re_mat is
8x
8. If we were to visualize the same, there won't any particular difference between the original matrix plot and the new matrix plot.
1, ncols=2, figsize=(10, 20)) ax1.title.set_text('Original') ax2.title.set_text("Re-sized") ax1.imshow(mat) ax2.imshow(re_mat) plt.show() >>>fig, (ax1, ax2) = plt.subplots(nrows=
We can notice the difference in the axis scaling. The second image is more scaled than the first image. The resizing is handled with care to retain the pixel values. The same thing is applied to the image for increasing and decreasing the size respectively.
Note - We can use the
OpenCV's resizing techniques. I wanted to learn how we can do the same from scratch. | https://msameeruddin.hashnode.dev/re-sizing-the-image-using-interpolation?guid=none&deviceId=357cefc8-0532-44ef-ae0f-1612e5af379b | CC-MAIN-2021-10 | refinedweb | 696 | 65.01 |
In the last posts, we have seen how we can set up a Kubernetes cluster on Amazon's EKS platform and spin up our first nodes. Today, we will create our first workloads and see pods and deployments in action.
Creating pods
We have already introduced pods in an earlier post as the smallest units that Kubernetes manages. To create pods, there are several options. First, we can use the kubectl command line tool and simply pass the description of the pod as arguments. Second, we can use a so-called manifest file which contains the specification of the pod, which has the obvious advantage that this file can be reused, put under version control, developed and tested, according to the ideas of the "Infrastructure as code" approach. A manifest file can either be provided in JSON format or using the YAML markup language. And of course, we can again use the Kubernetes API directly, for instance by programming against the Python API.
In this post, we will demonstrate how to create pods using a manifest file in YAML format and Python scripts. We start with a YAML file which looks as follows.
apiVersion: v1
kind: Pod
metadata:
  name: naked-pod-demo
  namespace: default
spec:
  containers:
  - name: naked-pod-demo-ctr
    image: nginx
Let us go through this file step by step to understand what it means. First, this is a YAML file, and YAML is essentially a format for specifying key-value pairs. Our first key is apiVersion, with value v1. This is the first line in all manifest files and simply specifies the API version that we will use.
The next key – kind – specifies the type of Kubernetes resource that we want to access, in this case a pod. The next key is metadata. The value of this key is a dictionary that again has two keys – name and namespace. The name is the name that our pod will have. The namespace key specifies the namespace in which the pod will start (namespaces are a way to segment Kubernetes resources and are a topic for a future post, for the time being we will simply use the so-called default namespace).
The next key – spec now contains the actual specification of the pod. This is where we tell Kubernetes what our pod should be running. Remember that a pod can contain one or more application containers. Therefore there is a key containers, which, as a value, has a list of containers. Each container has a name and further attributes.
In our case, we create one container and specify that the image should be nginx. At this point, we can specify every image that we could also specify if we did a
docker run locally. In this case, Kubernetes will pull the nginx image from the default docker registry and run it.
So let us now trigger the creation of a pod by passing this specification to kubectl. To do this, we use the following command (assuming that you have saved the file as
naked_pod.yaml; the reason for this naming will become clear later).
kubectl apply -f naked_pod.yaml
After a few seconds, Kubernetes should have had enough time to start up the container, so let us use kubectl to check whether the node has been created.
$ kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
naked-pod-demo   1/1     Running   0          38s
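As mentioned at the beginning, manifests are not the only way to create a pod — we can also talk to the Kubernetes API from Python. The sketch below builds the same pod specification as naked_pod.yaml as a plain dictionary; the helper name `build_pod_manifest` is my own invention, and the commented-out submission step assumes the official `kubernetes` client package and a working kubeconfig, neither of which is shown in this post.

```python
# Build a pod manifest equivalent to naked_pod.yaml as a plain dict.
# The dict mirrors the YAML structure key for key.
def build_pod_manifest(name, image, namespace="default"):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "containers": [
                {"name": name + "-ctr", "image": image}
            ]
        },
    }

manifest = build_pod_manifest("naked-pod-demo", "nginx")

# Against a real cluster, this dict could be submitted directly
# (requires the "kubernetes" package and a valid kubeconfig):
#   from kubernetes import client, config
#   config.load_kube_config()
#   client.CoreV1Api().create_namespaced_pod(namespace="default", body=manifest)
```

Because the manifest is just a dictionary, it is easy to generate pods programmatically, for instance in a test harness that spins up many slightly different pods.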
Nice. Let us take a closer look at this pod. If you run
kubectl get pods -o wide, kubectl will give you a bit more information on the pod, including the name of the node on which the pod is running. So let us SSH into this node. To easily log into one of your nodes (the first node in this case, change Node to 1 to log into the second node), you can use the following sequence of commands. This will open the SSH port for all instances in the cluster for incoming connections from your current IP address. Note that we use the cluster name as a filter, so you might need to replace myCluster in the commands below by the name of your cluster.
$ Node=0
$ IP=$(aws ec2 describe-instances --output text --filters Name=tag-key,Values=kubernetes.io/cluster/myCluster Name=instance-state-name,Values=running --query Reservations[$Node].Instances[0].PublicIpAddress)
$ SEC_GROUP_ID=$(aws ec2 describe-instances --output text --filters Name=tag-key,Values=kubernetes.io/cluster/myCluster Name=instance-state-name,Values=running --query Reservations[0].Instances[0].SecurityGroups[0].GroupId)
$ myIP=$(wget -q -O-)
$ aws ec2 authorize-security-group-ingress --group-id $SEC_GROUP_ID --port 22 --protocol tcp --cidr $myIP/32
$ ssh -i ~/eksNodeKey.pem ec2-user@$IP
Use this – or your favorite method (if you use my script up.sh to start the cluster, it will already open ssh connections to both nodes for you) – to log into the node on which the pod has been scheduled by Kubernetes and execute a
docker ps there. You should see that, among some management containers that Kubernetes brings up, a container running nginx appears.
Now let us try a few things. First, apply the YAML file once more. This will not bring up a second pod. Instead, Kubernetes realizes that a pod with this name is already running and tells you that no change has been applied. This makes sense – in general, a manifest file specifies a target state, and we are already in the target state, so there is no need for action.
Now, on the EKS worker node on which the pod is running, use
docker kill to actually stop the container. Then wait for a few seconds and do a
docker ps again. Surprisingly, you will see the container again, but with a different container ID. What happened?
The answer is hidden in the logfiles of the component of Kubernetes that controls a single node – the kubelet. On the node, the kubelet is started using systemctl (you might want to verify this in the AWS provided bootstrap.sh script). So to access its logs, we need
$ journalctl -u kubelet
Close to the end of the logfile, you should see a line stating that … container … is dead, but RestartPolicy says that we should restart it. In fact, the kubelet constantly monitors the running containers and looks for deviations from the target state. The restart policy is a piece of the specification of a pod that tells the kubelet how to handle the case that a container has died. This might be perfectly fine (for batch jobs), but the default restart policy is Always, so the kubelet will restart our container.
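This watch-and-correct behaviour is an instance of the reconciliation pattern that runs through all of Kubernetes: compare the desired state with the observed state and act on the difference. A toy sketch of the idea in plain Python (purely illustrative, not actual kubelet code):

```python
# Toy reconciliation loop: derive the actions needed to move the observed
# state towards the desired state. Illustrative only - the real kubelet
# tracks much richer state than a set of container names.

def reconcile(desired, observed):
    """Return (to_start, to_stop) for a set of container names."""
    to_start = desired - observed   # missing or dead containers get restarted
    to_stop = observed - desired    # containers not in the spec get stopped
    return to_start, to_stop

# After our "docker kill", nginx is desired but no longer observed,
# so the loop schedules a restart:
print(reconcile({"nginx"}, set()))       # ({'nginx'}, set())
print(reconcile({"nginx"}, {"nginx"}))   # (set(), set()) - nothing to do
```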
This is nice, but now let us simulate a more drastic event – the node dies. So on the AWS console, locate the node on which the pod is running and terminate the node.
After a few seconds, you will see that a new node is created, thanks to the AWS auto-scaling group that controls our nodes. However, if you check the status of the pods using
kubectl get pods, you will see that Kubernetes did not reschedule the pod on a different node, nor does it restart the pod once the replacement node is up and running again. So Kubernetes takes care (via the kubelet) of the containers that belong to a pod on a per-node level, but not on a per-cluster level.
In production, this is of course typically not what you want – instead, it would be nice if Kubernetes could monitor the pods for you and restart them automatically in case of failure. This is what a deployment does.
Replica sets and deployments
A deployment is an example for a more general concept in Kubernetes – controllers. Basically, this is a component that constantly monitors the cluster state and makes changes if needed to get back into the target state. One thing that a deployment does is to automatically bring up a certain number of instances called replicas of a Docker image and distribute the resulting pods across the cluster. Once the pods are running, the deployment controller will monitor them and take action if the number of running pods deviates from the desired number of pods. As an example, let us consider the following manifest file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpine
spec:
  selector:
    matchLabels:
      app: alpine
  replicas: 2
  template:
    metadata:
      labels:
        app: alpine
    spec:
      containers:
      - name: alpine-ctr
        image: httpd:alpine
The first line specifies the API version as usual (though we need to use a different value here – the additional apps is due to the fact that the Deployment API is part of the API group apps). The second line designates the object that we want to create as a deployment. We then again have a name which is part of the metadata, followed by the actual specification of the deployment.
The first part of this specification is the selector. To understand its role, recall that a deployment is supposed to make sure that a specified number of pods of a given type are running. Now, the term “of a given type” has to be made precise, and this is what the selector is being used for. All pods which are created by our deployment will have a label with key “app” and value “alpine”. The selector field specifies that all pods with that label are to be considered as pods controlled by the deployment, and our controller will make sure that there are always exactly two pods with this label.
The second part of the specification is the template that the deployment uses to create the pods controlled by this deployment. This looks very much like the definition of a pod as we have seen it earlier. However, the label that is specified here of course needs to match the label in the selector field (kubectl will actually warn you if this is not the case).
Let us now delete our existing pod, run this deployment and see what is happening. Save the manifest file as
deployment.yaml, apply it and then list the running pods.
$ kubectl delete -f naked_pod.yaml
$ kubectl apply -f deployment.yaml
deployment.apps/alpine created
$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
alpine-8469fc798f-rq92t   1/1     Running   0          21s
alpine-8469fc798f-wmsqc   1/1     Running   0          21s
So two pods have been created and scheduled to nodes of our cluster. To inspect the container further, let us ask kubectl to provide all details as JSON and use the wonderful jq to process the output and extract the container information from it (you might need to install jq to run this).
$ kubectl get pods --output json | jq ".items[0].spec.containers[0].image"
"httpd:alpine"
So we see that the pods and the containers have been created according to our specification and run the httpd:alpine docker image.
Let us now repeat our experiment from before and stop one of the nodes. To do this, we first extract the instance ID of the first instance using the AWS CLI and then use the AWS CLI once more to terminate this instance.
$ instanceId=$(aws ec2 describe-instances --output text --filters Name=tag-key,Values=kubernetes.io/cluster/myCluster Name=instance-state-name,Values=running --query Reservations[0].Instances[0].InstanceId)
$ aws ec2 terminate-instances --instance-ids $instanceId
After some time, Kubernetes will find that the instance has died, and if you run a
kubectl get nodes, you will find that the node has disappeared. If you now run
kubectl get pods -o wide, however, you will still find two pods, but now both are running on the remaining node. So the deployment controller has replaced the pod that went down with our node by a new pod on the remaining node. This behaviour is in fact not realized by the deployment, but by the underlying ReplicaSet. We could also create a replica set directly, but a deployment offers some additional features, like the possibility to run a rolling upgrade automatically.
Creating deployments in Python
Having discussed how to create a deployment from a YAML file, let us again do the same thing in Python. The easiest approach is to walk our way upwards through the YAML file. After the usual preparations (imports, loading the configuration), we therefore start with the specification of the container.
from kubernetes import client

container = client.V1Container(
    name="alpine-ctr",
    image="httpd:alpine")
This is similar to our YAML file – in each pod, we want to run a container with the httpd:alpine image. Having the container object, we can now create the template section.
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(
        labels={"app": "alpine"}),
    spec=client.V1PodSpec(containers=[container]))
Again, note the label that we will use later to select the pods that our deployment controller is supposed to watch. Now we put the template into the specification part of our controller.
selector = client.V1LabelSelector(
    match_labels={"app": "alpine"})
spec = client.V1DeploymentSpec(
    replicas=2,
    template=template,
    selector=selector)
And finally, we can create the actual deployment object and ask Kubernetes to apply it (the complete Python script is available for download here).
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="alpine"),
    spec=spec)

apps = client.AppsV1Api()  # client object for the apps/v1 API group
apps.create_namespaced_deployment(
    namespace="default",
    body=deployment)
This completes our post for today. We now know how to create pods and deployments to run Docker images in our cluster. In the next few posts, we will look at networking in Kubernetes – services, load balancers, ingress and all that.
One thought on “Kubernetes 101 – creating pods and deployments”
This is one thing that crossed my mind recently.
I need to define an ansible playbook to have tests.
The yaml already has (arguably reasonable amount of) boilerplate:
- hosts: localhost # why do I need this?
  roles:
  - role: standard-test-basic # why is this not the default?
  - classic # dtto
Yet in order to provision more RAM, I need to create another yaml with:
---
standard-inventory-qcow2:
  qemu:
    m: 3G # Amount of VM memory
Why can't I simply state that in the first YAML?
Why do I need to say "standard-inventory-qcow2"? Those are implementation details. What I really need to say is: "I need 3 GiB RAM."
Yet that YAML doesn't work alone: I need to install a custom tool and run fmf init, which creates another file, tests/.fmf/version - and that contains just this:
1
Why can't I just put the FMF version in that one YAML?
And now if I want to gate, I need yet another YAML file for that.
For comparison, please take a look at major public CI services, such as Travis CI. There is exactly 1 YAML file. It has all I need, it lets me define:
It doesn't bother me with:
Yes, you are right in many points. That's why we are looking for a better solution for test metadata and ci metadata in long-term. Some of the main ideas are already described here:
For example, the memory use case you've mentioned should be very simple, as you say, without any unnecessary implementation details. Please, have a look at the Provision section here:
Does that make sense to you? We would be glad for any feedback.
I'll read that, thanks. The Provision section looks awesome.
It would be much easier to grasp if there was a complete example. I don't really understand the code snippets in there - are those YAMLs? tree dir views? Where do I put this...
You can find an initial bunch of examples at the top of the L2 page. For example workflow.fmf shows how individual steps would be configured in a single file.
The whole idea is based on fmf so basically yaml with a few enhancements such as inheritance and elasticity which prevent unnecessary duplication and minimize maintenance. Elasticity allows to store everything in a single file ci.fmf for small project as well as distribute metadata into separate files or even directories when the project grows.
Thanks.
Regarding the location: Single ci.fmf file could be stored directly in the dist git rpms namespace root. Another option is to have all L2 metadata files stored under a ci directory. Whatever is more suitable for particular project.
I've just seen the gating.yaml example and that is even worse. Not only does it have tedious boilerplate, but it has some custom YAML tags that make it unparsable by humans (me) and machines (PyYAML) alike.
Completely agreed. Here's the proposed alternative using fmf and L2 metadata:
/test/pull-request/pep:
  gate:
  - merge-pull-request
Does it look better? See also this pull request with more detailed documentation.
It does look better indeed!
@psss @churchyard can we consider this solved and close it out?
This is not solved at all. From the Fedora CI user perspective, nothing has changed since I've open this ticket. I understand everything discussed here is "future feature". Or have I misunderstood this and I can use the new fmf L2 thing today to run package tests?
Adding an API to a static site interpretation here.
If you consider just the read aspect of an API, it is possible to create such a thing for a static site, but only if you ignore things like searching, filtering, or otherwise manipulating the data.
That certainly sounds like a heck of a lot to give up, but it doesn’t mean your site is completely devoid of any way of sharing data. Consider for a moment that static site generators (SSGs) all have some way of outputting RSS. Folks don’t tend to think of it because it “just works”, but RSS is a data-friendly format for your blog and is - barely - an API for your site.
Let’s consider a real example. You’ve got a site for your e-commerce shop and you’ve set up various products. We’ll use Jekyll for the demo but any SSG would be fine. I created a new Jekyll site and then added a
_data folder. Within that folder I created a file called products.json. It was a simple array of products.
[
  {
    "name": "Apple",
    "description": "This is the Apple product.",
    "price": 9.99,
    "qty": 13409
  },
  {
    "name": "Banana",
    "description": "This is the Banana product.",
    "price": 4.99,
    "qty": 1409
  },
  {
    "name": "Cherry",
    "description": "This is the Cherry product.",
    "price": 9.99,
    "qty": 0
  },
  {
    "name": "Donut",
    "description": "This is the Donut product.",
    "price": 19.99,
    "qty": 923
  }
]
This is the standard way by which you can provide generic data for a Jekyll site. See the docs on this feature for more examples. At this point I can add product information to my site in HTML. I edited the main home page and just added a simple list. I decided to list the name and price - this was completely arbitrary.
<h1 class="page-heading">Products</h1>

<ul class="post-list">
  {% for p in site.data.products %}
  <li>{{ p.name }} at {{p.price}}</li>
  {% endfor %}
</ul>
And here it is on the site:
Yeah, not terribly pretty, but for a demo it will suffice. I could also create HTML pages for my products so you can click to view more information. (For Jekyll, that could be done by hand, or by using a generator to read the JSON file and automatically create the various detail pages.)
So let’s create a JSON version of our products. Technically, we already have the file, but it isn’t accessible. In order to make this public, we need to create a file outside of
_data. I chose to make a folder called
api and a file named
products.json. Here is how I made that file dynamically output the products in JSON format.
---
layout: null
---
{{ site.data.products | jsonify }}
Yeah, that’s it. So a few things. In order to have anything dynamic in a random page in Jekyll, you must use front matter. For me that was just fine as I wanted to ensure no layout was used for the file anyway. Jekyll also supports a
jsonify filter that turns data into JSON. So basically I went from JSON to real data to JSON again, and it outputs just fine in my browser:
Of course, this assumes that my core data file matches, 100%, to what I want to expose to my “API”. That may not work for every case. I could manually output the JSON by looping over my site data and picking and choosing what properties to output. Heck, I could even make new properties on the fly. For an example of this, see this Jekyll snippet: JSONify your Jekyll Site.
Cool! But what about sorting, filtering, etc? Well, we could do it manually. For example, I made a new file,
products.qty.json, that returns the list sorted by qty, with the highest first:
---
layout: null
---
{{ site.data.products | sort: "qty" | reverse | jsonify }}
This resulted in this JSON:
[{"name":"Apple","description":"This is the Apple product.","price":9.99,"qty":13409},{"name":"Banana","description":"This is the Banana product.","price":4.99,"qty":1409},{"name":"Donut","description":"This is the Donut product.","price":19.99,"qty":923},{"name":"Cherry","description":"This is the Cherry product.","price":9.99,"qty":0}]
I could do similar sorts for price or name. How about filtering? I built a new file,
products.instock.json, to represent products that have a
qty value over zero. I had hoped to do this in one line like in the example above, and Liquid (the template language behind Jekyll) does support a where filter, but from what I could see, it did not support a where filter based on a “greater than” or “not equal” status. I could be wrong. I just used the tip from the Jekyll snippet above.
---
layout: null
---
[
{% for p in site.data.products %}
{% if p.qty > 0 %}
  {
    "name":"{{p.name}}",
    "description":"{{p.description | escape}}",
    "price":{{p.price}},
    "qty":{{p.qty}},
  }
{% endif %}
{% endfor %}
]
And the result. Note the white space is a bit fatter. I could fix that by manipulating my source code a bit.
[
  {
    "name":"Apple",
    "description":"This is the Apple product.",
    "price":9.99,
    "qty":13409,
  }
  {
    "name":"Banana",
    "description":"This is the Banana product.",
    "price":4.99,
    "qty":1409,
  }
  {
    "name":"Donut",
    "description":"This is the Donut product.",
    "price":19.99,
    "qty":923,
  }
]
So I think you get the idea. If I wanted to, I could add any number of possible combinations (in stock, sorted by name, but with a price less than 100). It is definitely a manual process, and I’m not supporting dynamic sorting and filtering, but it is certainly something, and it may be useful to your site users.
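Since the static API cannot sort or filter on the server, a consumer of the API can always do that after fetching. Here is a quick sketch in Python using the sample product data from above (in a real client the list would come from an HTTP GET of /api/products.json):

```python
import json

# Stand-in for fetching /api/products.json; inlined sample data from the post.
raw = """[
  {"name": "Apple",  "description": "This is the Apple product.",  "price": 9.99,  "qty": 13409},
  {"name": "Banana", "description": "This is the Banana product.", "price": 4.99,  "qty": 1409},
  {"name": "Cherry", "description": "This is the Cherry product.", "price": 9.99,  "qty": 0},
  {"name": "Donut",  "description": "This is the Donut product.",  "price": 19.99, "qty": 923}
]"""
products = json.loads(raw)

# Client-side equivalent of products.instock.json: qty over zero.
in_stock = [p for p in products if p["qty"] > 0]

# Client-side equivalent of products.qty.json: sorted by qty, highest first.
by_qty = sorted(products, key=lambda p: p["qty"], reverse=True)

print([p["name"] for p in in_stock])  # ['Apple', 'Banana', 'Donut']
print(by_qty[0]["name"])              # Apple
```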
I’d love to know what you think of this technique, and if you are making use of a SSG and have done something like this, please share an example in the comments below!
Hello, I'm trying to get started with CGI scripts which I've read I can use from anywhere. What I'm trying to do is run a classified ad script and I've read through the instructions many times and every time I try to run it I get a 500 Internal Server Error.
I've gone in and done the 'pwd' command and put that at the top of the script where it tells me to but I still get the error. I've tried renaming it with a .pl extension with no luck. I've tried it with and without the #! that preceedes is.
Can anyone tell me what I'm doing wrong? Is there any hope for me?
Thanks in advance for any help.
Jim
One common mistake beginners make with perl (and some old-timers, too), is that they upload a file in MS-DOS format to a Unix/Linux box. DOS text file format ends each line with a carriage return (Ctrl-M - ASCII 13) and line feed (Ctrl-J - ASCII 10), while Unix text file format ends each line of text with only a line feed. Perl chokes on those carriage returns.
The other common mistake people make is setting file permissions incorrectly. You should have scripts set at 755 and data files set at 644.
You also get an "internal server error" if your script won't compile. A simple typo can do this.
If you run your perl script from the command line, you will get a more informative error message. It's especially handy when you have a typo, because it tells you what line to look at.
Suggestion: You won't eliminate every bug this way, but as a preliminary, it is a lot faster to check out your scripts on your own PC than to keep FTPing edits and telnetting to run the scripts. It's very easy to download and install the ActiveState Perl distribution (free!) on your PC, and it comes with HTMLized documentation for not only Perl but many common Perl modules as well.
If you continue to have problems, please post with more details...
We've known about Homograph attacks since the 1990s -- so you may be wondering why I'm writing about them in 2018. Don't worry, I'll get to that. In this post, we'll explore the history of homograph attacks, and why, like many of the internet’s problems that stem from path dependence, it seems like they just won’t ever go away.
Origins of my Interest
I first got interested in homograph attacks a few months back when I was working through tickets for Kickstarter's HackerOne program. HackerOne is a "bug bounty program", or, an invitation for hackers and security researchers to find vulnerabilities in our site in exchange for money.
When I was looking through the tickets, one caught my attention. It wasn't a particularly high risk vulnerability, but I didn't understand a lot of words in the ticket, so of course I was interested. The hacker was concerned about Kickstarter's profile pages. (We often get reports about our profile and project pages.)
Profile pages often create vulnerabilities for websites. Whenever you are in the position to “host” someone on your site, you are going to have to think about the ways they’ll abuse that legitimacy you give them. Our hacker was specifically concerned about a field that allows our users to add user-urls or "websites" to their profile.
They thought this section could be used in a homograph attack. To which I was like, what the heck is a homograph attack? And that question led me down a rabbit hole of international internet governance, handfuls of RFCs, and a decades-old debate about the global nature of the internet.
Internet Corporation for Assigned Names and Numbers (ICANN)
We have to start with ICANN, the main international internet body in charge in this story. ICANN makes all the rules about what can and cannot be a domain name (along with performing the technical maintenance of the DNS root zone registries and maintaining the namespaces of the Internet).
For example, say you go to Namecheap to register "loganisthemostawesome.com". Namecheap uses the “extensible provisioning protocol” to verify your name with Verisign. Verisign is the organization that manages the registry for the “.com” gTLD. Verisign checks the ICANN rules and regulations for your registration attempt, tells Namecheap the result, and Namecheap tells me if I can register "loganisthemostawesome.com". Spoilers: I can!
This is great. But I primarily speak English and I use ASCII for all my awesome businesses on the internet. What happens to all those other languages that can’t be expressed in a script compatible with ASCII?
Version 1 of Internationalized Domain Names
ICANN attempted to answer this question when they proposed and implemented IDNs as a standard protocol for domain names in the late 90s. They wanted a more global internet so they opened up domains to a variety of unicode represented scripts.
What's a script? A script is a collection of letters/signs for a single system. For example, Latin is a script that supports many languages, whereas a script like Kanji is one of the scripts supporting the Japanese language. Scripts can support many languages, and languages can be made of multiple scripts. ICANN keeps tables of all the Unicode characters it associates with any given script.
This is even better now! Through IDNs, ICANN has given us the ability to express internet communities across many scripts. However, there was one important requirement. ICANN’s Domain Name System, which performs a lookup service to translate user-friendly names into network addresses for locating Internet resources, is restricted in practice to the use of ASCII characters.
Punycode
Thus ICANN turned to Punycode. Punycode is just puny unicode. Bootstring is the algorithm that translates names written in language-native scripts (unicode) into an ASCII text representation that is compatible with the Domain Name System (punycode).
For example, take this fictional domain name (because we still can't have emojis in gTLDs 😭):
hi👋friends💖🗣.com
If you put this in your browser, the real lookup against the Domain Name System would have to use the punycode equivalent:
xn--hifriends-mq85h1xad5j.com
So, problems solved. We have a way to use domain names in unicode scripts that represent the full global reach of the internet and can start handing out IDNs. Great! What could go wrong?
Homographs
Well, things aren’t always as they seem. And this is where homographs and homoglyphs come in.
A homograph refers to multiple things that look or seem the same, but have different meanings. We have many of these in English, for example “lighter” could refer to the fire starter or the comparative adjective.
The problem when it comes to IDNs is that homoglyphs exist between scripts as well, with many of the Latin letters having copies in other scripts, like Greek or Cyrillic.
Example of lookalikes from homoglyphs.net:
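You can verify for yourself that these lookalikes are entirely different code points. In Python, for example:

```python
import unicodedata

latin_a = "a"          # U+0061
cyrillic_a = "\u0430"  # U+0430, renders identically to "a" in most fonts

print(unicodedata.name(latin_a))     # LATIN SMALL LETTER A
print(unicodedata.name(cyrillic_a))  # CYRILLIC SMALL LETTER A
print(latin_a == cyrillic_a)         # False - same glyph, different character
```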
Let's look at an example of a domain name.
washingtonpost.com
vs
wаshingtonpost.com
Can you tell the difference? Well, let's translate both of these to purely ASCII:
washingtonpost.com
vs
xn--wshingtonpost-w1k.com
Uh oh, these definitely aren't the same. However, user-agents would make them appear the same in a browser, in order to make the punycode user-friendly. But in reality, the first "a" in the fake-WaPo is really a Cyrillic character. When translated to punycode we can see the remaining ASCII characters, "wshingtonpost", and then a key signifying the Cyrillic a, "w1k".
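You can reproduce this translation with Python's built-in idna codec (it implements the older IDNA 2003 rules; IDNA 2008 behaviour requires the third-party idna package):

```python
# The built-in "idna" codec performs the unicode -> punycode translation.
fake = "w\u0430shingtonpost.com"   # second letter is CYRILLIC SMALL LETTER A

print(fake.encode("idna"))
# b'xn--wshingtonpost-w1k.com'

# A pure-ASCII name passes through without the xn-- prefix:
print("washingtonpost.com".encode("idna"))
# b'washingtonpost.com'
```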
This presented ICANN with a big problem. You can clearly see how these may be used in phishing attacks when user-agents interpret both Washington Post's as homographs, making them look exactly same. So what was ICANN to do?
Internationalized Domain Names Version 2 & 3
By 2005, ICANN had figured out a solution. They told gTLD registrars they had to restrict mixed scripts. Every single registered domain had to have a "label" on it to indicate the single pure script that the domain name would use to support its language. Today, if you went and tried to register our copy-cat Washington Post at
xn--wshingtonpost-w1k.com, you would get an error. Note: There were a few exceptions made, however, for languages that need to be mixed script, like Japanese.
Problem fixed, right? Well, while mixed scripts are not allowed, pure scripts are still perfectly fine according to ICANN's guidelines. Thus, we still have a problem. What about pure scripts in Cyrillic or Greek alphabets that look like the Latin characters? How many of those could there be?
Proof of Concept
Well, when I was talking to my friend @frewsxcv about homograph attacks, he had the great idea to make a script to find susceptible urls for the attack. So I made a homograph attack detector that:
- Takes the top 1 million websites
- For each domain, checks whether its letters are confusable with Latin letters or decimal digits
- Checks to see if the punycode url for that domain is registered through a WHOIS lookup
- Returns all the available domains we could register
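A minimal sketch of steps 2 and 3 could look like the following. The lookalike table here is deliberately tiny (a real tool would use the full Unicode confusables data), and the WHOIS lookup is left out so only the mapping logic is shown:

```python
# For each ASCII domain label, try to build a pure-Cyrillic lookalike.
# Only a handful of confusables are listed; the real finder needs a much
# larger table derived from the Unicode confusables data.
CYRILLIC_LOOKALIKES = {
    "a": "\u0430",  # CYRILLIC SMALL LETTER A
    "c": "\u0441",  # CYRILLIC SMALL LETTER ES
    "e": "\u0435",  # CYRILLIC SMALL LETTER IE
    "o": "\u043e",  # CYRILLIC SMALL LETTER O
    "p": "\u0440",  # CYRILLIC SMALL LETTER ER
    "y": "\u0443",  # CYRILLIC SMALL LETTER U
    "x": "\u0445",  # CYRILLIC SMALL LETTER HA
    "l": "\u04cf",  # CYRILLIC LETTER PALOCHKA
}

def cyrillic_lookalike(label):
    """Return a pure-Cyrillic homograph of label, or None if some letter
    has no Cyrillic twin (mixed-script names can no longer be registered)."""
    try:
        return "".join(CYRILLIC_LOOKALIKES[ch] for ch in label)
    except KeyError:
        return None

print(cyrillic_lookalike("paypal"))  # раураӏ - every letter has a twin
print(cyrillic_lookalike("google"))  # None - 'g' has no Cyrillic twin

# Step 3 would then run a WHOIS query on the punycode form of each
# candidate (cyrillic_lookalike(label) + ".com") to see if it is free.
```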
A lot of the URLs are a little off looking with the Cyrillic (also a lot of the top 1 million websites are porn), but we found some interesting ones you could register.
For example, here's my personal favorite. In both Firefox and Chrome, visit:
Here's what they look like in those browsers.
Firefox:
Chrome:
Pretty cool! In Firefox, it totally looks like the official PayPal in the address bar! However, in Chrome, it resolves to punycode. Why is that? 🤔
User-Agents & Their Internationalized Domain Names Display Algorithms
It is because Chrome and Mozilla use different Internationalized Domain Name display algorithms. Chrome's algorithm is much stricter and more complex than Mozilla's, and includes special logic to protect against homograph attacks. If Chrome sees that a domain on a gTLD is made up entirely of Cyrillic letters confusable with Latin ones, it shows punycode in the browser rather than the unicode characters. Chrome only changed this recently because of Xudong Zheng's 2017 proof-of-concept report.
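Chrome's actual spoof checker applies a long list of heuristics, but the specific rule described here can be sketched in a few lines (a simplification for illustration, not Chrome's real code):

```python
# Simplified sketch of Chrome's rule: if every character of a non-ASCII
# label is a Cyrillic letter confusable with a Latin one, the browser
# should fall back to displaying punycode. Chrome's real IDN spoof
# checker applies many more heuristics than this.
CONFUSABLE_CYRILLIC = set("\u0430\u0441\u0435\u043e\u0440\u0443\u0445\u04cf")

def should_show_punycode(label):
    """Decide whether a browser should display punycode for this label."""
    if label.isascii():
        return False  # ordinary ASCII names are shown as-is
    return all(ch in CONFUSABLE_CYRILLIC for ch in label)

fake_paypal = "\u0440\u0430\u0443\u0440\u0430\u04cf"    # раураӏ
real_cyrillic = "\u0440\u043e\u0441\u0441\u0438\u044f"  # россия

print(should_show_punycode("paypal"))       # False - plain ASCII
print(should_show_punycode(fake_paypal))    # True  - wholly confusable
print(should_show_punycode(real_cyrillic))  # False - genuine Cyrillic word
```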
Firefox, on the other hand, still shows the full URL in its intended script, even if it's confusable with Latin characters. I want to point out that Firefox allows you to change your settings to always show punycode in the Browser, but if you often use sites that aren't ASCII domains, this can be pretty inaccessible.
So, what's next?
So what, now, is our responsibility as application developers and maintainers if we think someone might use our site to phish people using a homograph? I can see a couple paths forward:
- Advocate to Mozilla and other user-agents to make sure to change their algorithms to protect users.
- Advocate that ICANN changes its rules around registering domains with Latin confusable characters.
- Implement our own display algorithms. This is what we ended up doing at Kickstarter. We used Google's open-source algorithm and show a warning if it's possible that the url shown on the page is a homograph for another url.
- Finally, we could just register these domains like @frewsxcv and I did with PayPal so that they aren't able to be used maliciously. Possibly, if we are part of an organization with a susceptible domain, we should just register it.
To summarize, this is a hard problem! That's why it's been around for two decades. And that is fundamentally what I find so interesting about the issues surfaced by this attack. I personally think ICANN did the right thing in allowing IDNs in various scripts. The internet should be more accessible to all.
I like Chrome's statement in support of their display algorithm, however, which nicely summarizes the tradeoffs as play:
We want to prevent confusion, while ensuring that users across languages have a great experience in Chrome. Displaying either punycode or a visible security warning on too wide of a set of URLs would hurt web usability for people around the world.
The internet is full of these tradeoffs around accessibility versus security. As users and maintainers of this wonderful place, I find conversations like these to be one of the best parts of building our world together.
Now, we just gotta get some emoji support.
Thanks for reading! 🌍💖🎉🙌🌏
Resources
Background
- Wikipedia on Homograph Attacks
- Wikipedia on IDNs
- Plagiarism Detection in Texts Obfuscated with Homoglyphs
- A Collective Intelligence Approach to Detecting IDN Phishing by Shian-Shyong Tseng, Ai-Chin Lu, Ching-Heng Ku, and Guang-Gang Geng
- Exposing Homograph Obfuscation Intentions by Coloring Unicode Strings by Liu Wenyin, Anthony Y. Fu, and Xiaotie Deng
- Phishing with Unicode Domains by Xudong Zheng
- The Homograph Attack by Evgeniy Gabrilovich and Alex Gontmakher NOTE: The original paper!
- Cutting through the Confusion: A Measurement Study of Homograph Attacks by Tobias Holgers, David E. Watson, and Steven D. Gribble
- Assessment of Internationalised Domain Name Homograph Attack Mitigation by Peter Hannay and Christopher Bolan
- Multilingual web sites: Internationalized Domain Name homograph attacks by Johnny Al Helou and Scott Tilley
- IDN Homograph Attack Potential Impact Analysis by @jsidrach
Browser policies
- Chrome's IDN Display Algorithm
- Mozilla's IDN Display Algorithm
- UTC Mixed Script Detection Security Mechanisms
- Chrome's IDN Spoof Checker
- Bugzilla Open Bug on IDNs
Tools
- Homograph Attack Generator for Mixed Scripts NOTE: It is no longer possible to register mixed script domain names.
- Homograph Attack Finder for Pure Cyrillic Scripts
- Homograph Attack Finder + WHOIS lookup for Pure Cyrillic Scripts
- Homoglyph Dictionary
- Puncode converter
ICANN CFPs and Guidelines
- 2005 IDN Version 2.0 Guidelines
- ICANN 2005 RFC Announcement for Version 2.0 of IDN Guidelines
- IDNA2008 Version 2.2 draft
- 2011 IDN Version 3.0 Guidelines
ICANN, Verisign, and the Domain Registration Process
- Wikipedia for TLD. Each TLD has its own Registry that manages it and defines its IDN rules.
- Wikipedia for Domain Name Registry, like Verisign
- Wikipedia for Domain Name Registrar, like Namecheap, Godaddy, or Gandi.net
- ICANN Accreditation and Verisign Certification for distributing .com domains
- Wikipedia for the Extensible Provisioning Protocol, which is used when a user on a registrar requests a .com domain. The registrar uses the EPP protocol to communicate with Verisign to register the domain.
- Verisign's IDN Policy. Verisign requires that you specify a three-letter language tag associated with the domain upon registration. This tag determines which character scripts you can use in the domain. Presumably the language tag for https://аррӏе.com/ (Cyrillic) is 'RUS' or 'UKR'.
- PIR, manager of .org TLDs, IDN rules
Misc Security related to Domains
Homograph Major Site Copy-cat Examples
- http://аоӏ.com/
- https://раураӏ.com/
- https://аррӏе.com/
- спп.com/
Discussion (30)
+1 for the sheer body of research attached to this post :)
Firefox users: you can go to about:config and switch network.IDN_show_punycode to true.
Yep! Unfortunately this always shows punycode for all IDNs not just malicious ones. Wish they'd come up with a solution as a default for just the potentially malicious ones like Chrome did!
Or they could show it like аypal.com/ (punycode there)
Yes! This is similar to what IE does with IDNs, by showing an informational alert that you're on one as a pop up. (Not sure which IE version does this). Some have suggested color coding non-ASCII text as well. Lots of potential solutions 😊
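For anyone curious what that punycode form actually looks like: it can be produced with Python's built-in punycode codec. A rough sketch only — this is NOT Chrome's real display algorithm (which uses per-script rules and a spoof checker); it just flags any hostname label that needs encoding at all:

```python
# Sketch only -- not Chrome's IDN display algorithm, just the raw encoding.
def to_ace(label):
    """Return the ASCII-Compatible Encoding of one hostname label."""
    if all(ord(c) < 128 for c in label):
        return label                      # plain ASCII, no xn-- prefix needed
    return "xn--" + label.encode("punycode").decode("ascii")

def needs_punycode(hostname):
    """True if any label of the hostname is non-ASCII."""
    return any(to_ace(label) != label for label in hostname.split("."))

cyrillic_apple = "\u0430\u0440\u0440\u04cf\u0435.com"   # renders as an apple.com lookalike
print(to_ace(cyrillic_apple.split(".")[0]))             # the xn-- form a browser can show
print(needs_punycode("apple.com"))                      # False
print(needs_punycode(cyrillic_apple))                   # True
```

A site that renders user-supplied links could run something like `needs_punycode()` and show the ACE form (or a warning) for flagged hosts — essentially the Firefox about:config behavior, applied selectively.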
Interesting. I've never heard of homograph attacks before.
I learned quite a bit. Thank you!
Wow, that is all super interesting.
Great post,
also an interesting tidbit with Firefox is that it suggests the real PayPal in the link:
As I was writing this I realized you put that icon there. Awesome touch! Definitely fooled me 🙈
If you published this as an npm package (e.g. sanitizeHomograph(url)) then all of us could use it to sanitize URLs we display on profile pages.
Kickstarter is about to publish the ruby code as a gem! Would be down to do in js as well 😊
sorry this took a while! github.com/kickstarter/ruby-homogr...
When punycode first came out Firefox would only display the unicode version on a whitelisted set of TLDs. The rule, if I recall, was that a registrar must have published a policy on how they avoid the registration of homographs. This meant, for example, that .de would be okay since the registrar policy was limited script, but .com would always show punycode since it was a free-for-all.
I kind of think this is registrar problem. The registration of homographs on common script characters should just be rejected.
Great proposal! I think, based on my reading of ICANN's meeting minutes and IDN RFCs, that as an international organization they are worried that limiting some scripts that support non-ASCII languages would be an overreach in favor of English speakers and Latin script. They are taking time to make sure that whatever decision they make doesn't over-exclude non-latin-language speakers. (And in the meantime hoping the Browsers just do this for them 😉.) Turns out internet governance is just as slow-paced as any other kind of governance.
This is a dang good post.
Super interesting.
Awesome article. One small error - the past tense of "to lead" is "led", not "lead".
Ha! English is hard. I'll change. Thank you for pointing that out!
This was super informative! Anyway, what about requiring a human moderator to double check links with punycode in them? Ie show the warning until the moderator has had a chance to look at it and confirm it's not a homograph attack. I don't know how much of a burden that would be, but if there aren't that many punycode URLs, then the amount of work they'd need to do could be very low. And if the cost does turn out to be high, you might be able to use Mechanical Turk.
Thanks, a very interesting article.
Last year I had fun with Apple's Safari and Mail:
tᴏ.com vs to.com vs tᴑ.com
This ended up in CVE-2017-7106 and CVE-2017-7152
I wrote about this in
blog.to.com/phishing-with-an-apple...
Additionally I built a "live js injection reverse proxy" for demonstration purposes on https://ṫo.com
It's not dirty on your screen, it's a special T, and it works.
Nice! I love the blog post.
Woah, this is fascinating! I love that Chrome is actively combating this. Thanks for the well-researched article :D
10/10 article. Awesome research work!
Wow, great article! Thanks!
Top quality post. Learned a lot reading it. Thanks for writing!
Thanks for the post. Very well explained. Would be awesome if people could get their hands on the script you guys had written to do the site matching search!
+1 Great post, well documented and very instructive!
(PS: struggling every day with phishing e-mails using (in a dumb manner) this kind of cheat :P )
My Mozilla shows me the link behind any clickable text. Your argument is invalid. It shows me the false one. Nice article otherwise.
Sure, it shows it on links, but what about a redirect during a checkout process? If an injected script could change a redirect to paypal to actually go to a homograph'ed domain instead, it would be quite hard to spot.
Amazing post. Great work! | https://dev.to/logan/homographs-attack--5a1p | CC-MAIN-2022-05 | refinedweb | 2,887 | 65.42 |
How to Use Hex with Binary for C Programming
Hex is short for hexadecimal, which is the base 16 counting system. That's not as obtuse as it sounds, because it's easy to translate between base 16 (hex) and binary.
For example, the value 10110001 translates into B1 hexadecimal. Hexadecimal numbers include the letters A through F, representing decimal values 10 through 15, respectively. A B in hex is the decimal value 11. Letters are used because they occupy only one character space.
Here are the 16 hexadecimal values 0 through F and how they relate to four bits of data:

0000 = 0x0    0100 = 0x4    1000 = 0x8    1100 = 0xC
0001 = 0x1    0101 = 0x5    1001 = 0x9    1101 = 0xD
0010 = 0x2    0110 = 0x6    1010 = 0xA    1110 = 0xE
0011 = 0x3    0111 = 0x7    1011 = 0xB    1111 = 0xF
These hexadecimal values are prefixed with 0x. This prefix is common in C, although other programming languages may use different prefixes or a postfix.
The next hexadecimal value after 0xF is 0x10. Don’t read it as the number ten, but as “one zero hex.” It’s the value 16 in decimal (base 10). After that, hex keeps counting with 0x11, 0x12, and up through 0x1F and beyond.
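The article's examples are in C, but the counting above is easy to sanity-check in a Python shell (a quick aside; Python's format() and int() built-ins do the base conversions):

```python
# Counting past 0xF: ... 0xE, 0xF, 0x10, 0x11 ... ("one zero hex" is 16 decimal)
for n in (14, 15, 16, 17, 31):
    print(n, "->", "0x" + format(n, "X"))

# A byte splits into two 4-bit nibbles, each of which is one hex digit:
value = 0b10110100
hi, lo = value >> 4, value & 0xF
print(format(hi, "X") + format(lo, "X"))   # B4
assert int("B1", 16) == 0b10110001         # 0xB1 <-> 1011 0001
```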
Yes, and all of this is just as much fun as learning the ancient Egyptian counting symbols, so where will it get you?
When a programmer sees the binary value 10110100, he first splits it into two 4-bit nibbles: 1011 0100. Then he translates it into hex, 0xB4. The C programming language does the translation as well, as long as you use the %x or %X conversion characters, as shown in A Little Hex.
A LITTLE HEX
#include <stdio.h>

char *binbin(int n);

int main()
{
    int b,x;

    b = 21;
    for(x=0;x<8;x++)
    {
        printf("%s 0x%04X %4d\n",binbin(b),b,b);
        b<<=1;
    }
    return(0);
}

char *binbin(int n)
{
    static char bin[17];
    int x;

    for(x=0;x<16;x++)
    {
        bin[x] = n & 0x8000 ? '1' : '0';
        n <<= 1;
    }
    bin[x] = '\0';
    return(bin);
}
The code in A Little Hex displays a value in binary, hexadecimal, and decimal and then shifts that value to the left, displaying the new value. The hexadecimal display takes place in the printf() statement, which uses the %X conversion character.
Well, actually, the placeholder is %04X, which displays hex values using uppercase letters, four digits wide, and padded with zeros on the left as needed. The 0x prefix before the conversion character merely displays the output in standard C style.
Exercise 1: Start a new project using the code from A Little Hex. Build and run.
Exercise 2: Change the statement that initializes variable b to read this way:
b = 0x11;
Save that change, and build and run.
You can write hex values directly in your code. Prefix the values with 0x, followed by a valid hexadecimal number using either upper- or lowercase letters where required. | https://www.dummies.com/programming/c/how-to-use-hex-with-binary-for-c-programming/ | CC-MAIN-2019-35 | refinedweb | 462 | 72.76 |
Handling CTRL-C when trading live
Hi Everybody, dear @backtrader,
I’d like to handle CTRL-C (SIGINT) when trading live. In this case I’d like to trigger the runstop() method, and in the stop() method I want to save the statistics of my trading to a DB. I googled this and added the recommended code:
def next(self, frompre=False):
    try:
        ...  # business logic here
    except KeyboardInterrupt:
        self.log('CTRL-C detected - stopping')
        self.stopped = True
        self.env.runstop()
        return
However, this does not work apparently:
2017-05-16T12:27:15,
2017-05-16T12:27:20,
2017-05-16T12:27:25, Portfolio value: 99776.2408, cash: 97511.7685
2017-05-16T12:27:25, Position: USD_CZK, size: 4093.0000, price: 23.8976, value: 97812.9177
2017-05-16T12:27:25, Position: USD_MXN, size: 10487.0000, price: 18.7057, value: 196167.0954
2017-05-16T12:27:25, Position: GBP_ZAR, size: 5780.0000, price: 16.9514, value: 97978.9764
2017-05-16T12:27:25, Position: EUR_TRY, size: 24923.0000, price: 3.9304, value: 97956.6115
forrtl: error (200): program aborting due to control-C event
Image              PC                Routine        Line      Source
libifcoremd.dll    00007FFDA97643E4  Unknown        Unknown   Unknown
KERNELBASE.dll     00007FFDD2EE717D  Unknown        Unknown   Unknown
KERNEL32.DLL       00007FFDD5DB2774  Unknown        Unknown   Unknown
ntdll.dll          00007FFDD5FC0D61  Unknown        Unknown   Unknown
How can I achieve the goal to be able to stop a live trading and in that moment save everything to the DB?
Thanks and best regards,
Tamás
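One way to avoid the hard abort shown above (a sketch, not tested against live brokers; it assumes backtrader's documented Cerebro.runstop() and Python's standard signal module) is to install a SIGINT handler before cerebro.run(), so no KeyboardInterrupt is ever raised mid-next():

```python
import signal

class GracefulStopper:
    """Turn CTRL-C into a clean shutdown request instead of an exception."""
    def __init__(self, cerebro=None):
        self.cerebro = cerebro
        self.stop_requested = False
        signal.signal(signal.SIGINT, self._on_sigint)

    def _on_sigint(self, signum, frame):
        # No exception propagates, so nothing aborts mid-bar; the strategy's
        # stop() still runs normally and can persist statistics to the DB.
        self.stop_requested = True
        if self.cerebro is not None:
            self.cerebro.runstop()

# usage sketch:
#   stopper = GracefulStopper(cerebro)
#   cerebro.run()    # returns after CTRL-C, via runstop()
```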
#include <iostream>
#include <stdlib.h>
#include <string>
#include <conio.h>
#include <sstream>

using namespace std;

string order;
int direction,move=1;
int loc=0;
int x,y=11;

string ROOM[]={};

string shdesc[]={
    "You're in the north plaza.",
    "You're in the arena.",
    "You're in the equipment shop.",
    "You're in the temple.",
    "You're in the tavern.",
    "You're in the weapon shop.",
    "You're in the armor shop.",
    "You're in the guild hall.",
    "You're in the magic shop.",
    "You're in the south plaza.",
    "You're on the docks."
};

string lngdesc[]={
    "You are standing at the north end of a large open plaza surrounded by shade trees as well as assorted flowering shrubs and hedges whose scent wafts through the warm, dry air. To the west stands an ornate temple adorned with ornate stone and brass architecture. A quaint little shop lies to the northwest, and a dimly lit, smokey little tavern to the northeast. To the east lay the huge wrought iron gates of the arena. To the north is the town guild hall. The other half of the plaza lies to the south.",
    "You now stand inside the gates of the village's most noteworthy and notorious point of interest, the arena. The charred and stained stone walls and scattered bits of bone, armor, and weaponry lying among the debris on the packed earthen floor. A huge brass gong hangs at the center of the arena suspended on a framework made of the bones of some of the less fortunate of the arena's combatants. To the south is a wide stone staircase leading down to the dungeon gate. The exit through the arena gates is to the west.",
    "You are now inside the equipment shop, where one may purchase supplies and provisions essential to survival in the dungeons or the wilderness. The shop keeper sits on a tall stool behind a counter along the north wall, eyeing you warily. The exit is to the southeast.",
    "You are now in the main entry hall of a great temple. Huge black marble columns veined with platinum stretch to the ceiling some 100' overhead. The patterns in the tile floor are so ornate and complex as to be nearly impossible to follow or comprehend. Many black silk tapestries depicting various scenes of a religious or magical nature block your view into the many other areas of the temple. Blue robed priests and acolytes move here and there going about their daily duties. The exit is to the east.",
    "You are in the village tavern. The smoke from the many oil lamps decrease visibility to your immediate surroundings, and the wavering light they cast about the room leaves many shadows and unlit corners. There are several tables about the room, where the tavern's patrons eat, drink, and generally enjoy themselves. The bar stands along the north wall and a door to the kitchen lies to the northeast. A large, full length mirror adorns the west wall. The only visible exits lie up a steep wooden ladder near the rear of the room, and out into the north plaza to the southwest.",
    "You are now inside the weapon shop, where you may purchase various weapons to use for sparring in the arena or for personal defense from the denizens of the dungeons. The shop keeper sits on a high stool behind a tall counter along the south wall. The exit is to the west.",
    "You are now inside the armor shop, where you may acquire different sorts of armor and protective gear to guard yourself against the blows of your opponents in the arena, or from the various non-human foes to be found in the dungeons beneath the town. The shop keeper sits on a high stool behind a worn counter along the south wall, eyeing you warily. The exit is to the east.",
    "You have entered the town guild hall, where one may seek the training necessary to refine your combat and spell skills. This building is also a center of learning where the ancient art of sorcery is taught to those with the intelligence and skill to wield it effectively. Mages, wizards, and masters-at-arms rush about on errands or to and from training sessions. Ornately carved marble archways exit to the north and south, and a wide stone staircase leads downward.",
    "You're standing in a warm, dimly lit shop where various magical items are bought and sold by a shrewd mage in crimson robes with an air of power and complete confidence about him. He doesn't look like someone whose spells you'd like to be on the wrong end of. The sweet scent of incense drifts in from a workroom in the rear of the shop. The exit is to the north.",
    "You are standing at the south end of a large open plaza surrounded by shade trees as well as assorted flowering shrubs and hedges whose scent wafts through the warm, dry air. To the west stands the town's armor shop. The weapon shop lies to the east. There is a magic shop to the south. To the north lies the other half of the plaza and to the southwest lies the town's outer gate.",
    "You are standing on the docks. The water from the lake laps quietly against the docks making the peaceful scene on the lake all that more serene. An ornately carved marble archway exits through the town wall to the south into the guild hall."
};

int room [11][11]={
    {0,8,10,2,4,5,3,0,0,0,0},
    {0,0,0,1,0,0,0,0,0,0,0},
    {0,0,0,0,0,0,0,1,0,0,0},
    {0,0,0,1,0,0,0,0,0,0,0},
    {0,0,0,0,0,0,0,0,1,0,0},
    {0,0,0,10,0,0,0,0,0,0,0},
    {0,0,10,0,0,0,0,0,0,0,0},
    {0,11,1,0,0,0,0,0,0,0,0},
    {0,10,0,0,0,0,0,0,0,0,0},
    {0,1,9,6,7,0,0,0,0,0,0},
    {0,0,8,0,0,0,0,0,0,0,0}
};

int main(int argc, char *argv[])
{
    // Display short room desc, and prompt
    cout << shdesc[loc]<<endl;
    cout << ">";

    // Ask for user input
    cin > order;

    //Need a routine to convert input to lowercase here

    // Convert direction to a number (re-write routine later, make into function)
    if (order=="n"){
        cout << "You went north"<<endl;
        direction=1;
    };
    if (order=="s"){
        cout << "You went south"<<endl;
        direction=2;
    };
    if (order=="e"){
        cout << "You went east"<<endl;
        direction=3;
    };
    if (order=="w"){
        cout << "You went west"<<endl;
        direction=4;
    };
    if (order=="ne"){
        cout << "You went northeast"<<endl;
        direction=5;
    };
    if (order=="nw"){
        cout << "You went northwest"<<endl;
        direction=6;
    };
    if (order=="se"){
        cout << "You went southeast"<<endl;
        direction=7;
    };
    if (order=="sw"){
        cout << "You went southwest"<<endl;
        direction=8;
    };
    if (order=="u"){
        cout << "You went up"<<endl;
        direction=9;
    };
    if (order=="d"){
        cout << "You went down"<<endl;
        direction=10;
    };

    move=room[loc][direction];
    loc=move;
    cout << shdesc[loc]<<endl;

    system("PAUSE");
    return 0;
}
When I compile this I get:
No match for 'istream& > string&'
ok... I'm sure that's not all that big... These are the topics I'd like to hit.
1. How can I structure the data better?
2. What's the difference between static / dynamic arrays, and which one do I need here? (I was told I needed static by 1 person and dynamic by another)
3. How can I make a function and condense all those "if" statements?
4. Do I need all those "include" statements at the top?
Please keep in mind, I'm a truck driver, and though I've been interested in C++ for 6 weeks, I only get to my computer about an hour a week, so I'm a 6 hour c++ vet... grin | http://www.dreamincode.net/forums/topic/32903-structures-cin-and-functions/ | CC-MAIN-2016-26 | refinedweb | 1,337 | 80.21 |
The linear structure of an LFSR output bit stream makes it possible to distinguish LFSR output from that of a “perfect” generator of random bits; while this is undesirable in some applications that need a source of statistically-random bits, it is ideal for other applications, which we will be describing next.
In this and the next few articles, we will be discussing spread-spectrum techniques, which is where LFSRs really excel.
Note: I will caution the reader that I have no professional experience with applications of communication theory — unless you count CRCs and COBS, and neither of those are hardcore theoretical topics. So some experienced readers may notice discrepancies between the examples in these articles and what is used in real-world communications systems. I would appreciate it if you bring any such discrepancies to my attention so I can attempt to fix them. In these articles, however, I am less concerned with getting the practical details perfect than with trying to explain the underlying concepts.
Spread-Spectrum in a Nutshell
The essence of spread-spectrum techniques is to take some signal \( x(t) \) of energy \( E \) that occupies a bandwidth \( B \) and encode it evenly within a bandwidth \( K_S B \) as a new signal \( x_S(t) \) using some kind of reversible operation. The value \( K_S > 1 \) is the spreading factor or spreading ratio. There are two major advantages of this approach:
- it allows the power spectral density \( E/B \) of the original signal to be reduced to \( E/K_S B \)
- such an encoding is less sensitive to additive disturbance signals \( d(t) \)
A receiver can reverse the process by “despreading” the raw received signal \( x_S(t) + d(t) \) and then filtering the resulting signal within bandwidth \( B \) to attempt to recover the original signal \( x(t) \).
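To put a number on the first advantage: the spreading factor translates directly into a power-spectral-density reduction of \( 10\log_{10} K_S \) dB. A quick check (the values here are arbitrary, chosen purely for illustration):

```python
from math import log10

E = 1.0         # signal energy, arbitrary units
B = 230.4e3     # original bandwidth, Hz
K_S = 100       # spreading factor

psd_narrow = E / B
psd_spread = E / (K_S * B)
reduction_db = 10 * log10(psd_narrow / psd_spread)
print(reduction_db)    # 20 dB for K_S = 100
```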
There are several techniques for spread-spectrum; the two most common are frequency-hopping and direct-sequence spread spectrum (DSSS). This article focuses on DSSS.
Fourier Transforms for the Uninitiated
It occurs to me that some readers may not be familiar with Fourier transforms; I included some graphs in the last article that were frequency spectra, and these may be jumping the gun. So here’s a 10-minute introduction.
Suppose we have a sine wave \( x(t) = \cos (\omega t - \phi) \). There are a pair of mathematical identities based on Euler’s formula \( e^{j\theta} = \cos \theta + j\sin \theta \), that relate \( \cos \theta \) and \( \sin \theta \) to complex exponentials:
$$\begin{align} \cos\theta &= \frac{e^{j\theta} + e^{-j\theta}}{2} \cr \sin\theta &= \frac{e^{j\theta} - e^{-j\theta}}{2j} \end{align}$$
So we can write \( \cos \theta \) and \( \sin\theta \) as sums of complex exponentials. Do the math and you can figure it out for any phase angle:
$$\begin{align} \cos(\theta - \phi) &= \frac{e^{-j\phi}}{2}e^{j\theta} + \frac{e^{j\phi}}{2}e^{-j\theta} \end{align}$$
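These identities are easy to verify numerically with Python's cmath module (a quick sanity check for an arbitrary \( \theta \) and \( \phi \)):

```python
import cmath

# check: cos(theta - phi) == (e^{-j phi}/2) e^{j theta} + (e^{j phi}/2) e^{-j theta}
theta, phi = 0.7, 0.3
lhs = cmath.cos(theta - phi)
rhs = (cmath.exp(-1j*phi)/2)*cmath.exp(1j*theta) \
    + (cmath.exp( 1j*phi)/2)*cmath.exp(-1j*theta)
print(abs(lhs - rhs))   # ~0 (floating-point roundoff)
```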
The mathematician and physicist Jean-Baptiste Joseph Fourier showed that any periodic function could be decomposed into a weighted sum of sine waves (and therefore as a weighted sum of complex exponentials); the Fourier transform is a mathematical technique that does this. The symbolic Fourier transform, like that of the Laplace transform, is a method of turning eager young students into despondent refugees from a life of mathematical study, for example by making them memorize all sorts of tables where a rectangular waveform is transformed into a sinc function, and a triangle wave is \( \frac{8}{\pi^2}\sum\limits_{n\ {\rm odd}}\frac{(-1)^{(n-1)/2}}{n^2} \sin \frac{n\pi x}{L} \), and so on. Aside from tests in university classes, you can look that stuff up if you need it; otherwise you may go mad. Don’t worry about it. The numeric Fourier transform, on the other hand, is incredibly useful; you just take any periodic waveform with \( N \) discrete-time samples \( x[k] \) taken at time intervals \( \Delta t \), and plug it into the Discrete Fourier Transform (DFT) formula:
$$ X [k] = \sum\limits _ {i=0} ^ {N-1} x[i] e^{-2\pi jik/N}$$
In go the time-domain samples \( x[k] \). Out come the frequency-domain coefficients \( X[k] \); these are measured in steps \( \Delta f = \frac{1}{N\Delta t} \). In practice, a Fast Fourier Transform (FFT) is usually used, which is algebraically identical but uses a divide-and-conquer approach to compute in \( O(N \log N) \) time rather than the \( O(N^2) \) that you get from evaluating the DFT directly. The time-domain samples are usually real-valued (as opposed to complex-valued with nonzero imaginary components), and this causes the frequency-domain coefficients to have some symmetric properties.
Let’s go through some examples using np.fft.fft to compute the FFT; here’s a 32-point cosine waveform:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
pi = np.pi

N = 32
t = np.arange(N)*1.0/N
ax_time, ax_freq = show_fft(np.cos(2*pi*t), t, style=('.-','.'))
ax_time.set_ylabel('x(t)')
ax_time.set_xlabel('t')
ax_freq.set_ylabel('X(f)')
ax_freq.set_xlabel('f');
The top graph is the time-domain waveform \( x(t) = \cos 2\pi t \) sampled at \( N=32 \) subintervals; the bottom graph is the complex amplitude of the frequency domain \( X(f) \). For an integer number of periods \( P \) of a pure sine wave, the Fourier transform is zero except at two points: \( P \) and \( N-P \). Here we have one period, so the only nonzero coefficients of the Fourier transform are at \( f=1 \) and \( f=31 \). The amplitude is kind of funny; it’s 16, which is half the number of samples. You’d think, “Hey, the sine wave has amplitude 1, why doesn’t the frequency-domain component have amplitude 1? Why is it 16?” This is an artifact of the way the DFT is defined, and there are different definitions which can be used. Don’t worry about the amplitude in absolute terms; it’s just going to tell us the relative content at various frequencies. We only plot the complex amplitude here, which means we throw away phase information. Diehard DSP gurus will probably tell me that there’s some useful way to show phase information, but I don’t know of any, and when I use FFTs I usually just care about the amplitude, not the phase.
For real-valued time-domain waveforms, the upper half of the FFT is always symmetric with the lower half. This is kind of confusing, but there are a few ways of thinking about it. One is that the upper half of the FFT is really showing you negative frequencies (remember, \( \cos \omega t = \frac{e^{j\omega t} + e^{-j\omega t}}{2} \)), and it just wraps around in index like a circular buffer; we can display the FFT in a zero-centered way:
ax_time, ax_freq = show_fft(np.cos(2*pi*t), t, style=('.-','.'), upper_half_as_negative=True)
ax_time.set_ylabel('x(t)')
ax_time.set_xlabel('t')
ax_freq.set_ylabel('X(f)')
ax_freq.set_xlabel('f');
The other way to think about it is that we really do have a component at 31 times the fundamental frequency; this is an aliasing issue, and the Nyquist theorem says we can’t distinguish frequencies \( f \) from \( f_s-f \) where \( f_s \) is the sampling frequency — see for example the waveforms below — so the FFT will show both.
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(t,np.cos(31*2*pi*t),'.')
t2 = np.arange(2000)*1.0/2000
ax.plot(t2,np.cos(31*2*pi*t2),'-')
ax.set_xlim(0,1);
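You can confirm the aliasing numerically: at 32 samples per period, frequency 31 (= 32 − 1) is indistinguishable from frequency 1 at the sample points:

```python
import numpy as np

N = 32
t = np.arange(N) / N
# cos(31*2*pi*k/32) = cos(2*pi*k - 2*pi*k/32) = cos(2*pi*k/32)
print(np.allclose(np.cos(31*2*np.pi*t), np.cos(2*np.pi*t)))   # True
```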
In any case, because of the symmetry of real-valued waveforms, the upper half of the FFT doesn’t show any new information, so we’ll just hide it in all upcoming graphs.

def cos2pi(t):
    return np.cos(2*pi*t)

show_fft_real(cos2pi(t),t,style=('.-','.'));
There. A sine wave at the fundamental frequency shows up as a single nonzero coefficient at \( f=1 \).
How about twice the fundamental frequency?
show_fft_real(cos2pi(2*t),t,style=('.-','.'));
One nonzero coefficient at twice the fundamental frequency. Piece of cake! Now let’s get into some more interesting stuff; let’s try a sum of sine waves:
show_fft_real(cos2pi(t) - cos2pi(3*t)/6,t,style=('.-','.'), ylim=[-1,1]);
That’s two nonzero components, one at the fundamental and one with 1/6 the amplitude at the third harmonic.
It is often easier to show FFT output on a decibel scale, showing the amplitudes relative to some standard level:
show_fft_real(cos2pi(t) - cos2pi(3*t)/6,t,style=('.-','.'), dbref=32, ylim=[-1,1]);
Here’s the spectrum of a flattened sine wave:
N=128
t = np.arange(N)*1.0/N

def smod(x,n):
    return (((x/n)+0.5)%1-0.5)*n

def flatcos(t):
    y = (cos2pi(t) - cos2pi(3*t)/6) / (np.sqrt(3)/2)
    y[abs(smod(t,1.0)) < 1.0/12] = 1
    y[abs(smod(t-0.5,1.0)) < 1.0/12] = -1
    return y

show_fft_real(flatcos(t),t,style=('.-','.'), dbref=N);
This is a variant of a raised cosine, and it has all sorts of harmonics, but they roll off fairly quickly (below -60dB for 9th harmonics and above), and we get a nice pleasing time-domain waveform.
We can also look at multiple periods of this waveform (or any other waveform, for that matter). Below is a graph showing \( N_1 = 10 \) periods; the nonzero amplitudes in the frequency spectrum are exactly the same, but between each point of the frequency spectrum for 1 period, we’ve added another \( N_1-1 \) (=9) points with zero amplitude.
N1 = 10   # number of periods
N2 = 128  # number of samples per period
N = N1*N2
t = np.arange(N)*1.0/N2
show_fft_real(flatcos(t),t,style=('-','.'), dbref=N);
Okay, now we’ll do something a little more interesting and look at sending a bit pattern, in this case the ASCII encoding of the string “Hi” as a UART bitstream LSB-first, with one start bit, one stop bit, and two idle bits. We’ll look at the FFT of the resulting bitstream, with and without raised-cosine transitions:
import pandas as pd

def uart_bitstream(msg, Nsamp, idle_bits=2, smooth=True):
    """ Generate UART waveforms with Nsamp samples per bit of a message.
    Assumes one start bit, one stop bit, two idle bits. """
    bitstream = []
    b_one = np.ones(Nsamp)
    b_zero = b_one*-1
    bprev = 1
    if smooth:
        # construct some raised-cosine transitions
        t = np.arange(2.0*Nsamp)*0.5/Nsamp
        w = flatcos(t)
        b_one_to_zero = w[:Nsamp]
        b_zero_to_one = w[Nsamp:]
    def to_bitstream(msg):
        for c in msg:
            yield 0   # start bit
            c = ord(c)
            for bitnum in xrange(8):
                b = (c >> bitnum) & 1
                yield b
            yield 1   # stop bit
        for _ in xrange(idle_bits):
            yield 1
    for b in to_bitstream(msg):
        if smooth and bprev != b:
            # smooth transitions
            bitstream.append(b_zero_to_one if b == 1 else b_one_to_zero)
        else:
            bitstream.append(b_one if b == 1 else b_zero)
        bprev = b
    x = np.hstack(bitstream)
    t = np.arange(len(x))*1.0/Nsamp
    return pd.Series(x,t,name=msg)

def show_uart_bitstream_fft(msg, Nsamp=32, style='-', **kwargs):
    x = uart_bitstream(msg, Nsamp, **kwargs)
    axis_time, axis_freq = show_fft_real(x,dbref=len(x),style=style)
    axis_time.set_title('UART bitstream of message "%s"' % msg, fontsize=None)
    return axis_time, axis_freq

show_uart_bitstream_fft('Hi', Nsamp=16)
show_uart_bitstream_fft('Hi', Nsamp=16, smooth=False);
There are a few things to notice here.
- The bit period and fundamental frequency are both normalized to 1.0 here. So, for example, if we were using 100 bits per second, then the bit period would be 10ms and the fundamental frequency would be 100Hz, but the waveforms would look the same aside from scaling.
- Unlike all the waveforms we’ve looked at so far, these have mostly nonzero coefficients; these two spectra are kind of smeared out. That’s because they are rather nonperiodic in nature — although realize that the DFT and FFT treat their input as an infinitely-repeating signal.
- The frequency content in the interval \( f \in [0,1] \) represents the low-frequency content, and depends mostly on the bit pattern.
- The frequency content above \( f=1 \) represents the high-frequency content, and depends mostly on the transitioning between bits; with raised-cosine transitions, the frequency content drops by around 60dB (a voltage factor of 1000!) by \( f=4 \), whereas with sharp rectangular transitions the frequency content sticks around at high frequencies, like an unwanted relative who has stayed too long.
From here on out we’ll just use the raised-cosine waveform, and we’ll concentrate on the part of spectrum where \( f\leq 8 \); the higher frequency stuff is just more of the same, but attenuated. Below is the spectrum for the bitstream for the ASCII string “UUUUUUUU”. Why “U”? Because it has the ASCII code 85 = 0b01010101; in LSB-first form with start and stop bits it becomes 0101010101, and because of this nice alternating bit pattern it is often used in microcontrollers for automatic baud rate detection in UARTs.
axis_time, axis_freq = show_uart_bitstream_fft('UUUUUUUU')
axis_freq.set_xlim(0,8);
The spike at \( f=\frac{1}{2} \) is due to this strong regularity of alternating 0 and 1 bits; the only reason the spectrum doesn’t have complete gaps between these peaks is the idle time at the end of transmission, which tarnishes the periodicity of this waveform, and we get so-called spectral leakage between the peaks. If we get rid of the idle time then we’re left with a purely periodic waveform:
axis_time, axis_freq = show_uart_bitstream_fft('UUUUUUUU', idle_bits=0, style=('-','.'))
axis_freq.set_xlim(0,8);
More erratic bit patterns lead to blurring in the spectrum:
axis_time, axis_freq = show_uart_bitstream_fft(
    'The quick brown fox jumps over the lazy dog')
axis_freq.set_xlim(0,8);
Here we have some smaller peaks which are most likely due to regularity at the byte level, and they should be at increments of \( f=\frac{1}{10} \) since there are 10 bits transmitted per byte (1 start bit + 8 data bits + 1 stop bit).
We’ll wrap up our spectral menagerie by looking at three more waveforms:
- white Gaussian noise from Python's numpy.random.randn PRNG
- white Bernoulli process noise (-1s and +1s) from Python's numpy.random.randint PRNG
- pseudonoise from an LFSR
N = 511
t = np.arange(N)*1.0/N
np.random.seed(1234)

xwhitenoise = np.random.randn(N)
ax_time, ax_freq = show_fft_real(xwhitenoise,t,dbref=N)
ax_time.set_title('White Gaussian noise, N=%d' % N)

xwhitebnoise = np.random.randint(0,2,size=(N,))*2-1
ax_time, ax_freq = show_fft_real(xwhitebnoise,t,dbref=N)
ax_time.set_title('White Bernoulli noise, N=%d' % N)

from libgf2.gf2 import GF2QuotientRing, checkPeriod

H211 = GF2QuotientRing(0x211)
assert checkPeriod(H211,511) == 511

def lfsr_output(field, initial_state=1, nbits=None):
    n = field.degree
    if nbits is None:
        nbits = (1 << n) - 1
    u = initial_state
    for _ in xrange(nbits):
        yield (u >> (n-1)) & 1
        u = field.lshiftraw1(u)

H211bits = np.array(list(lfsr_output(H211)))*2-1
ax_time, ax_freq = show_fft_real(H211bits,t,dbref=N)
ax_time.set_title('LFSR output, poly=0x%x, N=%d' % (H211.coeffs, len(H211bits)));
Both white noise spectra consist of irregular samples throughout their entire frequency range; they contain frequency content at all frequencies, low and high.
The LFSR output looks vaguely like white Bernoulli noise in the time domain, but in the frequency domain the spectrum is completely uniform in amplitude, which is something we looked at in the last article. Completely uniform! This is not an error! Over its entire period, the LFSR frequency components do this elaborate dance in the complex plane, with different phase angles but all having the same amplitude. As I mentioned last time, it’s better than a perfect PRNG. Too perfect for some applications, but as we’ll see next, it’s just right for spread-spectrum applications.
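You can reproduce this perfectly flat spectrum without libgf2, using a smaller LFSR: the 4-bit primitive polynomial \( x^4 + x + 1 \) (period 15), written below as one standard Fibonacci-form recurrence rather than the Galois form used above:

```python
import numpy as np

# m-sequence from x^4 + x + 1: s[n] = s[n-3] XOR s[n-4]
s = [0, 0, 0, 1]
while len(s) < 15:
    s.append(s[-3] ^ s[-4])

x = 2*np.array(s) - 1              # map bits {0,1} -> {-1,+1}
mags = np.abs(np.fft.fft(x))
print(mags[0])                     # |DC| = |8 - 7| = 1 (eight +1s, seven -1s)
print(mags[1:].round(6))           # every other bin has the same magnitude, 4
```

The flatness follows from the m-sequence autocorrelation: \( R(0) = 15 \) and \( R(\tau) = -1 \) otherwise, so \( |X[k]|^2 = 16 \) for every nonzero bin.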
If we don’t sample the entire LFSR period, we get something that’s a little less regular:
H211bits = np.array(list(lfsr_output(H211, nbits=397)))*2-1
N = len(H211bits)
t = np.arange(N)*1.0/N
ax_time, ax_freq = show_fft_real(H211bits,t,dbref=N)
ax_time.set_title('LFSR output, poly=0x%x, N=%d' % (H211.coeffs, N));
Anyway, now we’re ready to move back to the topic of spread-spectrum.
Spread-Spectrum Fundamentals
I mentioned there are several types of spread spectrum in common use; the two most common are frequency-hopping and direct-sequence spread spectrum. With both techniques, the idea is to conceal a narrow-band signal within a wider frequency range used for transmission.
For example, suppose I have some UART bitstream at 115200 baud. Because of the Nyquist theorem, the minimum bandwidth needed to transmit is twice this rate, or 230.4kHz. (We could reduce the bandwidth by using a multilevel modulation that uses a digital-to-analog converter to encode multiple bits at once — for example QAM or QPSK — but we’re not going to pursue those kinds of schemes in this article.) If we just put out bits directly, they are centered at DC, and the required band is from -115200Hz to +115200Hz. But signals near DC don’t propagate well wirelessly, so unless we have a wire from point A to point B, this approach isn’t used. For RF transmission, we need to move the useful bandwidth to a higher frequency. This could be done with amplitude modulation (AM) or frequency modulation (FM), using a different frequency range — but in any case, it will need at least 230.4kHz to reconstruct the original UART waveform, so we could use a band centered at 900MHz, from 899.885MHz to 900.115MHz.
Frequency-hopping
We could also use a wider range of frequencies. Frequency-hopping moves the band around over time, so for example, perhaps we transmit from 899.885MHz to 900.115MHz for one millisecond, then shift up to 904.885MHz to 905.115MHz for one millisecond, and shift to another band, and so on. It’s like a game of whack-a-mole — there’s a large pattern of holes on a table, and within any short time interval, the mole pops up from some specific hole in the table, but it moves around so much that to interfere with it, you need to target the entire set of holes. Apparently this — frequency hopping, not whack-a-mole — was invented by Nikola Tesla in a patent filed in 1900, although the language is kind of cryptic and Tesla never used the words spread-spectrum or frequency-hopping. An early non-cryptic explanation of spread-spectrum may be found in Jonathan Zenneck’s 1915 book Wireless Telegraphy, in which he describes:
185. Methods for Preserving Secrecy of Messages. — The fact that tuning does not in itself suffice to guard the secrecy of messages is a great disadvantage for many purposes (as in army and navy work).
The interception of messages by stations other than those called, can be prevented to some extent by telegraphing so rapidly that such relays as are customarily used will not respond and only specially trained operators will be able to read the messages in the telephone. Furthermore the apparatus can be so arranged that the wave-length is easily and rapidly changed and then vary the wave-length in accordance with a prearranged program perhaps automatically. This method makes it very difficult for an uncalled listener to tune his receiver to the rapid variations, but it is of no avail against untuned, highly sensitive receivers.
The more popularly-cited (but not the first) instance of early frequency-hopping was a patent filed in 1941 by Hedy Kiesler Markey (aka the actress Hedy Lamarr) and George Antheil covering secret communications by
a transmitting station including means for generating and transmitting carrier waves of a plurality of frequencies, a first elongated record strip having differently characterized, longitudinally disposed recordings thereon, record-actuated means selectively responsive to different ones of said recordings for determining the frequency of said carrier waves, means for moving said strip past said record-actuated means whereby the carrier wave frequency is changed from time to time in accordance with the recordings on said strip
and a similar receiving station with another “record strip” which could be operated in synchronization with the transmitting station “to maintain the receiver tuned to the carrier frequency of the transmitter.” In particular the record strips were envisioned as being like piano rolls with 88 rows, and one application was for “the remote control of dirigible craft, such as torpedoes.”
Lamarr and Antheil’s scheme utilized the whack-a-mole concept of frequency hopping, but did not describe the mathematically rigorous frequency-spreading aspect. It is harder to envision a signal modulated with a time-changing frequency as one that is spread in overall bandwidth. Perhaps one can think of a time-lapse film of the mole moving so fast that it is a blur and appears to occupy all of the holes at once, although with a sort of ghostly transparency from being in each hole for only a short fraction of the time.
At any rate, that’s the idea of frequency-hopping.
Direct-sequence spread-spectrum
The other commonly-used method of spread-spectrum is direct-sequence spread-spectrum, or DSSS. Suppose we consider our UART signal again, at 115200 baud. Each bit takes approximately 8.68μs to transmit. Now, we use some chipping signal \( s(t) \) consisting of values that are either \( +1 \) or \( -1 \), changing much faster — say at 16 times the bit rate, or 1.843MHz. The trick here is that we multiply the two to produce a spread waveform:
$$ x_s(t) = x(t)s(t) $$
Then at the receiving end, we take the received signal and multiply it by the same signal \( s(t) \), so that if there is no noise, \( x_r(t) = x_s(t)s(t) = x(t)s^2(t) = x(t) \) because the chipping signal \( s(t) \) has the nice property that \( s^2(t) = 1 \). The faster rate (16 times the bit rate in this example) is called the “chip rate”, and the ratio is the spreading factor \( K_S = 16 \).
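Here's a quick numeric sanity check of that recovery property, with toy ±1 arrays standing in for \( x(t) \) and \( s(t) \) (these are made-up sequences, not the article's UART waveforms):

```python
import numpy as np

rng = np.random.RandomState(1)
x = np.repeat(rng.randint(0, 2, 8) * 2 - 1, 16)  # +/-1 data bits, 16 chips per bit
s = rng.randint(0, 2, len(x)) * 2 - 1            # +/-1 chipping sequence

x_spread = x * s             # transmitted: looks like noise
x_recovered = x_spread * s   # received and despread: s*s == 1 everywhere
assert (s * s == 1).all()
assert (x_recovered == x).all()
```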
Not any chipping signal will do; let’s use a square wave and see what happens:
msg = "Hi!" fig = plt.figure() x = uart_bitstream(msg, Nsamp=16, idle_bits=2) s = np.hstack([[-1,-1,1,1]*(len(x)/4)]) xs = x*s N = len(x) show_fft_real(x, fig=fig, dbref=N) ax_time, ax_freq = show_fft_real(xs, fig=fig,style=('.','-'),markersize=2.0,dbref=N); ax_freq.legend(['$x(t)$','$x_s(t)$'],labelspacing=0);
Here the bandwidth has been shifted from DC to be centered around \( 4f \), because we are multiplying by a periodic chipping sequence whose frequency is 4 times the bit rate. The process of multiplying by a carrier frequency is called heterodyning, and it shifts the frequency but leaves its bandwidth unchanged. Now, however, let’s use an LFSR sequence:
def transmit_uart_dsss(msg, field, Nsamp, smooth=False, init_state=1):
    x = uart_bitstream(msg, Nsamp=Nsamp, idle_bits=2, smooth=smooth)
    N = len(x)
    s = np.array(list(lfsr_output(field, initial_state=init_state, nbits=N)))*2-1
    return x, x*s

def show_dsss(msg, field, Nsamp=16, smooth=False):
    fig = plt.figure()
    x,xs = transmit_uart_dsss(msg, field, Nsamp=Nsamp, smooth=smooth)
    N = len(x)
    show_fft_real(x, fig=fig, dbref=N)
    ax_time, ax_freq = show_fft_real(xs, fig=fig, style=('.','-'), markersize=2.0, dbref=N)
    ax_freq.legend(['$x(t)$','$x_s(t)$'],labelspacing=0,loc='best')

H1053 = GF2QuotientRing(0x1053)
assert checkPeriod(H1053,4095) == 4095
show_dsss("Hi!", field=H1053, Nsamp=16, smooth=True)
Here the reduced spectral density isn’t immediately obvious, but let’s crank up the spreading factor to 128:
show_dsss("Hi!", field=H1053, Nsamp=128, smooth=True)
Now, in the real world, the DSSS modulation would happen before the edge-transition smoothing: just XOR the PN sequence with the desired data waveform. But the idea’s the same. We spread the spectrum out smoothly:
show_dsss("Hi!", field=H1053, Nsamp=128, smooth=False)
Now let’s model the receiver. We’re going to sidestep the problem of synchronizing the PN sequence as well as the bit boundaries, and pretend that our receiver already knows where both are, so that we can just use the following procedure:
- Take received waveform \( y(t) = x_{s}(t) + d(t) \) and multiply by the PN sequence \( s(t) \) to get \( x_r(t) = s(t)y(t) \)
- Integrate \( x_r(t) \) over each bit interval
- Output a 1 if the integration result is positive, otherwise output a 0
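The last two steps amount to an integrate-and-dump followed by a threshold. A minimal sketch in plain NumPy, using a made-up ±1 bit pattern plus Gaussian noise (not the article's UART framing):

```python
import numpy as np

rng = np.random.RandomState(0)
Nsamp = 16                                        # samples per bit
bits = rng.randint(0, 2, 20)                      # made-up data bits
x = np.repeat(bits * 2 - 1, Nsamp).astype(float)  # +/-1 waveform
y = x + 0.5 * rng.randn(len(x))                   # noisy received signal

per_bit = y.reshape(-1, Nsamp).mean(axis=1)       # integrate over each bit interval
decoded = (per_bit > 0).astype(int)               # threshold at zero
assert (decoded == bits).all()
```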
In reality, the process of synchronizing the PN sequence at the receive side is nontrivial. It’s one thing to have a nice clean signal where we can look at bits and try to find a stretch of them where the data bit didn’t change, so that \( y(t) = s(t) \), and then just use the state recovery techniques described in part VIII. But spread-spectrum is typically used in situations where the signal is buried among other competing signals, so some kind of fancy correlator gets used that figures out how to align things. (That’s my way of handwaving because I don’t really know an elegant way to get this done.) The UART side of things isn’t too fancy — just a state machine to sample multiple times per bit and synchronize to the data bit pattern by finding the start and stop bits — but since it’s not relevant to the concept at hand, I’ll just pretend we can skip that part.
With no noise, it’s kind of boring. Here’s an example:
def receive_uart_dsss(bitstream, field, Nsamp, initial_state=1, return_raw_bits=False):
    N = len(bitstream)
    if field is None:
        xr = bitstream
    else:
        s = np.array(list(lfsr_output(field, initial_state=initial_state, nbits=N)))*2-1
        xr = bitstream*s
    # Integrate over intervals:
    # - cumsum for cumulative sum
    # - decimate by Nsamp
    # - take differences
    # - skip first sample
    xbits = (xr.cumsum().iloc[::Nsamp].diff()/Nsamp).shift(-1).iloc[:-1]
    if return_raw_bits:
        return xbits
    # - test whether positive or negative, convert to 0 or 1
    xbits = (xbits > 0).values*1
    # now take each stretch of 10 bits,
    # grab the middle 8 to drop the start and stop bits,
    # and convert to an ASCII coded character
    msg = ''
    for k in xrange(0,10*(len(xbits)//10),10):
        bits = xbits[k+1:k+9]
        ccode = sum(bits[j]*(1<<j) for j in xrange(8))
        msg += chr(ccode)
    return msg

def error_count(msg1, msg2):
    errors = 0
    for c1,c2 in zip(msg1, msg2):
        if c1 != c2:
            errors += 1
    return errors

def signal_energy(signal, t=None):
    if t is None:
        t = signal.index
    return np.trapz(signal**2, t)

def error_count_msg(msg1, msg2):
    err = error_count(msg1, msg2)
    return err, "%d errors out of %d bytes" % (err, len(msg1))

msg1 = '''"The time has come," the Walrus said,
"To talk of many things:
Of shoes---and ships---and sealing-wax---
Of cabbages---and kings---
And why the sea is boiling hot---
And whether pigs have wings."'''

x,xs = transmit_uart_dsss(msg1, H1053, Nsamp=128)
msgr = receive_uart_dsss(xs, H1053, Nsamp=128)
print msgr
print error_count_msg(msg1, msgr)[1]
print "signal energy:", signal_energy(x)
"The time has come," the Walrus said, "To talk of many things: Of shoes---and ships---and sealing-wax--- Of cabbages---and kings--- And why the sea is boiling hot--- And whether pigs have wings." 0 errors out of 195 bytes signal energy: 1951.9921875
Round 1: Spread-spectrum vs. Noise
Now here’s the neat part; we’ll add some noise. It looks like Gaussian noise starts to cause some difficulties once its standard deviation reaches about 4-4.5 times the amplitude of the DSSS-modulated waveform. That’s a huge amount of noise! Here’s random noise with a standard deviation of 4.7 times the digital waveform’s amplitude:
np.random.seed(123)
disturbance1 = 4.7*np.random.randn(len(xs))
y1ss = xs + disturbance1
ndisp=160
plt.plot(y1ss[:ndisp])
plt.plot(xs[:ndisp])
plt.xlabel('bit index')
msgr = receive_uart_dsss(y1ss, H1053, Nsamp=128)
print msgr
print error_count_msg(msg1, msgr)[1]

def energy_msg(x, disturbance):
    Ex = signal_energy(x)
    Ed = signal_energy(disturbance, t=x.index)
    snr = 10*np.log10(Ex/Ed)
    msg = ( "signal energy: %7.1f\n"
           +"disturbance energy: %7.1f\n"
           +"SNR: %7.1fdB") % (Ex, Ed, snr)
    return snr, msg

print energy_msg(xs, disturbance1)[1]
"The time has`come," the Walrus said, "Po talk of many things: Ofshoes---anD ships-�-aod sealin'-wax--- Od cabbages---and kings--- And why the sea is boiling hot--- And whataer pigs (ave!w�ngs." 13 errors out of 195 bytes signal energy: 1952.0 disturbance energy: 42977.5 SNR: -13.4dB
In this and the following graphs, I’m only showing the first 160 bits so that it is a little bit easier to see — otherwise the full 1950 bits is one big blur.
The thing is, we can get about the same bit error rate without bothering to use spread-spectrum:
y1 = x + disturbance1
ndisp=160
msgr = receive_uart_dsss(y1, field=None, Nsamp=128)
plt.plot(y1[:ndisp])
plt.plot(x[:ndisp])
plt.xlabel('bit index')
print msgr
print error_count_msg(msg1, msgr)[1]
print energy_msg(x, disturbance1)[1]
"The time law come," the Walvus said, "To ta�k of many thyngs: Of sj�es---and ships---and sealing-wax--- Of cabbages---and0kings--- And why the sea is bgiling �/t--- And whgthws pigs have wings." 14 errors out of 195 bytes signal energy: 1952.0 disturbance energy: 42977.5 SNR: -13.4dB
The noise immunity here comes from the fact that we take the average value over the 128 samples per bit, in which case the raw received waveform looks like this:
ndisp = 160
Nsamp=128
msgr = receive_uart_dsss(y1, field=None, Nsamp=Nsamp, return_raw_bits=True)
x0 = x.iloc[::Nsamp]   # original bit pattern (1 sample per bit)
# received bits and original +/- 1 bits are opposite sign
error_indices = np.argwhere(msgr*x0<0)[:,0]
ii = error_indices[error_indices<ndisp]
for w in [msgr,x0]:
    plt.plot(w[:ndisp],drawstyle='steps-post',linewidth=0.8)
plt.plot(ii+0.5, msgr[ii],'.')
plt.xlabel('bit index');
The averaged noise is just enough to flip a few of the received bits (highlighted with red dots in the graph above). With oversampling by a factor of \( N_a \) and averaging, we reduce the variance of the noise by a factor of \( N_a \), which is a reduction in standard deviation by a factor of \( \sqrt{N_a} \), or \( \sqrt{128} \approx 11.31 \) in this case, bringing the effective noise level to \( 4.7/11.31 \approx 0.415 \) standard deviations relative to the bit level. We can see that if we take a histogram:
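That \( \sqrt{N_a} \) reduction is easy to confirm empirically (the 20,000 trials here are an arbitrary choice):

```python
import numpy as np

rng = np.random.RandomState(0)
Na = 128
noise = 4.7 * rng.randn(20000, Na)   # raw noise, std 4.7 as in the example above
averaged = noise.mean(axis=1)        # one boxcar-averaged value per 128 samples

# expect std ~= 4.7 / sqrt(128) ~= 0.415
assert abs(averaged.std() - 4.7 / np.sqrt(Na)) < 0.02
```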
from scipy.stats import norm

def show_error_histogram(err, bins=None, binsize=0.1):
    errmin = min(err)
    errmax = max(err)
    if bins is None:
        bins = np.arange(np.floor(errmin/binsize), np.ceil(errmax/binsize))*binsize
    err.hist(bins=bins, label='histogram')
    sample_std = err.std()
    print "sample standard deviation of error:", sample_std
    norm_distribution = norm.pdf(bins/sample_std)
    A = len(err)/sum(norm_distribution)
    errx = np.arange(0,1,0.005)*(errmax-errmin) + errmin
    plt.plot(errx, A*norm.pdf(errx/sample_std), label='expected distribution')
    plt.xlabel('signal error (received-original)')
    plt.ylabel('number of samples')
    plt.legend(loc='best',fontsize=10,labelspacing=0)

show_error_histogram(x0-msgr,binsize=0.1)
sample standard deviation of error: 0.426599071579
Round 2: Spread-spectrum vs. Disturbance
What’s more interesting is if we add in a disturbance signal that is not noise, for example a frequency sweep:
from scipy.integrate import cumtrapz

t = x.index
p = 120.0
triwave = np.abs(4*((t/p - 0.5) % 1) - 2)-1
trapz = np.clip(1.1*triwave,-1,1)
freq = 5.5+4.5*trapz
angle = cumtrapz(freq, t, initial=0)
disturbance2 = pd.Series(2.2*np.sin(angle),t)
ndisp = 160
disturbance2[:ndisp].plot()
x[:ndisp].plot()
plt.xlabel('bit index');
Now let’s see how this affects reception of the message:
y2 = x + disturbance2
ndisp=160
msgr = receive_uart_dsss(y2, field=None, Nsamp=128)
plt.plot(y2[:ndisp])
plt.plot(x[:ndisp])
plt.xlabel('bit index')
print msgr
print error_count_msg(msg1, msgr)[1]
print energy_msg(x, disturbance2)[1]
#�he time �1s�come," t�9!�alrus sa�91�"To talk�3(many thi� 1* Of shoew 9-and shipU 9-and sealU̹-wax--- OU��abbages--U��e kings--U��md why thU��ea is boiM��U hot--- A��Twhether p�NU have winfFV"
57 errors out of 195 bytes
signal energy: 1952.0
disturbance energy: 4723.7
SNR: -3.8dB
Almost one third of the received bytes have errors! Now let’s add the same disturbance to a spread-spectrum transmission of the same message:
y2ss = xs + disturbance2
ndisp=160
plt.plot(y2ss[:ndisp])
plt.plot(xs[:ndisp])
plt.xlabel('bit index')
msgr = receive_uart_dsss(y2ss, H1053, Nsamp=128)
print msgr
print error_count_msg(msg1, msgr)[1]
print energy_msg(xs, disturbance2)[1]
disturbance energy: 4723.7
SNR: -3.8dB
No errors! What’s going on?
The frequency-sweep disturbance is concentrated at the low end of the spectrum. For the bitstream transmitted without spread-spectrum, this disturbance occupies much of the same band as the signal. For the bitstream transmitted with spread-spectrum, it contaminates only a small portion of the transmitted frequency spectrum:
fig = plt.figure()
N = len(x0)
show_fft_real(xs, fig=fig, dbref=N,freq_only=True)
_, ax_freq = show_fft_real(disturbance2, fig=fig,dbref=N,freq_only=True)
ax_freq.set_ylim(-90,40)
ax_freq.legend(['DSSS-encoded signal','disturbance2'],fontsize=10)
ax_freq.set_title('Frequency spectra of raw received signal');
When we multiply by the chipping sequence,
- the DSSS-encoded signal is transformed back to the original bitstream, occupying only the low-frequency bandwidth
- the disturbance is transformed into a scrambled disturbance over the entire frequency range
The graph below shows this, and towards the lower end, the original signal peeks up above the noise of the scrambled disturbance:
N = len(x0)
s = np.array(list(lfsr_output(field=H1053, nbits=len(xs))))*2-1
x_processed = xs*s
disturbance2_processed = disturbance2*s
fig = plt.figure()
show_fft_real(x_processed, fig=fig, dbref=N,freq_only=True)
_, ax_freq = show_fft_real(disturbance2_processed, fig=fig, dbref=N,freq_only=True)
ax_freq.legend(['Original signal','scrambled disturbance2'], fontsize=10,loc='lower left')
ax_freq.set_ylim(-90,40)
ax_freq.set_title('Frequency spectra of despread signal');
In the real world we don’t receive our signal and noise separately; they are added together, but you can still see the signal peeking up above the noise at the low end:
fig = plt.figure()
_, ax_freq = show_fft_real(y2ss * s, fig=fig, dbref=N,freq_only=True)
ax_freq.set_ylim(-90,40)
ax_freq.set_title('Frequency spectra of despread signal');
After filtering, the signal does have some residual error, but not enough to flip a bit:
msgr = receive_uart_dsss(y2ss, field=H1053, Nsamp=Nsamp, return_raw_bits=True)
msgr[:ndisp].plot(drawstyle='steps-post')
x0[:ndisp].plot(drawstyle='steps-post')
plt.xlabel('bit index');
show_error_histogram(x0-msgr,binsize=0.02)
sample standard deviation of error: 0.140077253704
We should be able to double or maybe even triple the disturbance level and still avoid bit errors:
def signal_energy(signal, t=None):
    if t is None:
        t = signal.index
    return np.trapz(signal**2, t)

snrlist = []
for kd in np.arange(2.0,4.01,0.2):
    disturbance3 = kd*disturbance2
    y2ss2 = xs + disturbance3
    msgr = receive_uart_dsss(y2ss2, H1053, Nsamp=128)
    errct, errmsg = error_count_msg(msg1, msgr)
    snr, emsg = energy_msg(xs, disturbance3)
    snrlist.append((1.0*errct/len(msg1),snr))
    if errct < 30:
        # we don't need to see results for lots of errors
        print "disturbance amplitude: %.2f" % np.abs(disturbance3).max()
        print msgr
        print errmsg
        print emsg
        print ""
disturbance amplitude: 4.40
disturbance energy: 18894.7
SNR: -9.9dB

disturbance amplitude: 4.84
disturbance energy: 22862.6
SNR: -10.7dB

disturbance amplitude: 5.28
"The time hqs come," the Walrus said, "To talk of many things: Of shoes---and ships---and sealing-wax--- Of cabbages---and kings--- And why the sea is boiling hot--- And whether pigs have wings."
1 errors out of 195 bytes
signal energy: 1952.0
disturbance energy: 27208.4
SNR: -11.4dB

disturbance amplitude: 5.72
"The time hq� come," the Walrus said, "To talk of many things: Of shoes---and ships---and sealing-wax--- Of cabbages---and kings--- And why the sea is boiling hot--- And whether pigs have wings."
2 errors out of 195 bytes
signal energy: 1952.0
disturbance energy: 31932.1
SNR: -12.1dB

disturbance amplitude: 6.16
"The time hq� come," the Walrus said, "To talk of many things: Of sioes---and ships---and sealing-wax--- Of cabbages---and kings--- And why the sea is boiling hot--- And whether pigs have wings."
3 errors out of 195 bytes
signal energy: 1952.0
disturbance energy: 37033.6
SNR: -12.8dB

disturbance amplitude: 6.60
"The time hq� come," the Walrus said, "To talk of many things: Of sioes---and ships---and sealing-wax--- Of cabbages---and kings--- And why the sea!is boiling hot--- And whether pigs have wings."
4 errors out of 195 bytes
signal energy: 1952.0
disturbance energy: 42513.1
SNR: -13.4dB

disturbance amplitude: 7.04
"The time hq� come." the Walrus siid, "To talk of many things: Of sioes---anl ships---and sealing-wa�--- Of cabbages---and kings--- And why the seq!�s boiling hot--- And whether pigs have whngs."
11 errors out of 195 bytes
signal energy: 1952.0
disturbance energy: 48370.5
SNR: -13.9dB

disturbance amplitude: 7.48
"Tle time0hq� come." the Walrus siid, "To talk of many things: Of sioes-)-anl ships---and sealing-wa�--= Of c`bbages---and kings--= And why the seq!�s boilinf hot%-- And whether rigs hafe whngs."
21 errors out of 195 bytes
signal energy: 1952.0
disturbance energy: 54605.7
SNR: -14.5dB
The signal-to-noise ratio (SNR) at which we start to see errors is no coincidence; our filtering method of averaging each 128 samples reduces the energy of white noise by \( 10 \log_{10} 128 \approx 21 \)dB, for an SNR gain of 21dB. White Gaussian noise with unit amplitude has a standard deviation of 1, so it’s pretty easy to estimate a bit error rate as the fraction of samples that exceed ±1 for some large number of normally-distributed pseudorandom samples:
from scipy.stats import norm

np.random.seed(123)
npoints = 1000*1000
unit_noise = np.random.randn(npoints)

def db_to_amplitude(db):
    return 10**(db/20.0)

dbmin = -5
dbmax = 15
dbrange = np.arange(dbmin,dbmax)
dbrangefine = np.linspace(dbmin,dbmax,1000)
bit_error_rate = [np.count_nonzero((unit_noise/db_to_amplitude(db)>1))*1.0/npoints
                  for db in dbrange]
walrus_data = np.array(snrlist)
walrus_bit_error_rate = walrus_data[:,0]
walrus_snr = walrus_data[:,1] + 10*np.log10(128)   # account for 128:1 averaging
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.semilogy(dbrangefine, norm.cdf(-db_to_amplitude(dbrangefine)), '-', linewidth=0.5)
ax.semilogy(dbrange, bit_error_rate, '.')
ax.semilogy(walrus_snr, walrus_bit_error_rate,'x')
ax.grid('on')
ax.set_xlim(dbmin,dbmax)
ax.set_xlabel('SNR (dB)')
ax.set_ylabel('Bit error rate')
ax.legend(['Theoretical curve',
           'Random samples of Gaussian',
           'Walrus spread-spectrum data'],
          fontsize=10, loc='lower left', labelspacing=0);
This is a pretty typical graph for bit error rate. At low values of SNR there are lots of errors; at higher values of SNR the probability of error rapidly drops, and it’s hard to verify empirically without running into Black Swan effects, which is why that green dot at 13dB is below the curve and there are no dots beyond it. The experimental error rates for our spread-spectrum data are a little bit higher than for independent random Gaussian samples; perhaps that’s because the simple boxcar averaging of 128 values isn’t a very good filter to use, and maybe we should be using a filter with a sharper cutoff.
The theoretical curve here is based on the cumulative distribution function (CDF) of a Gaussian distribution; we’re basically looking at area of one tail of the probability distribution function (PDF) where outlying data is greater than a particular threshold, namely \( Q(a) = \Phi(-a) \) for some number of standard deviations \( a = 10^{\rm SNR/20} \) for SNR in dB. (\( Q(x) \) is the Q-function in statistics, found on some scientific calculators, and represents the probability that a unit Gaussian random variable is greater than \( x \). The astute reader will also note that sometimes when converting between dB and a raw signal, we use 20dB/decade and other times 10dB/decade; the conversion factor is 20dB/decade when dealing with amplitude, and 10dB/decade when dealing with power and energy, which are proportional to amplitude squared.)
(Note: originally this article used both tails of the PDF, which is incorrect. If we have a bit sequence consisting of \( \pm A \), then when the signal is \( +A \), a positive error will not result in a decoded bit flip, whereas a negative error will cause a decoded bit flip only if that error is below \( -A \). Similarly, when the signal is \( -A \), a negative error will not result in a decoded bit flip, whereas a positive error will cause a decoded bit flip only if that error is above \( +A \). Both tails are involved, but only one at a time depending on what the signal is, so the average error rate is \( Q(a) \).)
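So the predicted bit error rate at a given post-filtering SNR is a one-line computation; a minimal sketch using scipy.stats.norm.sf, which is exactly the Q-function:

```python
from scipy.stats import norm

def predicted_ber(snr_db):
    # a = bit amplitude in units of the noise standard deviation
    a = 10 ** (snr_db / 20.0)
    return norm.sf(a)   # Q(a): one tail of the unit Gaussian

# e.g. at 10dB SNR, a ~= 3.16 standard deviations, BER ~= 7.9e-4
ber = predicted_ber(10.0)
assert 6e-4 < ber < 9e-4
```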
x = np.arange(-5,5,0.001)
fig=plt.figure(figsize=(6,1.5))
ax=fig.add_subplot(1,1,1)
y = norm.pdf(x)
a = 1.8
ax.plot(x,y)
for whichtail in [-1,1]:
    xtail = x[x*whichtail > a]
    ytail = norm.pdf(xtail)
    ax.fill_between(xtail,0,ytail,facecolor='green',edgecolor='none')
ax.set_xlim(-5,5)
ax.set_xticks([-a,a]);
ax.set_yticks([])
ax.set_xticklabels(['-a','a']);
ax.set_title('Tails of a Gaussian PDF');
Round 3: Sharing is Caring — Spread-spectrum vs. Spread-Spectrum
So far we’ve looked at only one transmitter. Part of the benefit of spread-spectrum is that it makes it possible for more than one transmitter to share bandwidth; we just have to use non-interfering modulation sequences. Shown below is one example: we’ll encode the Walrus and Carpenter excerpt using DSSS with one LFSR output sequence, encode part of Jabberwocky using DSSS with another LFSR output sequence, add them together, and then decode both from the same transmitted signal:
msg2 = '''...'''   # part of Jabberwocky (excerpt elided)

# pad msg1 so they have the same length
msg1 += ' '*(len(msg2) - len(msg1))
Nsamp=128
x1,xs1 = transmit_uart_dsss(msg1, H1053, Nsamp=Nsamp)
# offset LFSR sequence by about half the period
state2 = H1053.lshiftraw(1,2048)
x2,xs2 = transmit_uart_dsss(msg2, H1053, Nsamp=Nsamp, init_state=state2)
# add both sequences
y = xs1 + xs2
msgr1 = receive_uart_dsss(y, H1053, Nsamp=Nsamp, initial_state=1)
print msgr1
print error_count_msg(msg1, msgr1)[1]
print ""
msgr2 = receive_uart_dsss(y, H1053, Nsamp=Nsamp, initial_state=state2)
print msgr2
print error_count_msg(msg2, msgr2)[1]
"The time has come," the Walrus said, "To talk of many things: Of shoes---and ships---and sealing-wax--- Of cabbages---and kings--- And why the sea is boiling hot--- And whether pigs have wings." 0 errors out of 263! 0 errors out of 263 bytes
No errors! This is because spread-spectrum makes each of the signals look like noise to the other one; here’s the received spectrum as seen by receiver 2:
s = np.array(list(lfsr_output(field=H1053, initial_state=state2, nbits=len(xs1))))*2-1
fig = plt.figure()
_, ax_freq = show_fft_real(y * s, fig=fig, dbref=len(xs1)/Nsamp, freq_only=True)
ax_freq.set_ylim(-90,40)
ax_freq.set_title('Frequency spectra of despread signal');
You can see the spectrum of message 2 peeking up above the “noise” of message 1 after it’s been despread by the LFSR sequence used for receiving message 2.
There is a catch here; we have to keep the chipping sequences uncorrelated. One way to do this is to use the same LFSR output sequence, shifted in time — as we did in this contrived example — but that’s not really practical, because it would require time synchronization among all transmitters. Without such time synchronization, there’s a small chance that two transmitters would use chipping sequences close enough to interfere, and the birthday paradox says that this chance goes up roughly as the square of the number of transmitters. In practice, other methods are used, and we’ll talk about one of them (Gold codes) in an upcoming article.
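The reason time-shifting works at all: the periodic cross-correlation between a ±1 m-sequence and any nonzero cyclic shift of itself is exactly −1, versus \( N \) at zero shift. A check with a toy period-7 m-sequence (my own small example, not the article's 0x1053 sequence):

```python
import numpy as np

# one period of the m-sequence for x^3 + x + 1, mapped to +/-1
s = np.array([0, 0, 1, 0, 1, 1, 1]) * 2 - 1

assert np.dot(s, s) == 7                        # zero shift: full correlation
for shift in range(1, 7):
    assert np.dot(s, np.roll(s, shift)) == -1   # any other shift: nearly orthogonal
```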
Where do we go from here? For basic direct-sequence spread-spectrum concepts, that’s about it. Next time we’ll be looking at applications of spread-spectrum for system identification.
Wrapup
With so many graphs — I think this is the article that sets the record — what we showed may be a little bit hazy. Here’s a summary:
Direct-sequence spread spectrum (DSSS) is implemented by modulating a data bitstream with a chipping signal of amplitude ±1 and a chip rate that is some factor \( K_S \) greater than the bandwidth of the data bitstream.
- With a digital data bitstream, XOR is used
- With an analog data bitstream, multiplication is used
- \( K_S \) is the spreading factor or spreading ratio.
- LFSR output bit sequences are sufficient for a chipping signal
- The chipping signal doesn’t have to be that of an LFSR; any sequence with amplitude ±1 and reasonably flat frequency spectrum can be used, although LFSR output sequences are very convenient
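The XOR-vs-multiplication equivalence in the first two bullets is exact: under the mapping \( 0 \to +1 \), \( 1 \to -1 \), XOR of bits corresponds to multiplication of the ±1 values. A one-liner to convince yourself:

```python
import numpy as np

rng = np.random.RandomState(42)
a = rng.randint(0, 2, 1000)
b = rng.randint(0, 2, 1000)

def to_pm1(bits):
    # map 0 -> +1, 1 -> -1; with this sign convention XOR becomes multiplication
    return 1 - 2 * bits

assert (to_pm1(a ^ b) == to_pm1(a) * to_pm1(b)).all()
```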
DSSS demodulation reverses the process
- The same chipping sequence must be used in a synchronized manner to demodulate the received bitstream
- DSSS demodulation essentially scrambles the spectrum of any disturbance
- Low-pass filtering to include all of the original signal bandwidth is used, to filter out noise or disturbance
- DSSS provides no benefit against wideband noise that covers the full spectrum of the modulated sequence (see Round 1)
- DSSS provides increased noise immunity to narrow-band disturbances, by spreading their energy (during demodulation) over the entire bandwidth of the modulated signal (see Round 2)
- DSSS can allow multiple transmitters to share transmission bandwidth when using chipping signals that are uncorrelated (see Round 3)
References
Jan Meel, Spread Spectrum (SS) — introduction, De Nayer Instituut, Sint-Katelijne-Waver, Belgium, 1999.
Raymond L. Pickholtz, Donald L. Schilling, Laurence B. Milstein, Theory of Spread-Spectrum Communications — A Tutorial, IEEE Transactions on Communications, vol. 30, no. 5, pp. 855-884, May 1982.
Both of these are excellent introductory references; of the two, Jan Meel’s has more pretty pictures, and Pickholtz et al. go deeper into the theory.
“Component Save System” is a save system for Unity. The plugin was originally made for the RPG Farming Kit, but has been expanded upon to work in multiple projects. I decided to make this plugin free because there are already a lot of paid alternatives for saving in Unity, such as Easy Save.
I felt that the inclusion of a free save system would be good for the ecosystem of Unity projects, making it possible to include a free save tool in sample games or kits. I also felt that Unity lacked an easy way of saving components, in keeping with the engine’s current modularity.
You can find the plugin on GitHub here. You can also download it through the Unity Asset Store using the link below. Do note that the plugin has become paid since v1.1; the version on GitHub lacks some features and architectural changes that the new version has.
How does it work?
- Once the plugin has been added to your project, a global instance called Save Master is instantiated before any scene is loaded. This makes the plugin plug and play out of the box.
- On startup it tries to load and create a save by default, based on a save slot. This can be turned off in a settings window.
- In order to save and load save games, it uses the Save Utility to get/create the appropriate files.
- Once the Savegame class has been retrieved or created, the Save Master will pass this reference to all currently subscribed Saveable components, as well as to all newly spawned objects that contain Saveable components.
- Once a Saveable component is informed (On Load), it will attempt to apply all data from the save game to each component that implements ISaveable. The identification is based on GUIDS. <SceneName><GameObjectName + GUID><ComponentName + GUID>.
- Once a Saveable component gets destroyed it will sent the current state of all the ISaveable components back to the save game.
- Once the Savemaster gets destroyed or the game gets paused (out of focus in Android) the current savegame will get written to the harddisk. You can also call SaveMaster.WriteActiveSaveToDisk() to force this, and you can also disable the auto write of a save in the configuration settings in case you want to have specific save spots in the game.
What does this system add in comparison to other save systems?
Most save systems do not handle the work of giving each object a unique identification. For instance, I want two enemies in my scene with a uniquely saved position, visibility and such. You could store a string like this with a popular saving system:
ASaveSystem.Set<string>("my saved content", "<scenename><objectname><scriptname>")
However, once you rename your object or scene name you already run into issues. Also, you would have to rewrite the same implementation in each script, each time to create a proper identification. It could easily turn into a mess, adding more work to turn it into a extendable system. In this case, most of that legwork has already been done for you. Only requiring you to write implementations for your components.
Add a “Saveable” component to an object and let the save system handle the rest for you.
Once you add a Saveable component to the root of a game object, it will fetch all components that implement ISaveable, and serve as a messener and identifier for the object and the components that implement the interface. The saveable is responsible for sending messages to all components that implement ISaveable, for instance once a save or load request has been made. Once a Saveable gets destroyed, the data gets automatically sent to the save game file, meaning that when you switch scenes or exit to the main menu everything is kept saved.
How to create your own saveable components
Creating a saveable component is quite straightforward, atleast if you use a IDE like Microsoft Visual Studio or Rider. Which make it very easy to implement interfaces into existing classes by pressing ALT+Enter.
For your component you have to add this using statement:
using Lowscope.Saving;
Afterwards you can add the ISaveable interface to your component. Which adds three methods to your class.
public void OnLoad(string data) { // Data has been given to this component // I can now do something with it! } public string OnSave() { return "The data that I want to save"; } public bool OnSaveCondition() { // Should I save, return true or false to potentially improve performance. return true; }
You may be wondering, how can I use a string to send and retrieve data and make it usable for my object? The easiest answer is to use JSON. A text based representation of data. Unity has a built-in tool to turn objects into a JSON string and back into an object. This is called the JSONUtility. Below is an example that uses this utility. This same example is also shown on the github repository.
using Lowscope.Saving; using UnityEngine; public class TestScript : MonoBehaviour, ISaveable { [System.Serializable] public class Stats { public string Name = "Test Name"; public int Experience = 100; public int Health = 50; } [SerializeField] private Stats stats; // Gets synced from the SaveMaster public void OnLoad(string data) { stats = JsonUtility.FromJson<Stats>(data); } // Send data to the Saveable component, then into the SaveGame (On request of the save master) // On autosave or when SaveMaster.WriteActiveSaveToDisk() is called public string OnSave() { return JsonUtility.ToJson(stats); } // In case we don't want to do the save process. // We can decide within the script if it is dirty or not, for performance. public bool OnSaveCondition() { return true; } }
The representation of this data will look like this once saved:
Ô�{ "metaData": { "gameVersion": 0, "creationDate": "11/26/2019 2:47:31 PM", "timePlayed": "00:00:20" }, "saveData": [ { "guid": "TestScene-TestScriptGameobject-d4dbf-TestScript-ac11c", "data": "{\"Name\":\"Test Name\",\"Experience\":100,\"Health\":50}" } ] }
Storing spawned objects. For instance pickups that you drop/pickup
Within the SaveMaster class there is a method called SpawnSavedPrefab(InstanceSource source, string filePath)
Currently there is only support of the Resources folder as a source. How it works is, each scene has a Instance Manager, which keeps track of what has been spawned, and the save identification it contained. Using the given path it will spawn it again using the tracked data.
I have a problem. Everything fine with “SaveVisibility” script, when object was enabled in the beginning. But when object disabled on start it not saved at all, no matter if it was enabled later. Save System just can’t see them. I don’t know how to fix it, I’m afraid to broke something.
Hello id0, for scripts to work in Unity the object needs to be active. Else no code can run on the game object.
This means it wont be possible to load data for objects while they haven’t been activated yet.
Could you tell me what it is that you want to achieve? I’ll probably be able to help you out. You can send me an email to info@low-scope.com
Ok I just change visibilitySaver like that. This seems to work. Maybe this will help someone else:
public bool hideOnStart = true;
bool isLoaded = false;
private bool isEnabled;
private bool firstTime = true;
private void OnEnable(){
if(!isLoaded){
if(hideOnStart){
if(firstTime){
gameObject.SetActive(false);
isEnabled = false;
firstTime = false;
}
}
else
isEnabled = true;
}
else
isEnabled = true;
}
public void OnLoad(string data){
isEnabled = (data == “1”);
gameObject.SetActive(isEnabled);
isLoaded = true;
}
Well, maybe some tweak in the script that checks if the object should be disabled after adding it to the save system? I have some level objects enabling by trigger, and they should remain the same after saving / loading. Yeah, and thanks for great save system 🙂
Hey, could you create a component or show me some code on how to do manuall saving?
I managed to do some kind of manual saving but it would still auto save becouse I hade to untoggle manual saving.
Thanks
Hi! Thanks for your interest in the Component Save System.
Regarding your question, you can toggle off auto saving by going to “Saving/Open Save Settings” in Unity.
You then untoggle “Auto Save On Exit” and “Auto Save On Slot Switch”. Afterwards, you can call this static method in your code to save
the game to disk: “SaveMaster.WriteActiveSaveToDisk();” Hope this helps!. If you have further questions. Feel free to email me at
info@low-scope.com
How do I save transform data? I’m trying to save spawned prefabs as children of a gameobject and when I restart the game, they don’t remain as child objects and instead are spawned directly into the game world.
Hello Josh,
It’s difficult to save to what transform an object has been attached to.
The instance saving system also hasn’t been designed for that. Mainly because there isn’t any way to get a “Key”
that is always the same for a transform, you could do it by using GameObject.Find() but that is also not very robust/performant.
I will add a new event called onSpawnedSavedInstance in the next update. This is a delegate that sends a scene struct and InstanceID.
This way you can at least fetch potential instances when they are spawned.
Hi there thanks for your awesome asset it made my life a lot easier
i want to add a manual save and load button to the UI but i haven’t had any luck yet if it’s not too much trouble would you mind helping me implement this ? thanks in advance
Well, I’m may start to cry after I send this comment.
Thank you so much for your work, all these years I left games made with unity, unfinished and unpublished, because I couldn’t design them to save the data only as int strings etc…
[I don’t know how this works but using my monkey brain, unity has a load scene functionality so…
Spawn game objects on the scene and record them when instantiation happens, then save the position, rot, references parent-child,script, etc..]
Anyway! I Love your work and thank you so much!!!!
Good evening, during a change of scene and a return to the original scene, the reference of recorded components lost their reference, how to do? Thanking you.
I’m assuming this happens because you have things that are dynamically created correct?
You can go around this by saving the state within the spawner of such objects, so that it remembers what to spawn and how
it is connected. Or you can use SaveMaster.SpawnSavedPrefab() if you want to have prefabs keep the state.
Firstly..Awesome asset
I have a chest open animation, how do I save the completed animation? When I re-enter the scene the chest is closed again
Thanks
Hello Col,
I assume you have a open and closed animation, correct?
If you open the animator, do you have a open and closed state?
What you want to do, is save the state of the chest, and apply that upon load.
So you would use this as save data:
[System.Serializeable]
private class SaveData
{
public bool opened;
{
You then send back this data using the OnSave() interface callback,
and during the onload you look if this is set to true. If it is true, you set your local variable to opened (if necessary).
And set the animation to the open state using Animator.Play()
Thanks for the quick response. This asset is so much better than the paid ones I have!
This is an AMAZING Asset that helps me to save a lot of time and is even free to use.
The background of my project is that I have a login system that users have to sign up from which directs them to the scene. If they leave the scene at any given time their location are supposed to be recorded and saved alongside their login information so that in the future, that individual is able to revisit their progress. To put it simply, I am trying to achieve something like the game Minecraft where each different user have differing spawn locations after they log in where their positions are based on their previous activity. The problem that I am facing is that I am unable to successfully save the different variables of the user inside their own user information.
I took a reference from this video for the login system.
It would be a great help if you guys can help me out on the problem.
Thanks in advance and keep up the great work!
I was just wondering how to manually load data instead of auto loading data. I cant figure out how (am tired so may have missed it). Great Asset!
Hey Alex, thank you so much for this. I started my game with your script and even after many versions, your asset is still working perfectly. There’s just one place where I am stuck. Currently to spawn any prefab it needs to be in resource folder right? How do I change it into my own custom folder?
Thank you in advance.
Hello Sharu, a new version of the Component Save System is getting launched soon.
binding.
Make sure to get it from the Asset Store, because the new version will become a paid version. (Existing users get it free)
The new version will allow you to add a new resource loader by providing a Func
This new version should be dropping in the upcoming weeks.
SaveMaster.AddPrefabResourceLocation("ExampleCustomResourceLoader", (id) =>
{
GameObject getResource = ObtainMyResourceMethodHere(id);
return getResource;
});
And calling it like this:
var spawnBomb = SaveMaster.SpawnSavedPrefab(InstanceSource.Custom, "ExplodingBomb", "ExampleCustomResourceLoader");
I have a problem with saving the prefabs. I get an error “object reference not set to an instance of an object” when i am trying to pick up the object.
Hello Maarten,
Could you reply me with a more specific error log?
You can send me a mail at info@low-scope.com
is it possible to instanciate a gameobject under a certain parent when loading
Currently this is only possible by listening to this event:
Sorry, this isn’t possible yet with the current version. The next update will make this a bit easier.
SaveMaster.OnSpawnedSavedInstance
The reason for this is, there isn’t an easy way to save what an object was previously parented to.
Unless there is code in place for the saveable to know what to parent to. Using the SaveMaster.OnSpawnedSavedInstance
event you can monitor for spawn events and use savedInstance.transform.SetParent(myParent) to set the parent after spawning.
Example of how you can implement the event:
private void Awake()
{
SaveMaster.OnSpawnedSavedInstance += SpawnedInstance;
}
private void OnDestroy()
{
// Never forget to unsubscribe
SaveMaster.OnSpawnedSavedInstance -= SpawnedInstance;
}
private void SpawnedInstance(Scene scene, SavedInstance savedInstance)
{
// Check if spawned instance is actually being spawned in the same
// Scene as parent, you can remove this if you like.
if (scene == this.gameObject.scene)
{
// Set the parent to this transform, could also be done for something else.
savedInstance.transform.SetParent(this.transform);
}
}
Really awesome asset. Thank you so much for providing this. I got a couple questions if you don’t mind. How do you stop it from auto loading? the function “Load default slot on start” doesn’t seem to be working for me. If I have an inventory system based on scriptable objects, can this save it? Or a set of bools that keep track of which dialogue has been completed? As you can probably tell, I’m trying to use it for a 3d adventure game (same mechanics as a point and click but 3d). Thanks again for your help and for the great asset.
That’s a bit odd, if you turn off the Load Default Slot On Start it shouldn’t load anything.
Unless you set SaveMaster.SetSlot() somewhere in code. I recommend you to watch the Youtube video to get an idea of how to
save individual components. It should help you enough to move forward. If you got further questions, please send me an email to info@low-scope.com
Hello, This asset is awesome.
But I have problem
When I take a pick up, then reload the scene I want to check if the pick up has been taken or not ?
please help me
There are multiple approaches for this, you could opt for using the SaveVisibility, so it never shows up again.
And having a variable stored in some kind of GameManager. You can check out the YouTube video, in this video I explain how to save
variables. You can also choose to save variables in each pickup and iterate over them / or send it to a singleton.
If you need more help, send me an email to info@low-scope.com
hi, i face a problem while using “Savevisibility” it works fine when the parent object is setActive true but it does not work when i turn off then parent object
Hey Abdul, do you still have issues with the latest version?
Apologies for my late response, please send me an email to info@low-scope.com if you still have issues. | https://low-scope.com/unity-plugin-free-save-system/ | CC-MAIN-2021-31 | refinedweb | 2,869 | 64.2 |
Code Focused
Extension methods bring together old and new ways of working with data, and open doors to new language opportunities.
LINQ will revolutionize the way Visual Studio developers think about and write code. At the most basic level, using LINQ-style queries in your code transitions your code from imperative- to declarative-style programming: you no longer say how things are done step-by-step, but instead move to stating what your goal is. LINQ also represents bigger, yet more subtle changes in the way we code. Delayed evaluation, functional programming, and language translation through expression trees are all part of the fabric of LINQ and significantly alter the landscape for VB and C#.
Consider this simple query that selects Customers with the last name "Smith":
Dim query = From c in Customers _
Where c.LastName = "Smith"
You might recall from my "Beautify Your Code with Extensions" article [Programming Techniques, VSM May 2007] that this query will call the Where extension method based on the most derived match for argument of the Where methods in scope. If the System.Linq namespace has been imported either at project level or at the start of the code file, and Customers is a List(Of Customer), the closest matching Where extension will be found in the System.Linq.Enumerable class.The System.Linq.Enumerable class contains all the LINQ extension methods for types defined as IEnumerable(Of T): The extensions include Any, All, Max, Min, Join, GroupBy, GroupJoin, OrderBy, ThenBy, Select, SelectMany, and Where -- to name a few. A query on a List(Of Customer) typically uses these extension methods. The Where method used in this case has this signature:
Public Shared Function Where(Of TSource) ( _
source As IEnumerable(Of TSource), _
predicate As Func(Of TSource, Boolean) _
) As IEnumerable(Of TSource)
The query is compiled to use the Where method of System.Linq.Enumerable:
Dim query = Customers.Where( _
Function(c As Customer) c.LastName = "Smith" )
Notice the Function(c As Customer) c.LastName = "Smith" part of the query expression. This is a lambda expression, also referred to as an inline function. In this case, the lambda expression is compiled to match the required signature of the predicate argument in the Where extension method.
The predicate argument is of the signature Func(Of TSource, Boolean), which is a concrete delegate signature based on the generic Funct(Of T, TResult). In this case, TResult is a Boolean so it means the function must return a Boolean.
Putting this all together, the Where method called would have this concrete signature:
Public Shared Function Where ( _
source As IEnumerable(Of Customer), _
predicate As Func(Of Customer, Boolean) _
) As IEnumerable(Of Customer)
An interesting feature of this approach is that even though you've defined the query to search through the LastName property, nothing has happened yet. This is because the lambda function is compiled as a delegate, which gets called only by the IEnumerator(Of T) the Where method returns. So, it's only when you iterate over the query that the lambda expression gets called on each customer to see whether the customer's LastName is Smith.
If you were to look inside the System.Linq.Enumerable class, you would find nested enumerator classes specialized for different kinds of expressions. In this case, the enumerator class is a <WhereIterator>d__0(Of TSource). The name doesn't matter because you shouldn't see that. The key thing to note is that it implements IEnumerable(Of TSource) and IEnumerator(Of TSource). In this case TSource being Customer.
This iterator class is both IEnumerable and IEnumerator, so it allows the class to return a reference to itself when IEnumerable.GetEnumerator is called. If GetEnumerator is called a second time or from a different thread, a reset clone is returned. The lambda function is stored in a field called predicate and when IEnumerator.MoveNext is called, the source is enumerated and the lambda is called on each item.
This delayed evaluation allows you to re-use items declared once. For example, assume you change the original query to take input from a textbox:
Dim query = From c in Customers _
Where c.LastName = TextBoxName.Text
You can now re-use this query. The value that is in the text box won't be evaluated until the query is iterated, but it will be evaluated on each loop. Assume, for example, there are multiple Smiths in your customer list when you implement this code:
For each c As Customer in query
MsgBox(c.LastName)
TextBoxName.Text = "Smith"
The first item returned is what the text box text was when you entered the code, but "Smith" will be the item looked for in all subsequent items. This is because the lambda in this case is compiled as a function, similar to this:
Function Lambda1(ByVal c as Customer) As Boolean
Return c.LastName = TextBoxName.Text
End Function
This approach comes with some advantages and potential disadvantages. You might get some unexpected results if the variable being used in the query expression (the TextBoxName.Text property, in this case) can change during the iteration. In some cases, this approach can result in a runtime exception. For example, you can get a runtime exception to occur if you modify the original List(Of Customer) by adding or removing items while the query is being iterated.
Of course, these problems are no different than the issues you need to deal with today when iterating a list; the difference is, the issues might not be as inherently obvious to you when using queries. The key thing to note here is this query expression is not evaluated until the query is iterated over:
Where c.LastName = _
TextBoxName.Text
This is in distinct contrast to a function call, where the TextBoxName.Text value is evaluated as the parameter value. The new code is declarative in nature.
Queries are Compositional
Queries aren't evaluated until they are iterated, so you can build a query on a query. For example, you might decide to get all Smiths with the first name "John" from the original query:
Dim query2 = From c in query _
Where c.FirstName = "John"
With a List(Of Customer), query2 would be another Where iterator that uses the where iterator from the first query as the source. Adopting this approach results in cascading iterators. In this particular case, it would be more efficient to express the query espression like this:
Where c.LastName = "Smith" AndAlso c.FirstName = "John"
Here, the expression is compiled into the one lambda function. But in cases where you are using a nested From clause, such as selecting invoices from the customer, separating the query into two parts can often help in readability:
Dim invoices = From c in Customers _
Where c.LastName = "Smith" _
From inv In c.Invoices _
Where inv.Date > searchDate _
Select inv
You could write the query as two separate queries:
Dim smiths = From c in Customers _
Where c.LastName = "Smith"
Dim invoices = From inv In smiths _
Where inv.Date > searchDate
If you are writing heavily nested queries, breaking them down into their compositional parts also makes for easier debugging. Both VB and C# allow an IEnumerable(Of T) to be evaluated while execution is paused for debugging purposes. This allows you to see easily how many Smiths there are.
Expression Trees
So far I've only talked about LINQ queries where the lambda expressions are compiled as functions. If you are using LINQ to SQL, you don't want these expressions to be functions called on the client side. In that case, you would be fetching all the data, then running the functions one-by-one as the entire data is iterated. Instead, what you want, and what you get, is a translation of the lambda expression to TSQL.
For example, assume you were to write a query similar to the one illustrated earlier. The generated T-SQL looks like this:
SELECT [t0].[FirstName], [t0].[LastName]
FROM [dbo].[Customers] AS [t0]
WHERE ([t0].[ FirstName] = @p0)
AND ([t0].[ LastName] = @p1)
— @p0: Input String
(Size = 4; Prec = 0; Scale = 0) [John]
— @p1: Input String
(Size = 5; Prec = 0; Scale = 0) [Smith]
LINQ to SQL will generate the same query whether or not you write this as one query or as a query on a query. In other words, LINQ to SQL optimizes compositional queries. You can write the query like this:
Dim finalquery = From c in Customers _
Where c.FirstName = "John" _
AndAlso c.LastName = "Smith"
Or, you can write it like this:
Dim query1 = From c in Customers _
Where c.FirstName = "John"
Dim finalquery = From c in query1 _
Where c.LastName = "Smith"
Either way, LINQ to SQL generates the same T-SQL.
An expression tree is a symbolic way of describing query expressions. This description is not executable code; rather, it's the information needed to compile executable code. This description allows for different providers to compile the query as appropriate. As such, expression trees provide the information about what the expression is meant to do, not how it does it. This is declarative coding; the actual implementation is up to the query provider.
Assume you write the original query with this Where clause:
c.LastName = "Smith"
This creates an expression consisting of a BinaryExpression at the root of the tree. The BinaryExpression has a Method property that stores MethodInfo -- the String.Equals method, in this case. The BinaryExpression also includes Left and Right Expressions, which are the Expression representations of the operands. Here, the Left expression is a PropertyExpression, c.LastName. The Right expression is a ConstantExpression: Smith (see Figure 1).
Form c in Customers Where c.LastName = "Smith"
Expression trees give you an easy way to compose queries, by building upon an expression (see Figure 2). The magic of expression trees comes from the way the VB compiler generates them for you automatically from your query expressions or lambda functions. The VB compiler knows to compile to an expression tree based on the extension method that is in scope. For LINQ to SQL, the extension method will be in the System.Linq.Queryable class. The Queryable class echoes the features of the System.Linq.Enumerable class; however, the predicate arguments are as Expression(Of Func(Of T, TResult), rather than as Func(Of T, TResult). In other words, the predicates become expressions that describe the function, rather than the function as a delegate.
Function (c) c.FirstName
= "John" AndAlso c.LastName = "Smith"
The System.Linq.Queryable.Where extension has this signature:
<Extension> _
Public Shared Function Where(Of TSource)( _
ByVal source As IQueryable(Of TSource), _
ByVal predicate As Expression( _
Of Func(Of TSource, Boolean)) _
) As IQueryable(Of TSource)
Not only is the predicate as Expression, but the source is as IQueryable(Of T) instead of IEnumerable(Of T). IQueryable(Of T) implements IEnumerable(Of T) and adds three new properties: ElementType, Expression, and Provider.
The extension methods in System.Linq.Queryable call on the IQueryable's Provider to create the query passing the expression tree to it and to get an IQueryable back. The Provider does the necessary translation of the expression. For LINQ to SQL, the provider is the System.Data.Linq.Table(Of Entity) class.
There is one other standard provider. The System.Core.dll that contains the System.Linq.Enumerable and System.Linq.-
Queryable classes comes with a default IQueryableProvider, EnumerableQuery(Of T). The EnumerableQuery(Of T) provider translates an expression tree to executable code, MSIL. You can use this provider implicitly through a data source as IEnumerable or IEnumerable(Of T) and calling the AsQueryable extension. For example, if you have a List(Of Customer), you can call the AsQueryable extension on it to return an IQueryable(Of Customer) with an EnumerableQuery(Of Customer) as the Provider. This allows you to pass in expression trees to the extension methods instead of delegates, which makes it possible to create a dynamic query for things such as a UI. The sample application shows you how to build an expression tree by constructing a dynamic query based on user input.
You can build your own Expression tree and pass that to any of Queryable extensions. Take the example of searching for last name of Smith, you can create the expression tree as follows:
' build an expression tree for the lambda function
' Function( c As Customer ) c.LastName = "Smith"
' create a parameter expression for the c As Customer
Dim exParam = Expression.Parameter( _
GetType(Customer), "c")
' create a proeprty expression for c.LastName
Dim exProperty = Expression.Property( _
exParam, "LastName")
' create a constant expression for the string "Smith"
Dim exConst = Expression.Constant( _
"Smith", GetType(String))
' create the equal expression using
' the property and constant
Dim exEquals = _
Expression.Equal(exProperty, exConst)
' create the lambda expression from the parts
Dim exLambda = Expression.Lambda( _
Of Func(Of Customer, Boolean)) _
(exEquals, exParam)
' create the query
Dim query = _
Customers.AsQueryable.Where( _
exLambda)
The Queryable extension methods provide a means for you to translate from one boundary to another. The LINQ to SQL provider translates your VB code to T-SQL through an expression tree, and the EnumerableQuery provider translates an expression to MSIL. You can use other providers to translate code to different Web services or different data stores or anything that has some form of query language.
As your code is translated in this manner, it becomes more declarative and you lose precise control over the when and how it works. For example, consider the case of the text box I discussed earlier. Using List(Of Customers) and changing the text to search for, while iterating immediately changes the rest of the iteration because the function is called on each iteration. With LINQ to SQL, you don't get this behavior because it's translated to T-SQL and evaluated at the start of the iteration.
The advantage of having the providers decide how to evaluate the lambdas is that you get more efficient code, as is the case with the generated T-SQL. Soon, we're likely to see providers that also do operations in parallel, taking advantage of multi-core processors. Again, you'll relinquish the control of "how" to them and focus on specifying the "what." The empowerment is from letting go, by letting providers choose what's best. The change to using a declarative functional programming style has many subtleties about it that change the problem boundaries and the way you will write code. You'd be right to feel a little uncertain: The subtleties are all too often the cause of unwanted side effects. But by understanding how these things work, you can learn to trust them and let them do the work for you. It might take time, but in the end I'm sure you'll find the declarative styles encouraged by LINQ empowering.
Printable Format
> More TechLibrary
I agree to this site's Privacy Policy.
> More Webcasts | https://visualstudiomagazine.com/articles/2007/09/01/linq-changes-how-you-will-program.aspx | CC-MAIN-2018-13 | refinedweb | 2,509 | 63.7 |
This page is a snapshot from the LWG issues list; see the Library Active Issues List for more information and for the meaning of C++17 status.
Section: 20.14.16.2.5 [func.wrap.func.targ] Status: C++17 Submitter: Daniel Krügler Opened: 2016-01-31 Last modified: 2017-09-10
Priority: 3
Discussion:
This issue is a spin-off of LWG 2393; it focuses solely on the pre-condition of 20.14.16.2.5 [func.wrap.func.targ] p2:
Requires: T shall be a type that is Callable (20.9.12.2) for parameter types ArgTypes and return type R.
Originally, the author of this issue had assumed that simply removing the precondition as a side-step of fixing LWG 2393 would be uncontroversial. Discussions on the library reflector indicated that this is not the case, although there seemed to be agreement on removing the undefined-behaviour edge case. Basically, the following positions exist:
The constraint should be removed completely; the function is considered to have a wide contract.
The pre-condition should be replaced by a Remarks element that has the effect of making the code ill-formed if T is a type that is not Lvalue-Callable (20.9.11.2) for parameter types ArgTypes and return type R. Technically, this approach still conforms to the notion of a wide-contract function, because the definition of this contract form depends on runtime constraints.
Not yet explicitly discussed, but a possible variant of bullet (2) could be:
The pre-condition should be replaced by a Remarks element that has the effect of SFINAE-constraining this member: "This function shall not participate in overload resolution unless T is a type that is Lvalue-Callable (20.9.11.2) for parameter types ArgTypes and return type R".
The following describes a list of some selected arguments that have been provided for one or the other position using corresponding list items. Unless explicitly denoted, no difference has been accounted for option (3) over option (2).
It reflects existing implementation practice, Visual Studio 2015 SR1, gcc 6 libstdc++, and clang 3.8.0 libc++ do accept the following code:
#include <functional> #include <iostream> #include <typeinfo> #include "boost/function.hpp" void foo(int) {} int main() { std::function<void(int)> f(foo); std::cout << f.target<void(*)()>() << std::endl; boost::function<void(int)> f2(foo); std::cout << f2.target<void(*)()>() << std::endl; }
and consistently output the implementation-specific result for two null pointer values.
The current Boost documentation does not indicate any precondition for calling the target function, so it is natural that programmers would expect similar specification and behaviour for the corresponding standard component.
There is a consistency argument in regard to the free function template get_deleter
template<class D, class T> D* get_deleter(const shared_ptr<T>& p) noexcept;
This function also does not impose any pre-conditions on its template argument D.
Programmers have control over the type they're passing to target<T>(). Passing a non-callable type can't possibly retrieve a non-null target, so it seems highly likely to be programmer error. Diagnosing that at compile time seems highly preferable to allowing this to return null, always, at runtime.
If T is a reference type then the return type T* is ill-formed anyway. This implies that one can't blindly call target<T> without knowing what T is.
It has been pointed out that some real world code, boiling down to
void foo() {} int main() { std::function<void()> f = foo; if (f.target<decltype(foo)>()) { // fast path } else { // slow path } }
had manifested as a performance issue and preparing a patch that made the library static_assert in that case solved this problem (Note that decltype(foo) evaluates to void(), but a proper argument of target() would have been the function pointer type void(*)(), because a function type void() is not any Callable type).
It might be worth adding that if use case (2 c) is indeed an often occurring idiom, it would make sense to consider to provide an explicit conversion to a function pointer (w/o template parameters that could be provided incorrectly), if the std::function object at runtime conditions contains a pointer to a real function, e.g.
R(*)(ArgTypes...) target_func_ptr() const noexcept;
[2016-08 Chicago]
Tues PM: Moved to Tentatively Ready
Proposed resolution:
This wording is relative to N4567.
Change 20.14.16.2.5 [func.wrap.func.targ] p2 as indicated:
template<class T> T* target() noexcept; template<class T> const T* target() const noexcept;
-3- Returns: If target_type() == typeid(T) a pointer to the stored function target; otherwise a null pointer.-3- Returns: If target_type() == typeid(T) a pointer to the stored function target; otherwise a null pointer.
-2- Requires: T shall be a type that is Callable (20.14.16.2 [func.wrap.func]) for parameter types ArgTypes and return type R. | https://cplusplus.github.io/LWG/issue2591 | CC-MAIN-2019-22 | refinedweb | 829 | 51.07 |
Over ...Read More »
Sublime VS. Atom: Can GitHub Take the Lead?
Comparing ...Read More »
Grails Goodness: Custom Data Binding with @DataBinding Annotation
Grails has a data binding mechanism that will convert request parameters to properties of an object of different types. We can customize the default data binding in different ways. One of them is using the @DataBinding annotation. We use a closure as argument for the annotation in which we must return the converted value. We get two arguments, the first ...Read More »
Software Development Lessons Learned from Consumer Experience
Because ...Read More »
Simple Class to Measure Latency
This is a very simple class I wrote to measure latency. It’s not the Rolls Royce solution that is HDRHistogram but if you want to add just one class to your project this does the trick quite nicely. Here’s a simple test program to show you how it’s used: package util; public class LatencyMeasureExample { public static void main(String[] args) throws ...Read More »
#102030: Celebrating 20 Years of Java by Running 20 10K in 30 Days
May ...Read More »
Functions Named as Adjectives. :) Naming Functions Generally, people are told to name methods and .. » | http://www.javacodegeeks.com/page/7/ | CC-MAIN-2015-22 | refinedweb | 195 | 66.13 |
Red Hat Bugzilla – Bug 831550
g++ 4.7 and armadilo 3.2.2: operator() is inaccessible
Last modified: 2013-07-31 20:59:25 EDT
Created attachment 591420 [details]
arma.ii.gz
Description of problem:
g++ refuses to compile seemingly valid C++ code from Armadillo library
Version-Release number of selected component (if applicable):
gcc version 4.7.0 20120507 (Red Hat 4.7.0-5)
How reproducible:
use g++ to compile code using "ivec3" class from Armadillo
Steps to Reproduce:
1. install Armadillo 3.2.2 from
2. try to compile code using "ivec3" class from Armadillo
3. look bewildered
Actual results:
refuses to compile, stating that various forms of operator() are inaccessible.
Expected results:
should compile; known to compile with gcc 4.4.6 (RHEL 6.2).
also known to compile with clang 3.0
Additional info:
See attached file, "arma.ii.gz", generated using
g++ -v -save-temps -O2 -o arma arma.cpp
Upstream GCC bug:
Code that fails to compile:
#include <iostream>
#include "armadillo"
using namespace arma;
using namespace std;
int main(int argc, char** argv)
{
cout << "Armadillo version: " << arma_version::as_string() << endl;
ivec3 x;
x.ones();
x.print("x:");
return 0;
}
*** Bug 831548. | https://bugzilla.redhat.com/show_bug.cgi?id=831550 | CC-MAIN-2018-05 | refinedweb | 197 | 53.07 |
14 July 2008 11:47 [Source: ICIS news]
LONDON (ICIS news)--Indian phosphate fertilizer importers have purchased Tunisian diammonium phosphate (DAP) for the first time ever despite the country's reliance on North African phosphoric acid, a feedstock for phosphate fertilizers, a trader source said on Monday.
?xml:namespace>
In a surprising move, the Indian Farmers’ Fertilizer Cooperative (IFFCO) bought 30,000 tonnes of DAP fertilizer from Groupe Chimique Tunisien (GCT) at $1,274.75/tonne (€803/tonne) CFR (cost and freight) for July shipment, the source said.
Two reasons for the change of strategy were highlighted by market sources. Firstly, IFFCO had never bought phosphoric acid from GCT prior to this sale therefore traders deemed there was no fundamental conflict of interest.
Secondly, slow demand in ?xml:namespace>
Traders were unsure whether this sale was a one-off or represented the start of a more long-term arrangement between IFFCO and G | http://www.icis.com/Articles/2008/07/14/9139844/india-buys-tunisian-dap-for-first-time.html | CC-MAIN-2014-10 | refinedweb | 153 | 51.07 |
scroll during drag-n-drop
scroll during drag-n-drop
Hi,
I need functional support for scrolling when you navigation upon drop target and already have some items to drop there. As it visible in example, gxt 3.x does not provide this functionality ( see,) , but it works for ExtJS (see,)
Any ideas ?
thanks,
Alex.
The Tree DnD examples do have scroll support - take a look at the TreeDropTarget classes, and how the AutoScrollSupport type is used in there (calling start() and stop() at appropriate times). A subclass of GridDropTarget could also make these same calls to support this feature.
Thank you for your response.
But as I see from source code, scrollSupport field appears only in TreeDropTarget as a privet field and is used in overridden method there onDragEnter. The way, as you describe, is duplicate same behavior in GridDropTarget's child with overridden onDragEnter, that smells bad!
Why it is not implemented on GXT library level ?
GridDropTarget does not provide any from-the-box opportunity to support AutoScrollSupport.
And do you know how to make mouse wheel works during DnD ?
thanks,
Alex.
Its true that GridDropTarget doesn't have support for it - were I to add that today and get it right on the first try, it could be weeks or more before we have a release that would bring this feature to you, so I tried instead to explain how it worked and how it could be achieved in your own project quickly.
GridDropTarget doesn't already include this, as you've noted - I suspect this is as the LiveGridView and other frequent modifications wouldn't support it well, but I'd need to dig in and try it to find out.
With regard to the suggestion that this code 'smells bad', I think that this use of composition instead of inheritance is the right solution here. DragSource and DropTarget are mostly the logical details of DnD, and shouldn't be concerned with exactly how the UI will be implemented. For example, building a DropTarget for a TabPanel to drag tabs around won't use the same kind of AutoScrollSupport, as tab scrolling is not quite the same as overflow:auto that Tree and (normal) Grid uses.
Additionally, AutoScrollSupport wraps up the scrolling details pretty neatly - as I tried to point out before, all that needs to be called from within any DropTarget is
a) set it up, giving it the region to scroll (varies by use case)
b) call start() when the drag enters the actual droppable region (may vary by widget)
c) call stop() when the drag ends or leaves the droppable region (again, this varies).
All of this variation means that we can't trivially just make a single subclass of DropTarget (call it ScrollableDropTarget) that ListViewDropTarget, TreeDropTarget, GridDropTarget could extend from. The purpose of the AutoScrollSupport type is to wrap all that behavior and logic, so those three steps above can be added, based on the rules and requirements of any class, DnD related or not.
On your last question, I'm not entirely sure what prevents the scroll wheel from working, but I suspect it has to do with the other workarounds needed to prevent browsers from selecting text as the mouse keeps moving. That said, I'm barely agile enough to click, move the mouse, and scroll at the same time...
Hi Colin,
thanks again for such a long comment.
As you have mentioned, you are planning to include changes regarding Grid and AutoScrollSupport into one of next GXT releases, could you say when we can expect it ?
Also, It would be nice to see the code, which solves the problem, as you said you have it now;
or you can take a look on code bellow, that I wrote.
Unfortunately, my code does not make Grid auto-scrollable, I hope you help me figure what I did wrong.
Code:
package xxx.yyy.zzz; import com.sencha.gxt.core.client.dom.AutoScrollSupport; import com.sencha.gxt.dnd.core.client.*; import com.sencha.gxt.widget.core.client.grid.Grid; public class ScrollableGridDropTarget<T> extends GridDropTarget<T> { private AutoScrollSupport scrollSupport; public ScrollableGridDropTarget(Grid<T> grid) { super(grid); } @Override protected void onDragCancelled(DndDragCancelEvent event) { super.onDragCancelled(event); scrollSupport.stop(); } @Override protected void onDragDrop(DndDropEvent e) { super.onDragDrop(e); scrollSupport.stop(); } @Override protected void onDragEnter(DndDragEnterEvent e) { if (scrollSupport == null) { scrollSupport = new AutoScrollSupport(getGrid().getElement()); } else if (scrollSupport.getScrollElement() == null) { scrollSupport.setScrollElement(getGrid().getElement()); } scrollSupport.start(); super.onDragEnter(e); } @Override protected void onDragFail(DndDropEvent event) { super.onDragFail(event); scrollSupport.stop(); } @Override protected void onDragLeave(DndDragLeaveEvent event) { super.onDragLeave(event); scrollSupport.stop(); } }
....
best regards,
Alex
Sorry if I wasn't sufficiently clear - I mean to say that even *if* I started the work right now, putting all bugs on hold to add a new feature, it would be some time before it would be available. I wouldn't expect this before 3.1, and no, I don't have a date on that.
In your sample, you've (almost) discovered one of the interesting things that makes this hard to do!
Code:
if (scrollSupport == null) { scrollSupport = new AutoScrollSupport(getGrid().getElement()); } else if (scrollSupport.getScrollElement() == null) { scrollSupport.setScrollElement(getGrid().getElement()); }
And as long as we are talking about future releases, touch support. Every feature that gets added has to be met by even more testing and future planning, so we don't break the API more than we have to. Many like GXT as it is today, but just want it more stable (we don't get all the bugs we run into as it is) - these people don't want changes to the API so their apps continue to work. Many people like the new features and look forward to modifying their apps to bring in any new achievements we can bring - these users tend to not mind the API updating so long as their features are added. We have to juggle both sides.
Lastly - I'm a laptop user running linux - the touchpad is maybe 2.5in x 1.5in, and my hands stretch over an octave on the piano, so that as my normal use case doesn't work. Even with a mouse attached, I still have large, long fingers...
Hi Colin,
yes, getScroller() helps, thank you!
I still have 2 questions :
1) Does sencha has some public issue tracker where you can open ticket regarging AutoScrollSupport in Grid and I can follow up ? Just to be up to dateed regrding this problem.
2) Who can give me answer on my question regarding scroll wheel ?
The Bugs forum, where you posted, is where we publicly track the status of bugs. Following that theme, this will be marked as CLOSED, as there still isn't really a bug in here.
We also have a Feature Request forum available to support subscribers.
With regard to the scroll wheel issue, it appears that the Draggable type (which manages the actually dragging of items) listens to all events while you are dragging and invokes preventDefault() to stop them from actually happening. Among its useful side effect, this prevents text from being selected, and it appears to also stop key events and scroll events. One way to work around this would be to add another preview handler when the drag starts/stops in your own code, listen for Event.ONMOUSEWHEEL, and cause the scroll to occur.
Looks like we can't reproduce the issue or there's a problem in the test case provided. | http://www.sencha.com/forum/showthread.php?246770-scroll-during-drag-n-drop&s=21853b0200d990b47d2ab3a1dde1da7c&p=906987 | CC-MAIN-2015-14 | refinedweb | 1,245 | 61.36 |
So far we have studied well-formed and valid documents containing data and other elements. XML is a language that allows other standards to be built upon it. Included in the list of additions to the XML family is XSL (XML Stylesheet Language). You will read more about XSL and how it can be used to transform XML data into neatly formatted output in Chapter 7.
The World Wide Web Consortium has also recommended additional standards for interconnecting documents and addressing precise locations within XML documents. Among these other XML standards are XPointer and XPath, which extend XML. This section gives an overview of each of these and the URI (Uniform Resource Identifier) standard for identifying and locating resources used by XML documents. These recommendations have been grouped together here, as they often work together. However, they can also work independently.
Keep in mind that this section is a very basic overview to help you understand these additions to XML, parsing of XML with FileMaker Pro, and how these standards work with XML and FileMaker Pro. Remember, too, that the specifications and recommendations may change, although it is unlikely that these changes will affect the current technology. The changes may enhance the current specifications just as XPath and XPointer have added to the functionality of XML. You may consult the World Wide Web Consortium for the latest information,.
Uniform Resource Identifiers (URIs) encompass all references to web files: text, images, mailboxes, and other resources. URIs include URLs (Uniform Resource Locators): ftp, gopher, http, mailto, file, news, https, and telnet, common protocols for accessing information on the Internet. Some examples of these are found in Listing 1.18. Remember that the World Wide Web is only a part of the Internet. URIs may be used in XPaths and XPointers if they refer to an address on the Internet.
Another URI type is the URN (Uniform Resource Name). The URN has globally persistent significance; only the name of the resource need be known, not the location of it as in the URL. The Uniform Resource Name can be associated with Uniform Resource Characteristics (URC), which allows descriptive information to be associated with a URN. A URN can also have a URL. A more complete URL is found in Listing 1.17.
<link href="http:anyserver/documents/myPaper.txt"> <author>Me!</author> <date>03 JAN 1999</date> <revised>05 FEB 1999</revised> <title>My Important Paper</title> </link>
Uniform Resource Identifiers can be absolute or relative. Relative paths assume the current document location, and every link from there builds upon the path. A document can have a BASE path specified at the beginning of the document. urn:here://iris mailto:me@mydomain.com?subject=Inquiry%20About%20Your%20Site telnet://myServer.edu/ news:comp.databases.filemaker
The Request For Comment (RFC) document number 2396 was written to specify the standards for Uniform Resource Identifiers. This document, "Uniform Resource Identifiers (URI): Generic Syntax", can be found at. Notable are the standards for naming these URIs. You should read this list of standards for naming.
Suggestions for naming URIs include using the alphanumeric characters: a-z, A-Z, and 0-9. Any character not within these ranges can be escaped or translated to an octet sequence consisting of "%" and the hexadecimal representation of the character. This means that the space character is often encoded as "%20" in a URL so that it may pass safely as a valid URI. There are other characters used to format a URL that are reserved to specify the format of the URL. These are: ";", "/", ":", "#", "%", "@", "&", "=", "+", "$", and ",". There are also unreserved characters that may be used for specific purposes: "-", "_", ".", "!", "∼", "'", "(", and ")". Characters listed as unwise to use include: "{", "}", "|", "\", "ˇ", "[", "]", and "‘". If you stick with the alphanumeric characters for your own naming standards, you are less likely to disrupt any usage for the URI itself.
Another document, "RFC 2368, The mailto URL scheme",, gives us more specifics for the mailto protocol. This particular URI is often used to send email and can easily be created from calculations in a FileMaker Pro field. The most basic form of this URI is mailto:yourEmail@yourDomain.com. It simply provides the protocol (mailto) and the Internet address. To send the same message to multiple people, you may list them all after the protocol as comma-separated values. An example mailto format is shown here:
mailto:joe@hisDomain.com,betty@herDomain.net?body=This%20is%20a%20short% 20message.
The body of the message can be included in a mailto URI, but since the URI cannot contain spaces (or other reserved characters), these are converted. The body attribute was never intended to include a very large message. Some email cannot be sent without a subject, so that also can be included in the URI. The subject must also be converted or encoded. The space character is %20. Additional attributes are separated with the "&", so if your subject or message body contain this character, change it to "&". The "from" is implied by the email application sending the message. The mailto protocol is often used on web pages as a hyperlink. You can use double or single quotes for the link, but do not include these within the URI.
Mailto as a link:
<a href="mailto:Joe_Brown@eddress.org?subject=Call%20Me!&body=I' ll%20be%20at%20home%20today%20&%20tomorrow." >call me</a>
The link, as it appears in an email client:
to: Joe_Brown&eddress.org from: me@myDomain.com subject: Call Me! I'll be at home today & tomorrow.
You can create this link by calculation and use the OpenURL script step in FileMaker Pro to "send" the message. It actually opens your email client if one is mapped as the default and pastes these fields into the proper location of the new email. In the process of pasting into the proper locations, any encoding is converted back. In reality, your email client may be retaining these for sending and receiving, but you do not see them. The message must still be sent by you; it may only be placed in your "outbox" by FileMaker Pro. Using the Web Companion external function Web-ToHTTP is a convenient way to convert errant characters that might need it.
The calculation:
SendMessage = "mailto:" & ToField & "?" & External("Web-ToHTTP", subjectField) & "&" & External("Web-ToHTTP", bodyField)
The script step:
OpenURL [ no dialog, SendMessage ]
FileMaker Pro Help will help you use the OpenURL script step correctly for each platform. If you use OpenURL to send email, it will use whatever your default email client is in the URL.DLL for Windows. On a Macintosh, the Internet Config settings will determine which email client will send the message. On Macintosh OS X, the Send Mail script step with mail.app is not supported in the first release of FileMaker Pro for OS X. Also, remember that some browsers do not process the mailto protocol properly. Several FileMaker Pro plug-ins may be used in conjunction with web-published databases for sending and receiving email.
XML Path Language (XPath),, is a language for addressing parts of an XML document and is used by XPointer and XSLT (Extensible Stylesheet Language Transformations). XPath expressions often occur in attributes of elements of XML documents. XPath uses the tree-like structure of an XML document and acts upon the branches or nodes. The nodes are not merely the elements of the document, but also include the comments, processing instructions, attribute nodes, and text nodes. The human family tree has aunts, uncles, cousins, grandparents, sisters, brothers, parents, sons, and daughters. XPath uses similar designators for the branches of the XML tree. All of the branches of the tree (axes) are related to each other. We'll look again at the people.xml example, shown in Listing 1.19, to understand the XPath language.
<people> <vendor> <firstname>John</firstname> <company>Paper Cutters</company> </vendor> <customer> <firstname>Jane</firstname> <lastname>Doe</lastname> </customer> <customer> <firstname>John</firstname> <lastname>Doe</lastname> </customer> </people>
The child:: is a direct node from any location or the successor of a particular location source. The child node is also the default and can often be omitted from an XPath.
<anyNode> <child> </child> </anyNode>
In the people.xml example, the children of people are vendor and customer. There are multiple customer children. There could also be multiple vendor children. The element firstname occurs as a child of vendor or customer; however, company is only a child of vendor. Because the child is the default node in the path, you can specify firstname with the XPath format as full or shortcut:
people/vendor/firstname root::people/child::vendor/child::firstname root::people/child::customer/child::firstname people/customer/firstname
The descendant:: is a sub-part of a node and can be children, grand-children, or other offspring. The descendants of people are vendor, firstname, company, customer, and lastname. An example is shown here:
<anyNode> <descendant1> <descendant3></descendant3> </descendant1> <descendant2 /> </anyNode>
The ancestor:: is the super-part of a node, so that the ancestor contains the node. If we use firstname from our example, it has the ancestor's vendor, customer, and people. Not all firstname elements have a vendor or customer ancestor.
<ancestor> <anyNode></anyNode> </ancestor>
The attribute:: node is relative to the referenced node and can be selected with the name of the attribute.
<node attribute="attrName" />
The namespace:: node contains the namespace. More about the namespace will be discussed in Chapter 7 with XSL.
The self:: node is the reference node and another way to specify where you already are, but it may be used in conjunction with ancestor or descendant (ancestor-or-self:: and descendant-or-self::).
XPath expressions (statements) have one or more location steps separated by a slash ("/"). The location steps have one of the above axis items, a node test, and an optional predicate. The node test is used to determine the principal node type. Node types are root, element, text, attribute, namespace, processing instruction, and comment. For the attribute axis, the principal node type is attribute, and for the namespace axis, the principal node type is namespace. For all others, the element is the principal node type. The predicate will filter a node-set with respect to the axis to produce a new node-set. This is the real power of XPath using the syntax shortcuts, functions, and string-values as the predicate to select fragments of an XML document.
Each of the nodes has a value returned by the xsl:value-of function. This is the key to getting the content of your XML document. This section explains each node's string value.
The root() node string-value is the concatenation of the string-values of all text node descendants of the root node. If you want the text of the entire document, this will give it to you. Take note that white space will be ignored and you will lose the meaning of the individual elements. One possible benefit of using this value is to search an entire document for a particular value. In our people.xml example, the root is the outermost element, <people>…</people>. The value of the root() is all the text (contents) of all the elements in the document.
The element() node string-value is the concatenation of the string-values of all text node descendants of the element node. The element can have text and other elements, so all text of a particular element is returned here. The value of vendor is John Paper Cutters. The value of customer[1] is Jane Doe.
The attribute() node string-value is the value of the attribute of the parent element. However, the attribute is not a child of the element. If you had an element, <customer preferred="yes">… </customer>, the attribute preferred has the value "yes."
The namespace() node is like the attribute node, as an element can have a namespace. The string-value of the namespace node is the URI or other link specified in the namespace. Namespaces will be discussed more fully in Chapter 7.
The processing instruction() node has the local name of the processing instruction's target. The string-value of the processing instruction node is the part of the processing instruction following the target. A common processing instruction is for an XSL stylesheet. The value of <?xml-stylesheet href="headlines.xsl" type="text/xsl" ?> is the target, headlines.xsl.
The comment() node string-value is the content of the comment not including the surrounding markup (<!– and –>). The comment <!– here is a comment –> has a string-value of "here is a comment."
The text() node contains the character data in the element that is the string-value of the text node. The value of /vendor/firstname/ text() is the same as the value of /vendor/firstname or John.
There are additional functions as a part of the XPath language. These can extract more precisely the particular text you need. FileMaker Pro has similar text functions such as Left(text, number) or Middle-Words(text, start, number). These additional XPath functions are not discussed here. The standards are changing, and these new functions may not be fully supported by all XML processors at this time. Your particular choice of XML parser may allow you to use the full set of functions. See Chapter 6 for some of these XPath functions.
XML Pointer Language (XPointer) is another method of extracting the content of an XML document. Some applications use XPointer or a combination of XPointer and XPath to parse the XML data tree. The notation is different from XPath and uses the locators root(), child(), descendant(), and id().
root() is similar to XPath "/" or the entire document. The paths to the elements are based off the root() with a "." dot notation. For example, root().child().child() would be similar to "/parent/child."
id() is similar to root() but is a specific element's ID attribute. Because the ID of an element is unique for each element in an XML document, it does matter what path the element is on. The XPointer request for "ID(890)" will jump right to that element and return the element and any of its descendants. Listing 1.20 is a small XML document used to explain the XML Pointer Language.
<elements> <element ID="23469">xyz</element> <element ID="123" /> <element ID="890"> <element ID="57">1245</element> </element> </elements>
The child() node has some parameters that will narrow down which child. The first parameter is a number or "all." The number is the number of the child in the document. "root().child(1).child(3)" is the same as calling "ID(890)" because the third child of the first element of the entire document has the ID attribute of 890. The parameter of "all" will return all elements in a path. "root().child(1).child(all)" returns all elements except the first element.
child(# or all, NodeName, AttributeName="")
The descendant() node is similar to the child() node, except it can be anywhere as a reference to any element's descendants.
You can read more about XPointer at. This book does not use this language in any of the examples. | http://etutorials.org/XML/filemaker+pro+6+developers+guide+to+xml_xsl/Chapter+1+The+Basics+of+XML/1.5+Beyond+Basic+XML-Other+Standards/ | CC-MAIN-2017-22 | refinedweb | 2,526 | 56.86 |
C. Its important to address the gaps at compile/link time because its [difficult to impossible to add hardening on a distributed executable after the fact on some platforms... And a project will still require solid designs and architectures..
In addition, its. the, including the full force of assertions.
ASSERT
Asserts will help you create self-debugging code. They help you find the point of first failure quickly and easily. Asserts should be used throughout your program, including parameter validation, return value checking and program state. If you have thorough code coverage, you will spend less time debugging and more time developing because f_DEBUG.
GCC provided details of the warning options can be found at Options to Request or Suppress Warnings.no-type-limits endif ifeq ($(GNU_LD210_OR_LATER),1) MY_LD_FLAGS += -z,nodlopen -z,nodld viagcc .
aSee Jon Sturgeon's discussion of the switch at Off By Default Compiler Warnings in Visual C++.
bWhen using /GS, there are a number of circumstances which affect the inclusion of a security cookie. For example, the guard is not used if there is no buffer in the stack frame, optimizations are disabled, or the function is declared naked or contains inline assembly.
c#pragma strict_gs_check(on) should be used sparingly, but is recommend in high risk situations, such as when a source file parses input from the internet.
Authors and Editors
- Jeffrey Walton - jeffrey, owasp.org
- Kevin Wall - kevin, owasp.org | https://www.owasp.org/index.php?title=C-Based_Toolchain_Hardening&oldid=145104 | CC-MAIN-2015-18 | refinedweb | 236 | 57.37 |
I am in the process of attempting to upgrade the version of Selenium WebDriver. I was previously using Firefox 31.6 ESR and Selenium 2.42, however I am now using Firefox 45.4 ESR and Selenium 3.0.0 (which I believe should be compatible as this is the latest Firefox ESR).
The C# test projects were referencing a Nuget package with the older version of Selenium, so I changed the package config files to pull the latest, and I now have the nuget package for Selenium 3.0.0 added to my packages directory (downloaded from the Selenium website). However now when I build I am getting errors with the using statements for Selenium
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.Remote;
You might want to check the path to the Selenium .dll in your project, as just upgrading the version in the packages.config doesn't always update the path to the dll in the .csproj.
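Concretely, the stale reference usually shows up in the .csproj like this; the HintPath still points at the old package folder (the paths below are illustrative and depend on your package layout):

```xml
<Reference Include="WebDriver">
  <!-- old, stale path: ..\packages\Selenium.WebDriver.2.42.0\lib\net40\WebDriver.dll -->
  <HintPath>..\packages\Selenium.WebDriver.3.0.0\lib\net40\WebDriver.dll</HintPath>
</Reference>
```

Reinstalling the package from the Package Manager Console (Update-Package -reinstall Selenium.WebDriver) will also rewrite these paths for you.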
To navigate within a file, C++ provides the seekg() function (for input) and the seekp() function (for output). In case you are wondering, the g stands for “get” and the p for “put”. The related tellg() and tellp() functions return the current position of the file pointer.
The fstream class is capable of both reading and writing a file at the same time -- almost! The big caveat here is that it is not possible to switch between reading and writing arbitrarily. Once a read or write has taken place, the only way to switch between the two is to perform an operation that modifies the file position (e.g. a seek). If you don’t actually want to move the file pointer, you can always seek to the current position.
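A minimal sketch of that dance (the file name and contents here are my own example):

```cpp
#include <cassert>
#include <fstream>
#include <string>

// Demonstrates that switching from reading to writing (and back)
// requires a seek, even one that lands where we already are.
std::string overwriteFirstChar(const char* path)
{
    {
        std::ofstream out(path);       // create a small sample file
        out << "abcdef";
    }

    std::fstream iofile(path, std::ios::in | std::ios::out);
    char ch;
    iofile.get(ch);                    // read 'a'; position is now 1
    iofile.seekg(-1, std::ios::cur);   // back up one byte and switch modes
    iofile.put('#');                   // overwrite 'a' with '#'
    iofile.seekg(0, std::ios::beg);    // seek again before reading back
    std::string contents;
    std::getline(iofile, contents);
    return contents;                   // "#bcdef"
}
```

If the seek calls are removed, the stream never performs the mode switch and the write may silently fail on some implementations.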
typo in the first little snippet of..
"Reading and writing a file at the same time using fstream"
also this restoring of the read/write position might not work with some compilers. with the g++ 4.8.4 i’m using currently tellg() and tellp() will always return the same value after any seekg, seekp, write or read. on the other hand someone on the net wrote they work separately with his g++ 4.7.2.
this is typical:
- first some reference website didn’t tell me what the g/p postfixes actually mean, which makes remembering a command harder.
- then i learned it means get (read) and put (write) positions and i thought "yes, what a cleverer thing to have those working independently."
- now i not only learned 4 commands instead of only 2 (seek,tell) but additionally i learned this abstraction has basically no relevance for reality and you shouldn’t rely on it, funny c++.
Hi Alex!
What’s the use of -1 here?
iofile.seekg(-1, ios::cur); // why do we want the pointer to move one byte backwards to our current position?
The program is trying to replace vowels with #. So first we have to read the character to see if it’s a vowel. If so, then we need to replace it. Replacing it means backing up 1 character, then overwriting the vowel with a ‘#’ symbol. The line you quoted backs up 1 character.
thats what i am asking, is our pointer is at a vowel then why do we need to move to the previous character to replace the vowel? May be I am missing somewhere please clear it.
Thanks.
after you have read a character to check if it is a vovel, the file pointer will already point to the next character, ready for the next read operation so to speak.
think of it this way:
if you type a character in an editor the caret will then be placed after the character you just typed (analogous to the char read from a file). if you want to overwrite that char again (without using backspace) you would move the caret 1 position to the left so it is positioned before the char, then activate insert mode in your editor and type the new character like ‘#’ for example.
ThAnKs 🙂
Hi, Alex.
In the vowel example, at line 45, when I use your code
it works perfectly fine. But when I try the other one
it does not. I figure out that the program just insert ‘#’ but does not overwrite the content at the pointer position. I still don’t know why because to me, these two statement produce the same result: put the file pointer at the current position so that we can continue read the file. Thanks in advance.
I suspect some compilers see the request to seekg 0 bytes from the current position and just ignore it, causing the program not to switch modes.
Alex, perhaps a typo, after you changed the stream name from inf to iofile on that specific example:
"Although it may seem that inf.seekg(0, ios::cur) would also work" -> iofile.seekg(0, ios::cur)
Best regards.
Mauricio Mirabetti
Fixed, thanks!
thank you very much!
Hi, Alex, I want to finish learning this tutorial in one month and I finished 6 chapters, would you like to tell me which part is more important and which part I should spend more time to learn?
#1 how long could I finish learning this tutorial if I have finished 6 chapters?
#2which chapters is more important to learn as a freshman in C++?
1) I have no idea, it depends on how fast you read and internalize the information.
2) The most important chapters are 1-8. But as a freshman, assuming you’re taking a standard course load, you’ll likely cover most of the information in this tutorial (plus other things as well).
Hi Alex
You haven’t taught how to delete a record in a file having classes or structures
Generally, adding anything to or removing anything from a file directly is a bad idea.
You’re better off reading the contents of the file into memory, modifying them in memory, and then writing them out.
hii
I ‘m confusing about
How to modify the content in file by random access
Hi Alex 🙂
Is there anyway to write into a file where it does not overwrite the existing characters? For example, you search for every vowel in the text file and you want to insert a ‘$’ after the vowel.
I tried opening the file with the std::ios::app flag, but it only adds text at the end.
The best way to do this is either to:
1) Read the contents of the file into memory, modify the content in memory, and write them back out, or
2) While you’re reading the contents of the file, write them out into a new file and make any additional changes at that point.
Trying to insert bytes into the middle of files is a recipe for disaster.
Hi,
Why is it necessary to do a seek operation in order to switch between read and write?
I don’t actually know. It must have something to do with the way the streams are implemented, but I don’t have enough in depth knowledge on that topic to say.
Hi,
is it possible to delete the contents of a file between locations A and B?
I’m assuming that would be more difficult than overwriting, because we’re changing the location of the remaining content?
It’s generally a bad idea to try to add/delete content from the middle of a file. For this kind of thing, it’s easier to either:
1) Read the whole file into memory, skipping the content you want to delete, then write the contents of memory back out to disk.
2) Open a new file and copy all the parts of the source file you want to keep into the destination file. Then delete the source file and rename the destination file to the source file.
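Option 1 can be sketched like this for deleting the bytes in [a, b) (the helper name is mine; it assumes a <= b):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <fstream>
#include <iterator>
#include <string>

// Read the whole file, drop the bytes in [a, b), write the result back.
void eraseByteRange(const char* path, std::size_t a, std::size_t b)
{
    std::ifstream in(path, std::ios::binary);
    std::string contents((std::istreambuf_iterator<char>(in)),
                         std::istreambuf_iterator<char>());
    in.close();

    if (a < contents.size())
        contents.erase(a, std::min(b, contents.size()) - a);

    std::ofstream out(path, std::ios::binary | std::ios::trunc);
    out << contents;
}
```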
First of all, a few minor typos:
1. Before the first big example: should be "Here’s" instead of "Heres";
2. In "Reading and writing a file at the same time using fstream": "…and changes the any vowels it finds to a ‘#’ symbol" - no need for "the".
Also, this code does not work for me unless the file is opened in binary mode. There is this quote: "The only seeking you can do in a text file is to the start of the file, to the current position, or to a streampos returned from a call to tell[gp]()." - random coder on bytes.com, and as far as I looked into it, seekg and seekp in other situations whilest dealing with such text files is undefined or unreliable. Maybe you should adress that in the tutorial^^
Typos fixed. It looks like some compilers have a buggy implementation of seekg() and tellg() when used in text mode. I’ve added a note about trying binary mode in this case.
#typo
"One other bit of trickiness: Unlike istream," -> ifstream
What do I do if I want to read/write a single line?
Is there a way to seek by line, rather than by bytes?
I want to write a database file that contains variable length entries, reads them into an array for sorting/printing etc, then writes them back to the file when I’m done.
I figure that if every entry is on its own line, I can just read that entire line into each array element, and write them back the same way, but I don’t know how to do that.
To read a single line, use getline().
For what you want to do, I don’t think you need to seek by line. Just parse the entire database file upon load (using getline()), make your modifications to it in memory, and then write it back out when you’re done.
As long as the file isn’t HUGE, this should work.
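That load/sort/save cycle might look like this (the helper names are mine):

```cpp
#include <algorithm>
#include <cassert>
#include <fstream>
#include <string>
#include <vector>

// Load every line of the file into memory, one entry per element.
std::vector<std::string> readLines(const char* path)
{
    std::vector<std::string> entries;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line))
        entries.push_back(line);
    return entries;
}

// Write the entries back out, one per line.
void writeLines(const char* path, const std::vector<std::string>& entries)
{
    std::ofstream out(path);
    for (const auto& entry : entries)
        out << entry << '\n';
}
```

Between the two calls you can std::sort the vector, print it, or edit individual entries in memory.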
Typo ("it’s" should be "its"):
* "- that is, skip around to various points in the file to read it’s contents."
* "We’re going to write a program that opens a file, reads it’s contents, and changes the any vowels it finds to a ‘#’ symbol."
Fixed. Thanks!
I need to do many manipulations on a file on a bit level. Is there a better way to do this than: (1)open the file in input mode (2)open a second file in output mode (3)read the input file as a string (4)individually convert each character of this string to its binary (ascii/utf-8) value and append/write this to the output file (5)do manipulations on the output file (6) manually convert the output file back by reading the output file as a string 8 “boolean” characters at a time and turning it into an ascii/utf-8 value.
hi..I want to find whether a string is present or not in a file which i have already written and its contents are being displayed…can someone help me with the code…
Is it possible to pass fstream objects as parameters? I am writing a simple function that calculates the size of a file stream passed to it. Here is the code.
Any help is appreciated.
Also, is there a tutorial on advanced operations with binary files? I’m writing a file archive application and I need a little help with the C++ filestream objects.
I would like to weave files together byte by byte. I would also like to write variables, like the number and names of files in the archive, to the top of the file for easy reference.
Thank you for these tutorials. They have been a great resource for me.
I figured out my problem. The ios namespace lives inside std, so to use ios, I must also use std. My fstream parameters were being declared outside of their scope.
Man, this stuff gets confusing!
Hello
why this code is true?
#include <iostream>
#include <fstream>
using namespace std;
int main()
{
    int b[4]={1,1,5,1};
    int c[4]={0};
    fstream A("File.txt",ios::binary|ios::in|ios::out);
    if(!A)
    {
        return 1;
    }
    A.write((char *) (&b),sizeof(b));//(&b) must be (b) but ?????!!!!!
    A.seekg(0);
    A.read((char *) (c),sizeof(c));
    cout<<c[2];
    return 0;
}
I modified your code slightly to get it to work on my win2k machine.
using dev-cpp
I added :
Thanks, was wondering why this code snippet didn’t work for me but with binary mode it worked flawlessly:)
Hi, I tried your vowel replacement program. It works for single-line text files, but if there’s more than 1 line, the program seems to break down. Even the first line isn’t “translated” properly.
Input:
This is line One.
This is line Two.
Output:
This#is#lin# On#.
Thi# i# li#e#Two#
This only thing I changed was the .dat extension to .txt
Seems like the ‘new line’ character screws up the either the get or put pointer, although this doesn’t explain why even the first line isn’t working properly.
Hi Sam/Alex,
My results were not quite the same as Sam’s but the code did not work.
The problem I believe lies in the seekg(0, ios::cur) call after we output the ‘#’. As far as I can make out this is not good enough to convince the io system that something has happened (internally it optimises away the call to fseek and I think that means its idea of where we are is not correct).
Change that line for these two:
iofile.seekg( -1, ios::cur );
iofile.seekg( 1, ios::cur );
That worked for me.
What I can’t understand is why we don’t need to use seekp (instead of seekg) before the write…
Grant
I think you can also use:
iofile.seekg( iofile.tellg(), ios::beg);
Hmmm, that is interesting about seekg(0, ios::cur) not working for you. It worked fine for me, but maybe it is being optimized away in your case as you suggest.
I added a short blurb about tellg() and tellp() to the tutorial and also changed the example to use iofile.seekg(iofile.tellg(), ios::beg);, though I do have some concerns about the performance ramificaitons of doing such a thing (not sure if it’s smart enough to convert that into a relative position, or whether it’s going back to the beginning of the file each time and then counting it out).
As for the seekg()/seekp() difference, as far as I can tell with fstream they appear to be identical.
Previous article: Friday Q&A 2018-04-27: Generating Text With Markov Chains in Swift
Tags: debugging fridayqna
Debugging a complex problem is tough, and it can be especially difficult when it's not obvious which chunk of code is responsible. It's common to attempt to produce a reduced test case in order to narrow it down. It's tedious to do this manually, but it's also the sort of thing computers are really good at. C-Reduce is a program which automatically takes programs and pares them down to produce a reduced test case. Let's take a look at how to use it.
Overview
C-Reduce is based on two main ideas.
First, there's the idea of a reduction pass. This is a transformation performed on some source code which produces a reduced version of that code. C-Reduce has a bunch of different passes, including things like deleting lines or renaming tokens to shorter versions.
Second, there's the idea of an interestingness test. The reduction passes are blind, and often produce programs which no longer contain the bug, or which don't compile at all. When you use C-Reduce, you provide not only a program to reduce but also a small script which tests whether a reduced program is "interesting." Exactly what "interesting" means is up to you. If you're trying to isolate a bug, then "interesting" would mean that the bug still occurs in the program. You can define it to mean whatever you want, as long as you can script it. Whatever test you provide, C-Reduce will try to provide a reduced version of the program that still passes the test.
Installation
C-Reduce has a lot of dependencies and can be difficult to install. Thankfully, Homebrow has it, so you can let it take care of things:
brew install creduce
If you'd rather do it yourself, take a look at C-Reduce's INSTALL file.
Simple Example
It's difficult to come up with small examples for C-Reduce, since its whole purpose is to start from something large and produce a small example, but we'll give it our best try. Here's a simple C program that produces a somewhat cryptic warning:
$ cat test.c #include <stdio.h> struct Stuff { char *name; int age; } main(int argc, char **argv) { printf("Hello, world!\n"); } $ clang test.c test.c:3:1: warning: return type of 'main' is not 'int' [-Wmain-return-type] struct Stuff { ^ test.c:3:1: note: change return type to 'int' struct Stuff { ^~~~~~~~~~~~ int test.c:10:1: warning: control reaches end of non-void function [-Wreturn-type] } ^ 2 warnings generated.
Somehow our
struct is messing with
main! How could that be? Maybe reducing it would help us figure it out.
We need an interestingness test. We'll write a small shell script to compile this program and check for the warning in the output. C-Reduce is eager to please and can easily reduce a program far beyond what we really want. To keep it under control, we'll write a script that not only checks for the warning, but also rejects any program that produces an error, and requires
struct Stuff to be somewhere in the compiler output. Here's the script:
#!/bin/bash clang test.c &> output.txt grep error output.txt && exit 1 grep "warning: return type of 'main' is not 'int'" output.txt && grep "struct Stuff" output.txt
First, it compiles the program and saves the compiler output into
output.txt. If the output contains the text "error" then it immediately signals that this program is not interesting by exiting with error code 1. Otherwise it checks for both the warning and for
struct Stuff in the output.
grep exits with code
0 if it finds a match, so the result is that this script exits with code
0 if both of those match, and code
1 if either one fails. Exit code
0 signals to C-Reduce that the reduced program is interesting, while code
1 signals that it's not interesting and should be discarded.
Now we have enough to run C-Reduce:
$ creduce interestingness.sh test.c ===< 4907 >=== running 3 interestingness tests in parallel ===< pass_includes :: 0 >=== (14.6 %, 111 bytes) ...lots of output... ===< pass_clex :: rename-toks >=== ===< pass_clex :: delete-string >=== ===< pass_indent :: final >=== (78.5 %, 28 bytes) ===================== done ==================== pass statistics: method pass_balanced :: parens-inside worked 1 times and failed 0 times method pass_includes :: 0 worked 1 times and failed 0 times method pass_blank :: 0 worked 1 times and failed 0 times method pass_indent :: final worked 1 times and failed 0 times method pass_indent :: regular worked 2 times and failed 0 times method pass_lines :: 3 worked 3 times and failed 30 times method pass_lines :: 8 worked 3 times and failed 30 times method pass_lines :: 10 worked 3 times and failed 30 times method pass_lines :: 6 worked 3 times and failed 30 times method pass_lines :: 2 worked 3 times and failed 30 times method pass_lines :: 4 worked 3 times and failed 30 times method pass_lines :: 0 worked 4 times and failed 20 times method pass_balanced :: curly-inside worked 4 times and failed 0 times method pass_lines :: 1 worked 6 times and failed 33 times ******** .../test.c ******** struct Stuff { } main() { }
At the end, it outputs the reduced version of the program that it came up with. It also saves the reduced version into the original file. Beware of this when working on real code! Be sure to run C-Reduce on a copy of the code (or on a file that's already checked into version control), not on an irreplaceable original.
This reduced version makes the problem more apparent: we forgot the semicolon at the end of the declaration of
struct Stuff, and we forgot the return type on
main, which causes the compiler to interpret
struct Stuff as the return type to
main. This is bad, because
main has to return
int, thus the warning.
Xcode Projects
That's fine for something we've already reduced to a single file, but what about something more complex? Most of us have Xcode projects, so what if we want to reduce one of those?
This gets awkward because of the way C-Reduce works. It copies the file to reduce into a new directory, then runs your interestingness script there. This allows it to run a lot of tests in parallel, but this breaks if you need other stuff for it to work. Since your interestingness script can run arbitrary commands, you can work around this by copying the rest of the project into the temporary directory.
I created a standard Cocoa Objective-C app project in Xcode and then modified the
AppDelegate.m file like so:
#import "AppDelegate.h" @interface AppDelegate () { NSWindow *win; } @property (weak) IBOutlet NSWindow *window; @end @implementation AppDelegate - (void)applicationDidFinishLaunching: (NSRect)visibleRect { NSLog(@"Starting up"); visibleRect = NSInsetRect(visibleRect, 10, 10); visibleRect.size.height *= 2.0/3.0; win = [[NSWindow alloc] initWithContentRect: NSMakeRect(0, 0, 100, 100) styleMask:NSWindowStyleMaskTitled backing:NSBackingStoreBuffered defer:NO]; [win makeKeyAndOrderFront: nil]; NSLog(@"Off we go"); } @end
This strange code crashes the app on startup:
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=EXC_I386_GPFLT) * frame #0: 0x00007fff3ab3bf2d CoreFoundation`__CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__ + 13
This is not a very informative backtrace. We could try to debug (or just notice the problem), but instead let's reduce!
The interestingness test needs to do some more work here. Let's start with a helper to run the app with a timeout. We're looking for a crash, and if the app doesn't crash it'll just stay open, so we need to kill it after a few seconds. I found this handy perl snippet repeated all over the internet:
function timeout() { perl -e 'alarm shift; exec @ARGV' "$@"; }
Next, we need to copy the Xcode project over:
cp -a ~/Development/creduce-examples/Crasher .
The
AppDelegate.m file isn't automatically placed in the appropriate location, so copy that across. (Note: C-Reduce will copy the file back if it finds a reduction, so be sure to use
cp here rather than
mv. Using
mv will result in a cryptic fatal error.)
cp AppDelegate.m Crasher/Crasher
Then switch into the
Crasher directory and build the project, exiting on failure:
cd Crasher xcodebuild || exit 1
If it worked, run the app with a timeout. My system is configured so that
xcodebuild places the build result in a local
build directory. Yours may be configured differently, so check first. Note that if your configuration builds to a shared build directory, you'll want to disable C-Reduce's parallel builds by adding
--n 1 to the command line when invoking it.
timeout 5 ./build/Release/Crasher.app/Contents/MacOS/Crasher
If it crashes, it'll exit with the special code
139. Translate that into an exit code of
0, and in all other cases exit with code
1:
if [ $? -eq 139 ]; then exit 0 else exit 1 fi
Now we're ready to run C-Reduce:
$ creduce interestingness.sh Crasher/AppDelegate.m ... (78.1 %, 151 bytes) ===================== done ==================== pass statistics: method pass_ints :: a worked 1 times and failed 2 times method pass_balanced :: curly worked 1 times and failed 3 times method pass_clex :: rm-toks-7 worked 1 times and failed 74 times method pass_clex :: rename-toks worked 1 times and failed 24 times method pass_clex :: delete-string worked 1 times and failed 3 times method pass_blank :: 0 worked 1 times and failed 1 times method pass_comments :: 0 worked 1 times and failed 0 times method pass_indent :: final worked 1 times and failed 0 times method pass_indent :: regular worked 2 times and failed 0 times method pass_lines :: 8 worked 3 times and failed 43 times method pass_lines :: 2 worked 3 times and failed 43 times method pass_lines :: 6 worked 3 times and failed 43 times method pass_lines :: 10 worked 3 times and failed 43 times method pass_lines :: 4 worked 3 times and failed 43 times method pass_lines :: 3 worked 3 times and failed 43 times method pass_lines :: 0 worked 4 times and failed 23 times method pass_lines :: 1 worked 6 times and failed 45 times ******** /Users/mikeash/Development/creduce-examples/Crasher/Crasher/AppDelegate.m ******** #import "AppDelegate.h" @implementation AppDelegate - (void)applicationDidFinishLaunching:(NSRect)a { a = NSInsetRect(a, 0, 10); NSLog(@""); } @end
That's a lot shorter! The
NSLog line looks harmless, although it must be part of the crash if C-Reduce didn't remove it. The
a = NSInsetRect(a, 0, 10); line is the only other thing that actually does something. Where does
a come from and why would writing to it do something bad? It's just the parameter to
applicationDidFinishLaunching: which... is not an
NSRect.
- (void)applicationDidFinishLaunching:(NSNotification *)notification;
Oops! The parameter type mismatch resulted in stack corruption that caused the uninformative crash.
C-Reduce took a long time to run on this example, because building an Xcode project takes longer than compiling a single file, and because a lot of the test cases hit the five-second timeout when running. C-Reduce copies the reduced file back to the original directory on every success, so you can leave it open in a text editor to watch it at work. If you think it's gone far enough, you can ^C it and you'll be left with the partially-reduced file. If you decide you want to run it some more, re-run it and it will continue from there.
Swift
What if you're using Swift and want to reduce a problem? Given the name, I originally thought that C-Reduce only worked on C (and maybe C++, since so many tools do both).
Thankfully, I was wrong. C-Reduce does have some C-specific reduction passes, but it has a lot of others that are relatively language agnostic. It may be less effective, but as long as you can write an interestingness test for your problem, C-Reduce can probably work on it no matter what language you're using.
Let's try it. I found a nice compiler bug on bugs.swift.org. It's already been fixed, but Xcode 9.3's Swift crashes on it and I happen to have that version handy. Here's a slightly modified version of the example from that bug:
import Foundation func crash() { let blah = ProblematicEnum.problematicCase.problematicMethod() NSLog("\(blah)") } enum ProblematicEnum { case first, second, problematicCase func problematicMethod() -> SomeClass { let someVariable: SomeClass switch self { case .first: someVariable = SomeClass() case .second: someVariable = SomeClass() case .problematicCase: someVariable = SomeClass(someParameter: NSObject()) _ = NSObject().description return someVariable // EXC_BAD_ACCESS (simulator: EXC_I386_GPFLT, device: code=1) } let _ = [someVariable] return SomeClass(someParameter: NSObject()) } } class SomeClass: NSObject { override init() {} init(someParameter: NSObject) {} } crash()
Let's try running it with optimizations enabled:
$ swift -O test.swift <unknown>:0: error: fatal error encountered during compilation; please file a bug report with your project and the crash log <unknown>:0: note: Program used external function '__T04test15ProblematicEnumON' which could not be resolved! ...
The interestingness test is fairly simple for this one. Run that command and check the exit code:
swift -O test.swift if [ $? -eq 134 ]; then exit 0 else exit 1 fi
Running C-Reduce on this, it produces the following example:
enum a { case b, c, d func e() -> f { switch self { case .b: 0 case .c: 0 case .d: 0 } return f() } } class f{}
Diving into the actual compiler bug is beyond the scope of this article, but this reduction would be really handy if we actually set out to fix it. We have a considerably simpler test case to work with. We can also infer that there's some interaction between the switch statement and the instantiation of the class, since C-Reduce probably would have removed one of them if it were unnecessary. This would give us some good hints about what might be happening in the compiler to cause this crash.
Conclusion
Blind reduction of a test case is not a very sophisticated debugging technique, but the ability to automate it can make it extremely useful. C-Reduce can be a fantastic addition to your debugging toolbox. It's not suitable for everything, but what is? For problems where it's useful, it can help enormously. It can be a bit tricky to get it to work with multi-file test cases, but some cleverness with the interestingness script solves the problem. Despite the name, it works out of the box on Swift and many other languages, so don't give up on it just because you're not working in C.
That's it for today. Check back next time for more fun, games, and code. Friday Q&A is driven by reader ideas, so if you have something you'd like to see covered here next time or some other time, please send it in!
BTW there’s a typo before the conclusion, “swift statement” should say “switch statement”.
Pom Oak Productions
I am a regular visitor of your blog and appreciate you taking the time to maintain the excellent site.
thanks for the sharing this article
Add your thoughts, post a comment:
Spam and off-topic posts will be deleted without notice. Culprits may be publicly humiliated at my sole discretion.
In fact I spent many hours to single out the bug from our app; C-Reduce might have helped to cut down that time significantly.
How many types of IOC containers are there in spring?
There are basically two types of IOC Containers in Spring: BeanFactory and ApplicationContext.
As you might be knowing, all these ...READ MORE
Well there are basically two types of ...READ MORE
I am trying to place a friend function of a template class named sort in its own header file. I keep getting errors when compiling my test program. I have tried different things to fix it, but nothing works. My experience with template classes is limited as is my knowledge about the finer points of header files. Can anyone see what the problem is? Thanks in advance.
Here are the errors from g++:
In file included from UList.h:15,
from test.cpp:1:
sortBS.h:15: error: variable or field 'sort' declared void
sortBS.h:15: error: 'UList' was not declared in this scope
sortBS.h:15: error: expected primary-expression before '>' token
sortBS.h:15: error: '::UList' has not been declared
sortBS.h:15: error: expected primary-expression before '>' token
sortBS.h:15: error: 'obj' was not declared in this scope
Here is the code from the header file, UList.h, for the template class:
#include <vector>
#include <iostream>
#include "sortBS.h"

template <class T>
class UList{
    public:
        template <class U>
        friend std::ostream& operator << (std::ostream&, const UList<U>&);

        template <class U>
        friend void sort (UList<U>&);

        UList (size_t=10);
        virtual ~UList();
        void insert (const T&);
        bool erase (const T&);
        bool find (const T&) const;
        size_t size() const;
        bool empty() const;

    protected:
        std::vector<T> items;  //list of items
};
Here is the code from the header file, sortBS.h, that contains the definition of sort:
#include "UList.h"

template <class U>
void sort (UList<U>::UList<U>& obj){
    U temp;
    for (int i = 1; i < obj.items.size(); ++i){
        for (int index = 0; index < obj.items.size() - i; ++index){
            if (obj[index] > obj[index +1]){
                temp = obj[index];
                obj[index] = obj[index +1];
                obj[index +1] = temp;
            }
        }
    }
}
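For comparison, here is a stripped-down version of the pattern that does compile: the friend's parameter is declared as plain UList<U>& (not UList<U>::UList<U>&), and forward declarations replace the circular include between UList.h and sortBS.h. This is my own simplified sketch, not the poster's full class:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

template <class T> class UList;               // forward declaration
template <class U> void sort(UList<U>& obj);  // declare before befriending

template <class T>
class UList {
public:
    friend void sort<T>(UList<T>&);           // befriend one specialization
    void insert(const T& item) { items.push_back(item); }
    std::size_t size() const { return items.size(); }
    const T& operator[](std::size_t i) const { return items[i]; }
protected:
    std::vector<T> items;                     // list of items
};

// Bubble sort over the list's underlying vector.
template <class U>
void sort(UList<U>& obj)
{
    for (std::size_t i = 1; i < obj.items.size(); ++i)
        for (std::size_t j = 0; j + i < obj.items.size(); ++j)
            if (obj.items[j] > obj.items[j + 1])
                std::swap(obj.items[j], obj.items[j + 1]);
}
```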
I've found it pretty easy to get distinct objects from a resultset using a
generator:
# generator that returns distinct elements from a sorted iterable
def distinct_iter(seq):
    previous = None
    for item in seq:
        if item != previous:
            yield item
        previous = item
result = distinct_iter( klass.select(constraint, orderBy=klass._idName) )
for obj in result:
    # do something with obj
However, it bothered me that there was a large upfront cost (the time
between the user clicking a button and the data being displayed) if I wanted
to know the number of distinct objects before iterating through the
resultset (e.g. to get the fraction to increment a progress bar.)
So I came up with a "cheat" using _connection.queryAll():
def selectIDs(klass, constraint, distinct=False):
connection = klass._connection
table = klass._table
idName = klass._idName
fromstr = ', '.join( constraint.tablesUsed() )
if distinct:
template = 'select distinct %s.%s from %s where %s'
else:
template = 'select %s.%s from %s where %s'
sql = template % (table, idName, fromstr, constraint)
return connection.queryAll(sql)
rows = selectIDs(klass, constraint, distinct=True)
count = len(rows)
# do something with count
for row in rows:
    obj = klass(*row)
    # do something with obj
For my data the upfront cost for the second method is less than a 10th of
that for the first method. Also, the total time for the second method is
much shorter for cached objects, whereas the first method doesn't seem to
get any faster.
Dave Cook | http://sourceforge.net/p/sqlobject/mailman/message/7523024/ | CC-MAIN-2014-42 | refinedweb | 238 | 54.73 |
Go Straight To The Source with CTrace
It is easy to become dependent on debuggers. They allow us to examine variables and to control flow execution as our applications run. But what can you do when your application isn't working and the debugger can't help you? Some applications, by nature, make using a debugger impractical; multithreaded applications fall into this category. Leroy Xavier, the author of LinuxThreads says, "Debuggers are not really effective for subtle concurrency problems, because they disrupt the program too much."
Consider a computer telephony application with a separate thread running for each telephone call. Telephone protocols have timeout conditions associated with them. When you pick up your telephone receiver, you get a dial tone and then you start dialing the number. But, if you fail to dial within a specified timeframe, the call is made null and void by the protocol, and your dial tone is history. This could happen easily if you set a debugger breakpoint directly after the dial tone begins. In this situation, your use of the debugger has caused an error condition.
In addition, you likely want to isolate and examine a particular telephone call, in this case a separately running thread, from the others. A trace/debug library can be a useful tool in cases such as these, allowing variable contents and program flow to be output at runtime. It also would be nice for the trace library to incur minimal overhead at times when the application is not being traced.
My article details the use of an open-source multithreaded trace/debug library called CTrace. It also presents a method of remotely tracing a running application by using the SSH protocol.
CTrace provides a set of trace output macros to help with debugging C applications. A brief description of the trace output macros follows:
TRC_ENTER - display procedure/function entry along with argument values
TRC_VOID_RETURN - display and perform procedure return
TRC_RETURN - display and perform function return
TRC_PRINT - equivalent to printf
TRC_ERROR - printf error message
The first three macros provide a primitive simple stack trace; the fourth is equivalent to printf; and the fifth is equivalent to printf with ERROR pre-pended. So why not use printf then?
CTrace also provides a number of layers of abstraction that can be configured dynamically to control which traces are written to the trace output stream. This enables you to :
distinguish program threads.
distinguish user-defined software units.
distinguish user-defined levels.
enable and disable tracing dynamically, in effect removing the runtime overhead of CTrace.
change the output stream the trace calls are written to.
All of the above can be done at runtime.
When you call one of the trace output macros from your code, you provide it with both explicit and implicit information about the trace. Let's look at an example. The following function foo is given an integer argument n and prints "Hello world" n times:
1 int foo(int n) 2 { 3 TRC_ENTER(UNIT_FOO, TRC1, ("n=%d", n)); 4 5 if(n <= 0){ 6 TRC_ERROR(UNIT_FOO, ("Invalid argument: %d", n)); 7 TRC_RETURN(UNIT_FOO, TRC1, ("%d", 1), 1); 8 } 9 10 int i; 11 for(i=n; i>0; i--){ 12 sleep(1); 13 TRC_PRINT(UNIT_FOO, TRC0, ("Hello world[%d]", i)); 14 } 15 16 TRC_RETURN(UNIT_FOO, TRC1, ("%d", 0), 0); 17 }
Look at the TRC_ENTER trace on line 3. The TRC_ENTER macro's job is to display entry into a function along with the function arguments. This particular trace belongs to the logical software unit UNIT_FOO. In fact, all of the trace calls in this function belong to UNIT_FOO. If tracing is turned on within CTrace for UNIT_FOO, all of these traces are written to the trace output stream.
The TRC_ENTER trace on line 3 also belongs to trace level TRC1, as do both TRC_RETURN traces (lines 7 and 16). The TRC_RETURN macro's job is to display and perform function return. Combining TRC_ENTER and TRC_RETURN gives you a primitive stack trace. I am using a level scheme whereby TRC1 is reserved for the stack trace. Here is the output from running foo with n=5 and trace level TRC1 set:
[calsa@trapper foo]$ ./foo 5 07/09/04-16:24:31:main.c:10:./foo: enter foo(n=5) 07/09/04-16:24:32:main.c:23:./foo: return foo(0) [calsa@trapper foo]$
I have reserved trace level TRC0 for TRC_PRINT traces, as you can see from line 13. Here is the output from running foo with n=5 and trace level TRC0 set:
[calsa@trapper foo]$ ./foo 5 07/09/04-16:28:17:main.c:20:./foo: Hello world[5] 07/09/04-16:28:18:main.c:20:./foo: Hello world[4] 07/09/04-16:28:19:main.c:20:./foo: Hello world[3] 07/09/04-16:28:20:main.c:20:./foo: Hello world[2] 07/09/04-16:28:21:main.c:20:./foo: Hello world[1] [calsa@trapper foo]$
Finally, here is the output of running foo with all trace levels set (level=TRC_ALL):
[calsa@trapper foo]$ ./foo 5 07/09/04-16:30:10:main.c:10:./foo: enter foo(n=5) 07/09/04-16:30:11:main.c:20:./foo: Hello world[5] 07/09/04-16:30:12:main.c:20:./foo: Hello world[4] 07/09/04-16:30:13:main.c:20:./foo: Hello world[3] 07/09/04-16:30:14:main.c:20:./foo: Hello world[2] 07/09/04-16:30:15:main.c:20:./foo: Hello world[1] 07/09/04-16:30:15:main.c:23:./foo: return foo(0) [calsa@trapper foo]$
The TRC_ERROR macro on line 6 implicitly belongs to level TRC_ERR, so it doesn't require you to pass a level argument to it. Once your program starts to become stable, you might set TRC_ERR as the only trace level, so you are reporting errors only. To be complete, I set TRC_ERR as the sole trace level and run the program with a negative argument:
[calsa@trapper foo]$ ./foo -5 07/09/04-16:40:12:main.c:13:./foo: ERROR in fn foo: Invalid argument: -5 [calsa@trapper foo]$
The trace output in the above examples mostly is self-explanatory. Here is the field by field breakdown:
<date>-<time>:<source>:<line>:<thread> <indented trace output>
Where did CTrace come up with the thread name foo? I have added the main thread of this single-threaded program to CTrace and named it after the executable foo(argv[0]). CTrace manages a collection of data structures representing each program thread. CTrace can identify the calling thread of the trace by calling pthread_self().
Let's convert foo into a multithreaded program by adding a new thread, bar. The bar thread mimics foo but prints "Goodbye cruel world" instead of "Hello world". I want bar to belong to a new logical software unit UNIT_BAR, which I will define. The two threads run concurrently:
#include <pthread.h> #include <ctrace.h> #define UNIT_MAX 100 #define UNIT_FOO 1 #define UNIT_BAR 2 int foo(int n) { TRC_ENTER(UNIT_FOO, TRC1, ("n=%d", n)); if(n <= 0){ TRC_ERROR(UNIT_FOO, ("Invalid argument: %d", n)); TRC_RETURN(UNIT_FOO, TRC1, ("%d", 1), 1); } int i; for(i=n; i>0; i--){ sleep(1); TRC_PRINT(UNIT_FOO, TRC0, ("Hello world[%d]", i)); } TRC_RETURN(UNIT_FOO, TRC1, ("%d", 0), 0); } void *bar(void *arg) { int n = *(int *)arg; TRC_ADD_THREAD("bar", 0); TRC_ENTER(UNIT_BAR, TRC1, ("arg=0x%x", arg)); if(n <= 0){ TRC_ERROR(UNIT_BAR, ("Invalid argument: %d", n)); TRC_RETURN(UNIT_BAR, TRC1, ("0x%x", NULL), NULL); } int i; for(i=n; i>0; i--){ sleep(1); TRC_PRINT(UNIT_BAR, TRC0, ("Goodbye cruel world[%d]", i)); } TRC_RETURN(UNIT_BAR, TRC1, ("0x%x", NULL), NULL); } int main(int argc, char **argv) { int n; pthread_t tid; if(argc < 2){ printf("usage: foo <num_msgs>\n"); exit(1); } TRC_INIT(NULL, TRC_ENABLED, TRC_ON, TRC_ALL, UNIT_MAX, 0); TRC_ADD_THREAD(argv[0], 0); n = atoi(argv[1]); pthread_create(&tid, NULL, bar, &n); foo(n); pthread_join(tid, 0); TRC_REMOVE_THREAD(tid); TRC_REMOVE_THREAD(pthread_self()); TRC_END(); }
As you can see, you need to initialize CTrace with TRC_INIT before using it. You also need to add each thread you want traced with the TRC_ADD_THREAD macro, including the main() thread of the program. If you don't add a thread to CTrace, it never will output traces for that thread. Don't forget to remove any thread you add after you have finished with it. Use TRC_REMOVE_THREAD to do this. Here is the output of foo now that we have added bar:
[calsa@trapper foobar]$ ./foo 5 07/09/04-17:05:49:main.c:31:bar: enter bar(arg=0xbfffe794) 07/09/04-17:05:49:main.c:10:./foo: enter foo(n=5) 07/09/04-17:05:50:main.c:20:./foo: Hello world[5] 07/09/04-17:05:50:main.c:41:bar: Goodbye cruel world[5] 07/09/04-17:05:51:main.c:20:./foo: Hello world[4] 07/09/04-17:05:51:main.c:41:bar: Goodbye cruel world[4] 07/09/04-17:05:52:main.c:20:./foo: Hello world[3] 07/09/04-17:05:52:main.c:41:bar: Goodbye cruel world[3] 07/09/04-17:05:53:main.c:20:./foo: Hello world[2] 07/09/04-17:05:53:main.c:41:bar: Goodbye cruel world[2] 07/09/04-17:05:54:main.c:20:./foo: Hello world[1] 07/09/04-17:05:54:main.c:23:./foo: return foo(0) 07/09/04-17:05:54:main.c:41:bar: Goodbye cruel world[1] 07/09/04-17:05:54:main.c:44:bar: return bar(0x0) [calsa@trapper foobar]$
The output shows how easy it is to distinguish between threads.
GNU nana
What are the key difference(s) with GNU
nana?
(GNU nana may not work anymore with recent gcc compilers.)
Xavier
It's Xavier Leroy, not the other way around.
Re: Xavier
Sorry, will try and get that corrected. | http://www.linuxjournal.com/article/7686 | CC-MAIN-2017-39 | refinedweb | 1,670 | 57.67 |
App-specific Settings
I'm sure there's quite a few tutorials on this, and I'm sure what I have to say is nothing new to many people, but here is the pattern I follow, now, for app-specific settings in my distributable Django apps.
Basically, I use a settings.py in my app, in which it tries to grab its values from django.conf.settings using getattr.
The advantages of this approach are:
- Self documenting code Anyone wants to know what settings my app has, and their defaults, they're clearly apparent.
- Central definitions. You only have to look at one place to find all the settings.
- Prevents circular imports. This is what actually triggered me to move to this in the first place.
Here is an example taken from gnocchi-cms:
from django.conf import settings import os.path # For Views PAGE_CACHE = getattr(settings, 'PAGE_CACHE', 60*60*24) DEFAULT_TEMPLATE = getattr(settings, 'DEFAULT_TEMPLATE', 'default.html') # For Admin STATIC_URL = getattr(settings, 'STATIC_URL', settings.MEDIA_URL) CODEMIRROR = getattr(settings, 'CODEMIRROR', os.path.join(STATIC_URL, 'codemirror')) WYMEDITOR = getattr(settings, 'WYMEDITOR', os.path.join(STATIC_URL, 'wymeditor')) # For Context Variables CV_NAMESPACE = getattr(settings, 'CV_NAMESPACE', 'cv') CV_CACHE_TIME = getattr(settings, 'CV_CACHE_TIME', 24*60*60) # Sites Middleware IGNORE_WWW_ZONE = getattr(settings, 'IGNORE_WWW_ZONE', True) IGNORE_SERVER_PORT = getattr(settings, 'IGNORE_SERVER_PORT', True)
Now, just looking at this code it's immediately clear what all the settings are, their defaults, and even which parts of the code they're for.
Addendum and Caveat
I discovered today, in my first Django 1.3 site, that settings can trick you.
It looks like it will return None to getattr if it's an attribute it knows (the one that got me was STATIC_URL).
My work around in gnocchi-cms has been:
STATIC_URL = getattr(settings, 'STATIC_URL') if STATIC_URL is None: STATIC_URL = settings.MEDIA_URL | http://musings.tinbrain.net/blog/2011/mar/17/app-specific-settings/ | CC-MAIN-2018-22 | refinedweb | 297 | 57.77 |
To deploy a Next.js application under a sub-path of a domain you can use the
basePath config option.
basePath allows you to set a path prefix for the application. For example, to use
/docs instead of
/ (the default), open
next.config.js and add the
basePath config:
module.exports = { basePath: '/docs', }
Note: this value must be set at build time and can not be changed without re-building as the value is inlined in the client-side bundles.
When linking to other pages using
next/link and
next/router the
basePath will be automatically applied.
For example, using
/about will automatically become
/docs/about when
basePath is set to
/docs.
export default function HomePage() { return ( <> <Link href="/about"> <a>About Page</a> </Link> </> ) }
Output html:
<a href="/docs/about">About Page</a>
This makes sure that you don't have to change all links in your application when changing the
basePath value.
When using the
next/image component, you will need to add the
basePath in front of
src.
For example, using
/docs/me.png will properly serve your image when
basePath is set to
/docs.
import Image from 'next/image' function Home() { return ( <> <h1>My Homepage</h1> <Image src="/docs/me.png" alt="Picture of the author" width={500} height={500} /> <p>Welcome to my homepage!</p> </> ) } export default Home | https://nextjs.org/docs/api-reference/next.config.js/basepath | CC-MAIN-2022-40 | refinedweb | 223 | 58.58 |
Sending Emails with RSendAs
From Nokia Developer Wiki
Article Metadata
Tested with
Devices(s): Nokia E61i, Nokia C7-00Platform Security
Capabilities: NetworkServicesArticle
Keywords: messages
Created: everyourgokul (24 Apr 2007)
Reviewed: Rostgrm
Last edited: hamishwillee (14 Jun 2013)
Code for sending emails
For sending emails the following code can be used.
Warnings
- Common:
- setup at least one email account or you will have KErrNotFound leaves
- Devices:
- sends well since 9.1 without capabilities
- on Belle devices and above the message is saved to Drafts. If you add NetworkServices it will be sent succesfully.
- Emulator prerequisites
- be sure you have the file at <SDK_PATH>\Epoc32\winscw\c\splash.bmp
Header Required:
#include <rsendas.h>
#include <rsendasmessage.h>
#include <senduiconsts.h>
Library needed:
LIBRARY sendas2.lib
Source File:
RSendAs session;
User::LeaveIfError( session.Connect() );
CleanupClosePushL( session );
RSendAsMessage sendAsMessage;
sendAsMessage.CreateL( session, KSenduiMtmSmtpUid );
CleanupClosePushL( sendAsMessage );
sendAsMessage.SetSubjectL( _L("Welcome back to symbian") );
//adding 'TO' field
sendAsMessage.AddRecipientL( _L("you@me.com"), RSendAsMessage::ESendAsRecipientTo );
sendAsMessage.SetBodyTextL( _L("somebody@world.com") );
TRequestStatus status;
// adding attachments. Be sure file exists :)
sendAsMessage.AddAttachment( _L("c:\\splash.bmp"), status );
User::WaitForRequest( status );
sendAsMessage.SendMessageAndCloseL();
CleanupStack::Pop( &sendAsMessage );
CleanupStack::PopAndDestroy( &session );
The example in original paper was remade from book 'Symbian OS Communications Programming', author Iain Campbell, published in 2007
Vishal@Aggarwal - Getting System error(-1)
Hello, I am getting System error(-1) while running this code on Emulator. I am using carbide c++ v2.7 with s60_3rd edition sdk .. plz help me how to so that.thanks
Vishal@Aggarwal 09:49, 5 July 2012 (EEST)
Hamishwillee - DebuggingThis could mean virtually anything - its a not found error. You need to find out what isn't found at runtime. As this was written in 2007 it is likely the author won't respond, so for further help I suggest you check the developer library and also ask for help on the discussion boards.
hamishwillee 05:06, 6 August 2012 (EEST)
Hamishwillee - Note, this has been updated by rostgrm and should now be compatible
RegardsHamish
hamishwillee 02:19, 21 January 2013 (EET)
Aryan549 - No errors and no output
Hi,
I have configured an email account and applied above code but, couldn't succeeded. I didn't get any exceptions code ran successfully but no output. Is there any code which i need to add to create a session. Please help us in configuring this email setup.
Thanks in advance.
aryan549 (talk) 08:41, 24 October 2013 (EEST)
Hamishwillee - Suggest you cross link to discussion board
Post on wiki are seen by smaller number of people than discussion boards. I'd post request on symbian C++ board with link here.
RegardsHamish
hamishwillee (talk) 09:35, 28 October 2013 (EET) | http://developer.nokia.com/community/wiki/Sending_Emails_with_RSendAs | CC-MAIN-2014-15 | refinedweb | 446 | 50.43 |
SVN - Review Changes
Jerry already added array.c file to the repository. Tom also checks out the latest code and starts working.
[tom@CentOS ~]$ svn co --username=tom
Above command will produce the following result.
A project_repo/trunk A project_repo/trunk/array.c A project_repo/branches A project_repo/tags Checked out revision 2.
But, he found that someone has already added the code. So he is curious about who did that and he checks the log message to see more details using the following command:
[tom@CentOS trunk]$ svn log
Above command will produce the following result.
------------------------------------------------------------------------ r2 | jerry | 2013-08-17 20:40:43 +0530 (Sat, 17 Aug 2013) | 1 line Initial commit ------------------------------------------------------------------------ r1 | jerry | 2013-08-04 23:43:08 +0530 (Sun, 04 Aug 2013) | 1 line Create trunk, branches, tags directory structure ------------------------------------------------------------------------
When Tom observes Jerry’s code, he immediately notices a bug in that. Jerry was not checking for array overflow, which could cause serious problems. So Tom decides to fix this problem. After modification, array.c will look like this.
#include <stdio.h> #define MAX 16 int main(void) { int i, n, arr[MAX]; printf("Enter the total number of elements: "); scanf("%d", &n); /* handle array overflow condition */ if (n > MAX) { fprintf(stderr, "Number of elements must be less than %d\n", MAX); return 1; } printf("Enter the elements\n"); for (i = 0; i < n; ++i) scanf("%d", &arr[i]); printf("Array has following elements\n"); for (i = 0; i < n; ++i) printf("|%d| ", arr[i]); printf("\n"); return 0; }
Tom wants to use the status operation to see the pending change-list.
[tom@CentOS trunk]$ svn status M array.c
array.c file is modified, that's why Subversion shows M letter before file name. Next Tom compiles and tests his code and it is working fine. Before committing changes, he wants to double-check it by reviewing the changes that he made.
[tom@CentOS trunk]$ svn diff Index: array.c =================================================================== --- array.c (revision 2) +++ array.c (working copy) @@ -9,6 +9,11 @@ printf("Enter the total number of elements: "); scanf("%d", &n); + if (n > MAX) { + fprintf(stderr, "Number of elements must be less than %d\n", MAX); + return 1; + } + printf("Enter the elements\n"); for (i = 0; i < n; ++i)
Tom has added a few lines in the array.c file, that's why Subversion shows + sign before new lines. Now he is ready to commit his changes.
[tom@CentOS trunk]$ svn commit -m "Fix array overflow problem"
The above command will produce the following result.
Sending trunk/array.c Transmitting file data . Committed revision 3.
Tom's changes are successfully committed to the repository. | https://www.tutorialspoint.com/svn/svn_review_changes.htm | CC-MAIN-2017-43 | refinedweb | 442 | 65.62 |
The software concept of “raising the level of abstraction” has improved my skill and creativity in cooking, by teaching me to think about recipe components in terms of their properties and functions. Practicing abstraction-raising in cooking feeds back to help me with coding; for example, keeping me from going astray the other day with the Template Method pattern. This post is more about coding than cooking. The cooking’s a metaphor. (The cake is a lie.)
Abstract Cooking
My skill with cooking grew from rote recipe following to intuitive creation when I started to think of it in terms borrowed from software: raising the level of abstraction.
Consider a week-night skillet dinner. If I told you to heat canola oil in a cast-iron skillet, saute slices of onion and chunks of chicken seasoned with salt and pepper, and toss in bell peppers cut into strips, you could probably follow along and make exactly that. But that’s pretty limiting. If instead I described the process as using a fat to conduct heat for sauteing a savory root, a seasoned protein, and some vegetables, then you could use that as a template, and make a week of dinners without repeating yourself.
Let’s dive into that step of using a fat for conduction, because it is a cool and useful bit of food science. To cook, you need to get heat onto food. The medium can be air, liquid, or fat. Each creates different results, hence the terms baking, boiling, and frying. When you toss cut-up bits of food in a skillet with oil and repeatedly jostle them, you’re sauteing (“saute” means “to jump”), and that oil is playing the role of the fat, which is conducting the heat. If you’ll pardon the metaphor, CanolaOil implements the IFat interface.
It’s useful to think of cooking this way, because if you know the properties of the various cooking fats, you can choose the right IFat implementation for the job. Canola oil is heart-healthy and stands up well to stove-top heat. Olive oil has wonderful health benefits, a bold flavor, and an intriguing green color, but those attributes are pretty much obliterated by heat, so save your expensive EVOO for raw applications like salads and dips. Butter makes everything taste better, browns up beautifully, but is harder on the heart and will burn at a low temperature; temper it with an oil like canola to keep it from burning. Peanut oil stands up to heat like a champ, so it’s popular for deep frying. Armed with this kind of knowledge, I don’t need to check a recipe when I’m cooking; I just think about what I’m trying to accomplish, and choose the right implementation.
Pam Anderson’s How to Cook Without a Book got me thinking about food this way, and Harold McGee’s On Food and Cooking provides a feast of food geekery to fill in all the particulars.
Template Coding
Thinking about food this way, raising the level of abstraction, guides my thinking about code. My meal preparation follows the Template Method pattern, as does a class my teammate and I needed to modify recently.
In this example, our application sends instructions to various external systems. The specifics of how those systems like to hold their conversations vary between systems. However, the series of steps, when phrased in our core business terms, remain the same. You do A, then you do B, then you do C, in whatever way a particular instance likes to do A, B, and C.
Here’s my class with its template method, translated back to the dinner metaphor:
3 public abstract class SkilletDinner
4 {
5 public void Cook()
6 {
7 HeatFat();
8 SauteSavoryRoot();
9 SauteProtein();
10 SauteVegetables();
11 }
12
13 protected abstract void HeatFat();
14 protected abstract void SauteSavoryRoot();
15 protected abstract void SauteProtein();
16 protected abstract void SauteVegetables();
17 }
But lo, I encountered an external system that needed to do one extra little thing. I needed a special step, just for that one instance. Like dinner the other night, where the vegetable was asparagus, the fat was bacon (oh ho!), and the final step was to toss some panko breadcrumbs into the pan to brown and toast and soak up the bacony love.
How do I extend my template method to accommodate an instance-specific step?
One idea that floated by was to make the method virtual, so that we could override it in our special instance. But we still wanted the rest of the steps, so we’d have to copy the whole method into the new instance, just to add a few lines. Also, anybody else could override that template, too, so that when they were told to do A, B, and C, they could totally fib and do nothing of the sort.
3 public abstract class SkilletDinner
4 {
5 public virtual void Cook()
6 {
7 //Note: The Cook template method is now virtual,
8 //and can be overridden in deriving classes.
9 //That’s not good.
10 HeatFat();
11 SauteSavoryRoot();
12 SauteProtein();
13 SauteVegetables();
14 }
15 protected abstract void HeatFat();
16 protected abstract void SauteSavoryRoot();
17 protected abstract void SauteProtein();
18 protected abstract void SauteVegetables();
19 }
20
21 public class LazyDinner : SkilletDinner
22 {
23 public override void Cook()
24 {
25 OrderPizza();
26 //We’re overriding the template and *cheating*!
27 //Although, if it’s Austin’s Pizza,
28 //maybe that’s okay…
29 }
30
31 private void OrderPizza()
32 {
33 //With extra garlic!
34 }
35
36 protected override void HeatFat() { }
37 protected override void SauteSavoryRoot() { }
38 protected override void SauteProtein() { }
39 protected override void SauteVegetables() { }
40 }
That LazyDinner class isn’t really a SkilletDinner at all; its behavior is completely different. No, that option flouts the whole point of the Template Method pattern.
Our better idea was to make one small change to the template method, adding an extension point. That is, a call to a virtual method which in the base implementation does nothing, and can be overridden and told to do stuff in specific cases.
Back to dinner:
3 public abstract class SkilletDinner
4 {
5 public void Cook()
6 {
7 HeatFat();
8 SauteSavoryRoot();
9 SauteProtein();
10 SauteVegetables();
11 AddFinishingTouches(); //Here’s the hook.
12 }
13
14 protected virtual void AddFinishingTouches()
15 {
16 //By default, do nothing.
17 }
18
19 protected abstract void HeatFat();
20 protected abstract void SauteSavoryRoot();
21 protected abstract void SauteProtein();
22 protected abstract void SauteVegetables();
23 }
24
25 public class FancyBaconPankoDinner : SkilletDinner
26 {
27 protected override void AddFinishingTouches()
28 {
29 //In this case, override this extensibility hook:
30 ToastBreadcrumbs();
31 }
32
33 private void ToastBreadcrumbs()
34 {
35 //Toss in the bacon fat; keep ‘em moving.
36 }
37
38 protected override void HeatFat()
39 {
40 //Cook bacon, set aside, drain off some fat.
41 }
42
43 protected override void SauteSavoryRoot()
44 {
45 //Minced garlic, until soft but before browning
46 }
47
48 protected override void SauteProtein()
49 {
50 //How about… tofu that tastes like bacon?
51 }
52
53 protected override void SauteVegetables()
54 {
55 //Asparagus, cut into sections.
56 //Make it bright green and a little crispy.
57 }
58 }
This maintains the contract of the template method, while allowing for special cases. With the right extensibility hooks in place, my dinner preparation happily follows the Open-Closed Principle—open for extension, but closed for modification.
I enjoy the way my various hobbies feed into and reflect upon each other. I hope this post has given you some useful insight into the Template Method pattern, or dinner preparation, or both. Look for synergies amongst your own varied interests; it can be the springboard for some truly breakthrough ideas.
Mmm, bacon… | https://lostechies.com/sharoncichelli/2009/08/29/cooking-up-a-good-template-method/ | CC-MAIN-2017-30 | refinedweb | 1,284 | 57.61 |
Have You Seen This?
I upgraded to the latest IntelliJ IDEA 6.0.5 today. And I'm seeing some weird error messages when running JUnit 4.3.1 unit tests:
Exception in thread "main" java.lang.NoSuchMethodError: org.junit.internal.runners.MethodValidator.validateAllMethods()Ljava/util/List;
This is with JDK 1.5.0_11 on Windows XP Pro SP2.
Is it just me, or are you seeing something similar?
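A NoSuchMethodError like this usually means an older copy of the class — for example, the JUnit jar that the IDE itself bundles — is shadowing the one you expect on the classpath. One way to check is to print which jar a class was actually loaded from. Here's a generic diagnostic sketch (the WhichJar name and the default class are mine, not from any tool):

```java
// Print which jar a class was loaded from, to spot classpath shadowing.
public class WhichJar {
    public static void main(String[] args) throws Exception {
        // Pass the failing class on the command line, e.g.
        //   java WhichJar org.junit.internal.runners.MethodValidator
        String name = args.length > 0 ? args[0] : "java.lang.String";
        Class<?> c = Class.forName(name);
        java.security.CodeSource cs = c.getProtectionDomain().getCodeSource();
        // Bootstrap classes (like java.lang.String) report no code source.
        System.out.println(name + " loaded from: "
                + (cs == null ? "<bootstrap class path>" : cs.getLocation()));
    }
}
```

If the location it prints is somewhere under the IDEA installation directory rather than your project's lib directory, the mystery is solved.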
Can I Turn The Stupid XMLHttpRequest Thing Off
Enough of this AJaX nonsense!
See what it did to my browser:
... all because some webapp (which shall remain nameless) decided it's a good idea to hog the UI thread waiting for some AJaXy response from the server, for more than three minutes!
Oh, Isn't It Cute?
(This post is an exercise in using Creative Commons licensed Free Art.)
Mozilla Developer Center beta: His.
I've been using Mozilla® and Firefox® for many years, on many platforms, and free of charge. It is natural that I would like to help them spreading the "Use Open Standards" message.
[*] Mozilla is a registered trademark of the Mozilla Foundation.
[*] Firefox is a registered trademark of the Mozilla Foundation.
[*] The above art is created by Sean Martell and licensed under the Creative Commons Attribution-NonCommercial 3.0 Unported license.
Printer-Hostile Web Pages? OpenOffice.org To The Rescue
A while back (840 days ago, to be exact) I blogged about the NVU HTML authoring tool and its use in reformatting and printing printer-hostile web pages.
Well, NVU 1.0 was released in June 2005. And I have been using it to good effect. However, for some reason, NVU development stopped, and the download site still shows a binary for Fedora Core 3, even though the current version of Fedora Core is FC6. And I couldn't find NVU in FC6's own yum repository. I could download the distribution-agnostic version, but that would mean I have to untar it and then create a launcher icon for it by myself. It's no big deal, but like recompiling Linux kernels (which I did back in the pre-1.0 days), it gets old really fast.
So, I'm looking for an easier way. Just by luck, I clicked on the OpenOffice.org Writer icon on my quick launch bar, which has always been there since the pre-Fedora days, and pasted some random HTML into it. And guess what? It took it.
So until every web site out there becomes printer-friendly, I'll be using OpenOffice.org Writer to compensate. Here's OOo Writer in action, formatting this page:
Public Alpha Of The Next Big Thing
A nameless Adobe PR firm person: .
Google Guice 1.0 Release Sparks Discussions
(Neal Gafter started the trend of referring to him as "crazy" Bob Lee.)
After hearing about his DI framework 57 days ago, in the context of my "erase erasure" discussion, I'm very glad to see "crazy" Bob Lee release Google Guice 1.0 as open source:
"crazy" Bob Lee:.
The release sparked immediate discussions in the Java blogging community:
- InfoQ
- TheServerSide
- Stuff That Happens
- crazybob.org
- OnJava.com
- SpringFramework Support Forum
- Stephen Colebourne's Weblog
- The BileBlog
- Google Code Blog
It also sparked several rounds of discussions among colleagues in the office here. I would imagine similar discussions went on in hundreds of other places where Java developers try to solve real world problems.
Here are some aspects of the discussions that I found interesting:
The Nature of Dependency Injection
I've heard all of the following opinions:
- DI is just the Strategy pattern.
- DI is just the Factory pattern.
- DI is a pattern on its own, useful with several other patterns.
- I prefer the practice of following the DI pattern without using a third party framework.
- I use Spring's DI framework and like the result very much.
Wiring is the term that people use liberally in DI related discussions without too much discrimination. Everybody assumes that everybody else knows what it means, and that its meaning is clear-cut and uniformly agreed.
However, depending on the Who, When, Why, What, How of the wiring, it could be made to mean different things:
- internal component assembly (a Swing programmer wiring a custom Model to a GUI component)
- component configuration (setting logging appenders and levels for components)
- application configuration (selecting to use MySQL vs. Oracle database drivers)
- performance tuning (setting the size of the connection pool)
- end-user preferences (disabling JavaScript in the browser)
While one can do all of the above using Guice (or some other framework like Spring), it is not always productive to mix all these different kinds of wiring together and treat them as the same thing.
Embracing Java 5 Features
Two and a half years after the release of Java 5, I think it is time for everybody but the most conservative ("my system is done and making me tons of money, I'm not touching it") to move on to the new platform.
I also think it is time for new products and new versions of existing products to exhibit their commitment to the new platform by taking advantage of new language features like annotations and generics.
Those organizations and developers who still operate in the 1.4 platform need to feel all the pressure that associating with a legacy platform engenders.
That's why I'm really glad that Guice is unabashedly embracing annotations and generics and all the rest of the Java 5 language features.
Already, the source code of Guice is the bed for innovative idioms of Java generics, as Neal Gafter pointed out here and here.
Other surprises and a-ha's as I was browsing through Guice source include an implementation of an annotation, the use of String.format(), and the <T> T notNull(T t) template.
Guice Is Extremely Easy To Use
Guice does its job with a straightforward user interface. The User's Guide prints out to about 30 pages of easy to follow recipes. My experience with API design tells me that this is not an accident.
I'm not surprised that Eric was able to swap Guice in to replace Spring IOC in a day.
At this point I must confess that I haven't worked on a so called Spring/Hibernate project that was so prevalent in the last couple of years. And my previous experience with an DI/IOC framework consists of me working out the Pico container tutorial (boy.kiss(girl);, cute) and promptly forgot about the details; and multiple attempts at learning the Spring IOC framework, and feeling like hitting a brick wall when it comes to actually write the @#$%ed xml configuration file.
With Guice there is no XML configuration files, and that's a big win in my book.
Guice, Or Something Like It, Should Be In The JDK
About the only thing that makes feel a little bit uncomfortable is the use of Guice specific imports in my application code.
Wouldn't it be nice if I don't have to do a
import com.google.inject.*;but do a
import java.lang.inject.*;instead?
That, and the fear of the dueling frameworks scenario, where I want to use both frameworks A and B while framework A requires Guice 10.7.19 and framework B requires an incompatible Guice 10.8.1, prompt me to suggest that maybe Guice, or something like it, ought to be in the JDK.
"Keep an eye on JSR 299." was the answer.
So here's my eye on JSR 299—Web Beans. It's scope is Java EE rather than Java SE. But features of Java EE have known to migrate to Java SE.
Relation Between DI and Other Patterns
Although Dependency Injection is officially a design pattern now ("says who?" you ask. "Wikipedia. If it's on Wikipedia, it must be true, right?") in many ways it doesn't feel like a design pattern, although I can't pin down exactly in what way this pattern is not like the others.
Viewed in the narrow sense, DI is a mechanism that links a client that makes use of an interface with a concrete implementation of that interface. Many of the GoF patterns uses the abstract interface/concrete implementation paradigm. Yet I don't think it make sense to say that these patterns depend on (or include) a dependency injection subpattern.
Another clue that dependency injection is different is that it is rarely called dependency injection when done without the assistance of an DI/IOC container—it's just initialization code. It's almost like the situation with garbage collection—when done by hand, it's called memory management.
If you stop to think about it, there is a duality between a dependency injector and a garbage collector: one is where objects come from to live, the other is where objects go to die.
How About That Logo?
If you are familiar with my cute logo theory, you should recognize the genius of announcing Guice with the kid with a glass image. Although not an official logo of the project, the picture is attractive enough that it may very well serve the purpose of a logo.
"The first thing I noticed was the image. It's so cute, I have to try the Guice. I took one sip and am hooked!"
This Is A DST Test
The time now is 10:23am CDT.
Eric
Love The New My Yahoo! Beta
Quote Of The Day
Dr. Heinz M. Kabutz (in The Java Specialists' Newsletter): Migrating is migrane and grating rolled into one!
Erasure To Be Erased In Java 7
Peter Ahé: Alex and I are on stage at EclipseCon 2007 waiting to give our presentation, unfortunately this means that we are missing Scott Adams who we can hear from the conference room next to ours.
The "presentation" is about Java SE 7 Language Features. I'm delighted to see that my call to "Erase Erasure" 45 days ago made it into the Java 7 feature list (page 12):
Today.
Java News Brief (JNB): Introduction to Grails
The March issue of the Java News Brief (JNB) is out. Jeff Brown gives an introduction to Grails—the "framework for agile web development using Groovy and Java."
Jeff Brown: Grails is a remarkably flexible framework while maintaining an ease of use that is unparalleled by other web application frameworks targeted for the JVM. Use the exercises in this article as a jump start with Grails and have fun exploring the possibilities.
Software Update: I Don't Want To Restart Now!
Both Firefox and Thunderbird have instituted automatic software update features whereby the browser and email reader periodically check their home sites to download any updates. This is a good thing, in my opinion.
However, both programs has also gotten into the habit of popping up a message box, the moment the download is complete, asking "Hey, new updates are here. You need to restart?" And I can choose either "Now" or "Later."
This is annoying. Just think about it: I've been using the old version of your product for months, and things are just fine. And I start and quit Firefox and Thunderbird all the time, at least several times a day. (With Firefox routinely taking up 200M of memory, and not to mention the AJaX induced CPU load, nowadays, I have to close it down from time to time.) Can the newly downloaded stuff wait quietly for just a few more hours for one of my restarts?
And what if I accidentally clicked on the "Now" button? I'd loose my "place"—all the tabs I have opened, and the form I'm in the middle of filling out will be gone. That'll make me very angry. Enough to write a blog entry bashing your feature. | http://www.weiqigao.com/blog/2007/03.html | crawl-001 | refinedweb | 1,983 | 64.2 |
>>
Print all words occurring in a sentence exactly K times
When it is required to print all the words occurring in a sentence exactly K times, a method is defined that uses the ‘split’ method, ‘remove’ method and the ‘count’ methods. The method is called by passing the required parameters and output is displayed.
Example
Below is a demonstration of the same
def key_freq_words(my_string, K): my_list = list(my_string.split(" ")) for i in my_list: if my_list.count(i) == K: print(i) my_list.remove(i) my_string = "hi there how are you, how are u" K = 2 print("The string is :") print(my_string) print"The repeated words with frequency", " are :" key_freq_words(my_string, K)
Output
The string is : hi there how are you, how are u The repeated words with frequency 2 are : how are
Explanation
A method named ‘key_freq_words’ is defined that takes a string and a key as parameter.
The string is split based on spaces, and assigned to a list.
This list is iterated over, and if the count of an element is equal to the key values, it is displayed on the console.
Once it has been printed, it is removed from the list.
Outside the method, a string is defined, and is displayed on the console.
The value for key is defined.
The method is called by passing the string and the key.
The output is displayed on the console.
- Related Questions & Answers
- Reverse all the words of sentence JavaScript
- Python - Generate all possible permutations of words in a Sentence
- Count substrings with each character occurring at most k times in C++
- Count of Numbers in a Range where digit d occurs exactly K times in C++
- Print all funny words in a string in C++
- C# program to remove all duplicates words from a given sentence
- Java program to remove all duplicates words from a given sentence
- Rearrange Words in a Sentence in C++
- Replace all occurrence of specific words in a sentence based on an array of words in JavaScript
- Count words in a sentence in Python program
- Count palindrome words in a sentence in C++
- Python program to count words in a sentence
- Program to find number of sublists that contains exactly k different words in Python
- Java Program to Print all unique words of a String
- Counting number of words in a sentence in JavaScript | https://www.tutorialspoint.com/print-all-words-occurring-in-a-sentence-exactly-k-times | CC-MAIN-2022-27 | refinedweb | 391 | 60.89 |
01-31-2010 07:51 AM - last edited on 02-01-2010 04:12 AM
I've watched many people struggle with the concept of running a network process, or any blocking process in the Background while still holding up the Ui, and displaying a progress indicator.
There seem to be two common issues
a) you can't block the Event Thread so how do you get the UI processing to stall
b) You need to update the UI with notifications from the background Thread, so you need to have some sort of Observer Pattern or Interface.
To help explain both of these concepts, I have prepared the following sample UiApplication.
This is sample code designed to demonstrate how to overcome these two issues.. What I expect you to do is to create a sample project, add this code to it, compile the project and try the Application. Then play with it, for example, add real networking code. When you understand how this Application resolves the two common issues, then apply these principles to your own program.
Note that this is not simple, and believe me I tried to make it clear as I could. But I also wanted to demonstrate a real situation, that you might actually see in your program. The two don't sit well together, so I have tried to produce something useful rather than something easy to explain but completely useless in the real world.
This is not production code, please do not copy this code, then complain that it does not work in your program.
That said, I am very happy for you to point out bugs and make suggestions for improvement. In fact I would encourage you to do this. If we have a good sample of this, everyone will benefit.
There are 4 classes:
1) PleaseWaitDemo - is just the UiApplication and starts the PleaseWaitDemoScreen Screen, which is included in the source. This is just the test 'rig' so that you run the Sample code.
The popup screen is invoked from a FeildChange Listener - it could just as easily be in a Menu run method.
2) PleaseWaitPopupScreen - this is the core of the processing, this starts the Background Thread as well as displaying the 'Please Wait Popup Screen with progress bar.
There are three 'tricks' in here:
a) When the cancel button is pressed, this immediately tells the network thread to stop, and also then tells the Observer that the Network Thread has been stopped. So while you might think that the Cancel button also should tidy up the popup screen, it does not need to, because the failure of the network thread has told it to do this anyway.
b) The invalidate against the screen will cause the entire screen (the popup screen) to be repainted.
c) Becuase each of the 'observer' methods is being invoked by the Background Thread, they need to use something like
UiApplication.getUiApplication().invokeLater
to 'move' the processing onto the Event Thread. There are other ways of doing this, but I would recommend that you use invokeLater unless you have good reason to do otherwise.
3) NetworkThread - this is the Background Thread that is run. You can change this to vary how long this Thread blocks, and the whether the Thread completes with an error or with a valid response. This simulates a network (http) interaction, because that is what most people try to do.
You could argue that this code does not need the observer... methods because the places that call these methods could invoke the Observer directly. However I would encourage you to leave these in. In a more general implementaion of the Observer Interface, you would use a method like this, as it would have to do this processing for all the Observers (and there could be none).
4) ObserverInterface - this is the interface the Background Thread uses to tell the listening Object (the PleaseWaitPopupScreen) that something has happened. This is not the 'standard Observer Pattern, it is a variation on this. This Interface is specifically designed for a Network Background Thread, so I really should have called this NetworkObserverInterface.
I hope the above, coupled with the comments in the code are enough. If not, please ask for more explanation, I'm happy to give it. As noted, I'll also be interested in improvements or bug fixes. But please don't ask questions like "I used your code in my program and I'm getting ...." If you have a problem with this code, please demonstrate the problem using just this code. If you can't, then that would suggest to me that the problem is in your implementation and you should be able to fix that. Hope that is OK.
Enjoy
/** * Test UiApplication */ import net.rim.device.api.system.*; import net.rim.device.api.ui.*; import net.rim.device.api.ui.component.*; import net.rim.device.api.ui.container.*; public class PleaseWaitDemo extends UiApplication { public static void main(String[] args) { PleaseWaitDemo app = new PleaseWaitDemo(); app.enterEventDispatcher(); } //constructor public PleaseWaitDemo() { //Create a new screen for the application pushScreen(new PleaseWaitDemoScreen()); } } class PleaseWaitDemoScreen extends MainScreen { PleaseWaitPopupScreen _waitScreen = null; LabelField _resultField = new LabelField("Result in here"); public PleaseWaitDemoScreen() { this.setTitle("PleaseWaitDemo"); ButtonField startButton = new ButtonField("Start", ButtonField.FIELD_HCENTER | ButtonField.CONSUME_CLICK); startButton.setChangeListener( new FieldChangeListener() { public void fieldChanged(Field field, int context) { _waitScreen = new PleaseWaitPopupScreen("Please wait", "Wating for test Thread", "dummy URL, not actually used"); int result = _waitScreen.show(); _resultField.setText("Result: " + Integer.toString(result)); } }); this.add(startButton); this.add(_resultField); } }
import net.rim.device.api.ui.UiApplication; import net.rim.device.api.ui.component.*; import net.rim.device.api.ui.container.*; import net.rim.device.api.ui.*; /** * Wait for some Background processing to complete. * Meanwhile, display this popup screen and status updates as supplied. * Also allow the user to Cancel the Thread we are waiting on. */ public class PleaseWaitPopupScreen extends PopupScreen implements ObserverInterface { private String _title; // Title line for Popup private GaugeField _gaugeField = null; // Indicator to user that things are happening private ButtonField _cancelButton = null; // Button user can use to get out private LabelField _statusText = null; private NetworkThread _requestThread = null; private String _requestURL = null; private byte [] response; private int _returnCode = ObserverInterface.CANCELLED; public PleaseWaitPopupScreen(String title, String text, String requestURL) { super(new VerticalFieldManager()); this.add(new LabelField(title, LabelField.FIELD_HCENTER)); this.add(new SeparatorField()); this.add(new RichTextField(text, Field.READONLY)); _gaugeField = new GaugeField(null, 1, 100, 1, GaugeField.NO_TEXT); this.add(_gaugeField); this.add(new SeparatorField()); _cancelButton = new ButtonField("Cancel", ButtonField.FIELD_HCENTER | ButtonField.CONSUME_CLICK); _cancelButton.setChangeListener( new FieldChangeListener() { public void fieldChanged(Field field, int context) { if ( _requestThread != null ) { if ( _requestThread.isAlive() ) { _requestThread.stop(); // This will send us a 'failure' notification } } else { // Something has gone really wrong?! throw new RuntimeException("Oppsss"); } } }); this.add(_cancelButton); _cancelButton.setFocus(); _statusText = new LabelField("Starting"); this.add(_statusText); _requestURL = requestURL; } public int show() { _requestThread = new NetworkThread(_requestURL, this); _requestThread.start(); UiApplication.getUiApplication().pushModalScreen(t
his);his); aitait aitPopupScreen.this); } }); } }aitPopupScreen.this); } }); } }
/** * NetworkThread * * This is just a Thread which the Main UI must wait for. * * In most applications this will be a networking Thread * so this sample has been designed to simulate that. * It is created with a URL and returns a byte array. * But this general 'pattern' applies anywhere the UI needs to wait. * */ import net.rim.device.api.ui.*; import net.rim.device.api.ui.component.*; import net.rim.device.api.ui.container.*; import net.rim.device.api.system.*; import javax.microedition.io.*; import java.io.*; public class NetworkThread extends Thread { private ObserverInterface _ourObserver; private String _targetURL; private boolean _stopRequest = false; public NetworkThread(String requestURL, ObserverInterface observer) { super(); _targetURL = requestURL; _ourObserver = observer; } /** * stop is called if the processing should not continue. */ public void stop() { // Tell our observer observerError(ObserverInterface.CANCELLED, "Cancelled by User"); _stopRequest = true; // Will no longer tell Observer anything this.interrupt(); // Give our Thread a kick } private void observerStatusUpdate(final int status, final String statusString) { if ( !_stopRequest ) { _ourObserver.processStatusUpdate(status, statusString); } } private void observerError(int errorCode, String errorMessage) { if ( !_stopRequest ) { _ourObserver.processError(errorCode, errorMessage); } } private void observerResponse(byte [] reply) { if ( !_stopRequest ) { _ourObserver.processResponse(reply); } } /** * Process the long running or blocking operation in this Thread * Update the Observer as required using * - processStatus, whenever desired * and then one of: * - processError, if there was a problem * - processResponse, if the data was obtained OK */ public void run () { // Following are just test variables, hopefully self eplanatory long stallTime = 10000; // Will stall for this time boolean willFail = false; // false - will finshed OK, true - will fail! 
long updateInterval = 1000; // Update Observer every second long startTime = System.currentTimeMillis(); long finishTime = startTime + stallTime; long currentTime = startTime; observerStatusUpdate(1, "Started"); // Tell user we have started if ( willFail ) { // Fail 1/2 way through finishTime = startTime + stallTime / 2; } while ( !_stopRequest && currentTime < finishTime ) { // THis just simulates the fact that we can update the Observer (Ui) from time to time try { Thread.sleep(updateInterval); } catch (Exception e) { } // Calculate a percentage to tel the Observer currentTime = System.currentTimeMillis(); int percentageFinished = (int) (((currentTime - startTime) * 100l) / stallTime); percentageFinished = Math.min(percentageFinished, 99); // Just so we don't exceed guage value. observerStatusUpdate(percentageFinished, "Processing: " + Integer.toString(percentageFinished)); } observerStatusUpdate(100, "Finished"); // Tell Observer we have finished // Did we finish OK or badly if ( willFail ) { observerError(ObserverInterface.ERROR, "Failed"); } else { observerResponse("Succeeded".getBytes()); } // Make sure we do nothing else _stopRequest = true; _ourObserver = null; } }
/* * ObserverInterface.java * * Please do not think this is an approved implemenation of the Observer Pattern, * It's not. it is just something I have made up. * */ public interface ObserverInterface { public void processStatusUpdate(int status, String statusString); // Observer can be notified by Observed as it is going along, with regular // status updates // Could be used to pass messages back to be displayed, for example a sequence like: // finding Server, logging in, logged in, requested update, update received, logging out.... // Could also be used as in this sample, to pass back a % complete public void processResponse(byte [] responseBytes); // If the processing is successful, response is passed to called using this. public void processError(int errorCode, String errorMessage); // If there is an error, an error indication is passed back using this public static int CANCELLED = -1; public static int ERROR = -2; public static int OK = 0; // These are all ,= 0. We grubbily also use the }
01-31-2010 08:30 AM
Excellent stuff Peter, thanks for sharing. I've just implemented a progress bar in my application, not as cleanly as you do here so I think I will revisit it.
01-31-2010 09:35 AM
Great job Peter, this definitely demonstrates how to make a Please Wait/Progress Bar pop-up. I couldn't find any errors just by looking at it so good job.
02-01-2010 04:17 AM
Thank you guys.
I've updated the text to expland on some of the tricker points in the processing. Hope that hasn't made it more confusing.
02-03-2010 11:28 PM
Hey Peter! Congratulations for the great explanation.
I usually management my important samples and production code using different workspaces, for sure I have to create another workspace called "Peter's made up"
Anyway, on the ObserverInface file you wrote that is not an approved implementation of the Observer Pattern. So, how could you implement the same using the observer pattern restrictedly?
Thank you so much!
02-04-2010 06:18 AM
"how could you implement the same using the observer pattern restrictedly?"
Good question.
I'm not well learned on this, so I think people who more actively use these sorts of things would be better able to answer this.
However for me, there are these differences:
1) ObserverInterface usually involves only 1 method, typically update(), so the Observed (in this case the Network Thread) will only ever call update(). That makes the interface very general. However in this case I know that I'm only ever going to be called back in three cases, [(1) progress report, (2) completed and failed, (3) completed OK], so I have three call backs.
2) There is a debate as to whether you push the data to the Observer (as I do) or tell the Observer that something has changed and that they should pull the data that they want. Again, because I know what data I want in each case, I push the data.
3) Not really an interface issue, but related. The Observed (Network Thread) only supports one Observer, so there is no 'register Observer' method.
4) The update method usually indicates the object being Observed, so that the Observer can use the same update() method for multiple Observed objects. I know that the only object that my Observer is watching is the Network Thread, so I don't need to supply that.
Sorry I'm not going to convert this to use the normal Observer/ObserverInterface. It doesn't exist in J2ME and so you have to create your own anyway. I have chosen to create a specific implementation that suits my needs, and would not change it to use the true Observer pattern. Hope that is OK.
If you would like to try re-writing this to use a normal ObserverInterface then please post it. But I suspect it will be longer and more complicated!
04-19-2010 11:07 PM
I know this is an old thread... but THANK YOU PETER!
If I hadn't read through your example I would never have realized my thread was blocking because I was trying to update the UI from the background thread without even thinking about it (by doing a set text).
You are one of a number of helpful people on this board that make it possible to do this by myself.. thanks to you and the many people like you who are providing help.
07-15-2010 07:58 PM
Good explanation.
Thanks for this post. Kudos to you.
09-08-2010 07:39 AM
There is updated and better explained similar code in the KB now, see here:
08-09-2011 08:40 AM
Thanks Peter,
But can you explain how this code I can use for BlueTooth searching process..
And in progreess will show how many devices found etc..... | http://supportforums.blackberry.com/t5/Java-Development/Sample-Please-Wait-or-Progress-Bar-Code/m-p/436688 | crawl-003 | refinedweb | 2,342 | 54.83 |
Import modules properly
From HaskellWiki
- This page addresses an aspect of Haskell style, which is to some extent a matter of taste. Just pick what you find appropriate for you and ignore the rest..
1 Exception from the rule
Since the Prelude is intended to be fixed for the future,it should be safe to use the
Actually if you do not mention Prelude it will be imported anonymously.
2 Clashing of abbreviations
In Haskell it is possible to use the same abbreviation for different modules:
import qualified Data.List as List import qualified Data.List.Extra as List
This is discouraged for the same reasons as above:
Stylistic reason:The identifier
You have to check these modules in order to find it out.
Compatibility reason:The function
3 Counter-arguments. | http://www.haskell.org/haskellwiki/index.php?title=Import_modules_properly&oldid=28603 | CC-MAIN-2013-48 | refinedweb | 131 | 53.51 |
Created on 2010-08-10 02:03 by denversc, last changed 2010-11-01 14:22 by bethard. This issue is now closed.
If the COLUMNS environment variable is set to a value other than 80 then test_argparse.py yields 80 failures. The value of the COLUMNS environment variable affects line wrapping of the help output and the test cases assume line wraps at 80 columns. So setting COLUMNS=160, for example, then running the tests will reproduce the failures. The fix is to invoke: os.environ["COLUMNS"] = "80".
A proposed patch for py3k/Lib/test/test_argparse.py is attached (test_argparse.py.COLUMNS.patch)
Shouldn't the tests calculate line wrapping based on what is set, rather than brute forcing it to be 80?
The best solution would be to make sure that a few different column widths are tested. However, in the meantime, the tests do assume 80 columns, so I think it's correct to specify that using os.environ as suggested.
One problem with the proposed patch is that it makes the change globally, and we should be restoring the original setting after the end of the argparse tests.
There's a handy utility for this in test.support: EnvironmentVarGuard.
I agree with Steven: for the current tests we should specify (and restore) 80 columns. We might want to add additional tests at different column widths.
That is a very good point, bethard, that setting os.environ["COLUMNS"] in my suggested patch (test_argparse.py.COLUMNS.patch) is global and should be test-local. I've attached an updated patch (test_argparse.py.COLUMNS.update1.patch) which uses setUp() and tearDown() to prepare and restore the COLUMNS environment variable. The one difference from my previous patch is that instead of setting the COLUMNS environment variable to 80 I just unset it.
I also considered EnvironmentVarGuard, as suggested by r.david.murray, but I'm not sure it's designed for global setting of environment variables. EnvironmentVarGuard appears to have been designed to be used as a context manager for an individual test, but the COLUMNS environment variable needs to be adjusted for *every* test.
Your code is fine (though to my tastes a bit verbose...if it were me I'd just put the code in the setUp and tearDown methods and hardcode 'COLUMNS' (it isn't like the name COLUMNS is going to change)...but that's just personal style).
The EnviormentVarGuard version would look like this (untested):
def setUp(self):
self.guard = EnvironmentVarGuard()
self.environ = self.guard.__enter__()
# Current tests expect 80 column terminal width.
self.environ['COLUMNS'] = 80
def tearDown(self):
self.guard.__exit__(None, None, None)
You could of course delete COLUMNS as you did, but I thought setting it to 80 would be more explicit.
Another comment about the patch: by inspection it appears that adding setUp and tearDown to TestCase isn't enough, since subclasses and mixins define those without calling the superclass versions.
Thanks for the input, r.david.murray. I've updated my patch and attached it to take into consideration your comments: test_argparse.py.COLUMNS.update2.patch. The updated patch uses EnviormentVarGuard as suggested, except that it slightly tweaks EnviormentVarGuard so the context manager protocol methods don't have to be invoked directly.
It was also pointed out that "adding setUp and tearDown to TestCase isn't enough, since subclasses and mixins define those without calling the superclass versions", which is true. However, the tests that override setUp() happen to be those that don't depend on the COLUMNS environment variable.
I don't think it is worthwhile to jump through hoops to avoid calling the special methods. Your patch also creates an unnecessary dependency on the internal implementation of EnvironmentVarGuard (ie: the fact that currently __enter__ happens to return self...this is *not* part of EnvironmentVarGuard's interface contract).
As noted in issue10235, this is responsible for buildbot failures:
Fixed with a variant of Denver's last patch in r86080 for 3.X and r86083 for 2.7. | https://bugs.python.org/issue9553 | CC-MAIN-2021-49 | refinedweb | 669 | 57.98 |
I thought I would share some handy little Math utilities to help with your projects.
The first one I will present is based on the Sigmoid curve and is very handy for all sorts of applications. It is better known as ease in and ease out.
If you are wanting to transition from one effect to another you want it to start slowly, pick up speed and then slow down and eventually stop. CSS animations use it all the time but it is nice to have it in code as well.
So to the function.
function easeInOut(x,p){ var xx = Math.pow(x,p); return (xx/(xx+Math.pow((1-x),p); }
That's it. Simple and fast.
x is in a range from 0 <= x <= 1
p is the power and can be any number except 0. It controls the rate of change or the rate at which to ease in and out.
p = 1 creates a linear change over time.
p > 1 ease in slowly and ease out slowly. The greater P the slower it eases in and out. p=2 is what the CSS ease is I believe.
0 < p < 1 starts fast then slows to the half way point and then speeds up to the end.
Negative p reverses the function
Graph of function with x left to right, the return on the y axis, shown different values of p.
The function.
So how to use this function.
Say you want to scroll from one part of the page to the next using this function.
var offsetStart = window.pageYOffset; // current location. var offsetEnd = 512; // new location var timeToMove = 2; // time in seconds to make the move. This can be a constant. As I use setTimeout instead of setInterval the actual time to do the transition may be out by a few micro seconds. use setInterval if you need more accurate timing. var animationInterval = 16; // time in micro seconds. this can be a constant var transitionprogress = 0; function autoScrollPage(){ transitionprogress += 1/((timeToMove*1000)/animationInterval); // I always use a constant here. timeToMove could also be in micro seconds. if(transitionprogress >= 1){ transitionprogress = 1; }else{ setTimeout(autoScrollPage,animationInterval); // set up the next callback if we have not reached the end } var y = easeInOut(transitionprogress,2); // in this case we use 2 can be any value except 0 // y is in the range 0-1 inclusive so we need to convert it to the scale we need. y = (offsetEnd -offsetStart) *y) +offsetStart; // get the differance between start and end multiply by y and then add the start to give the new screen location. y = Math.Round(y); // make it a whole number as most browsers prefer an integer window.scrollTo(0, y); // set the window page offset and your done. } // call this function to start function startSmoothScroll(destination){ offsetStart = window.pageYOffset; // get current scoll pos transitionprogress = 0; // reset the progress setTimeout(autoScrollPage,animationInterval); // start the animation }
This code is just an example of how to use it. it can be used in many ways. I sometime use it in gradients for fill styles to give a better looking gradient. It adds a very nice natural feel to animations, transitions, counters, your imagination is the only limit.
A modification on this function is the ease to bounce. This creates a cartoon like bounce at the end, or if reversed that step back then rush forward cartoon style motion.
The power is set to 2 for this one as it does not adapt to the constraints of being at 0 for x = 0 and not at 1 for x = 1 without the power being two.
function EaseInOutBounce(x) {
    return 0.9 * (Math.pow(1.1 * x, 2) / (Math.pow(x, 2) + Math.pow(1 - 1.3 * x, 2)));
}
So I hope some of you have found this useful. Please let me know, and I will continue to provide a series of helpful math functions for your applications.
lora.nvram_restore() is not working
- Harish kumar last edited by robert-hh
Hi Everyone !
I tried to implement lora.nvram_save() and lora.nvram_restore() in my program. lora.nvram_save() works properly, but I have a problem with lora.nvram_restore(): my program stops after the lora.nvram_restore() call and then does not send any uplink. I am using a LoPy as a repeater. If someone knows, please help me out. I will post my program here.
from network import LoRa
import socket
import time
import binascii

store = ["26012FF8", "f82f0126", "f82f0136"]

a = time.localtime()
print(a)
hour = a[1]
minutes = a[2]

lora = LoRa(mode=LoRa.LORAWAN, region=LoRa.EU868)

# create an OTAA authentication connection
app_eui = binascii.unhexlify('0004A30000000000')
app_key = binascii.unhexlify('DE71C5CA71E05F92F000000000000000')

# join a network using LoRa
lora.join(activation=LoRa.OTAA, auth=(app_eui, app_key), timeout=0)

# wait until it joins
while not lora.has_joined():
    time.sleep(2.5)
    print('Not yet joined...')

# create a lora socket
s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)

# set the lorawan datarate
s.setsockopt(socket.SOL_LORA, socket.SO_DR, 5)

# make the socket blocking
# (waits for the data to be sent and for the 2 receive windows to expire)
s.setblocking(True)

#time.sleep(3000)
s.send(bytes([0x01, 0x02, 0x03]))
time.sleep(3)
s.send(bytes([0x01, 0x02, 0x03]))
lora.nvram_save()
print('saved')

while True:
    lora = LoRa(mode=LoRa.LORA, region=LoRa.EU868, sf=7, frequency=868500000)
    # create a raw LoRa socket
    s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
    s.setblocking(True)
    rx_data = s.recv(64)
    dev_addr = rx_data[1:5]
    print("Dev-Addr: ", binascii.hexlify(dev_addr))
    x = 0
    for x in range(0, 3):
        if binascii.unhexlify(store[x]) == dev_addr:
            print('OK')
            # get any data received...
            #rx_data = s.recv(64)
            print(rx_data)
            # sending data to server with sf = 12
            lora = LoRa(mode=LoRa.LORAWAN, region=LoRa.EU868, sf=12)
            # send some data
            s.send(rx_data)
            print('data sent')
        else:
            print('Not OK')
    if hour == 5 and minutes == 0:
        print('{0}:{1}'.format(hour, minutes))
        lora = LoRa(mode=LoRa.LORAWAN)
        lora.nvram_restore()
        # doing uplink
        s.send(bytes([0x01, 0x02, 0x03]))
        # doing downlink
        data = s.recv(64)
        first4 = data[0:4]
        store.append(first4)
        second4 = data[5:9]
        store.append(second4)
        print(data)
- Harish kumar last edited by
@robert-hh Thank you so much for your time. I made some changes and now my program works fine.
@harish-kumar Hello Harish Kumar,
First of all, I tried to understand what you want to achieve. As far as I can guess from the code and the board history of your questions, you want to implement a repeater between your nodes and a LoRaWAN gateway.
a) I assume, that these nodes are configured in Raw LoRa mode.
b) I assume, that you keep a list of known nodes in the repeater, and that this can be extended through downlink messages from the server.
c) For the LoRa Server, all nodes appear under the ID of the repeater, and the ID of the nodes is part of the payload, which is supplied.
I have a few comments on your code:
a) Join has to be performed only once, if you keep the transaction parameters. That is to be done with nvram_save() and nvram_restore().
b) You switch between LoRa Raw and LoRaWAN mode. I do not know whether the WAN parameters are kept when switching to Raw mode, so better do the following:
- call nvram_save() before switching to Raw mode (like at line 48), and
- nvram_restore() like after line 57. The problem is that this will wear out the flash after a while, but once the code works, you may consider better strategies for save and restore.
c) The s.recv() will block until you receive a packet. I do not know whether that is intentional. You could also set a timeout and check whether you received something.
d) The code block after "if hour == 5 and minutes == 0:" will most likely never be executed. First of all, you get the time at the start of your script; unless you start it at exactly 5:00, the numbers are wrong. Secondly, even if you got the actual time before the if, since you wait forever in the s.recv(), it is highly unlikely that this will happen at the right time. And last but not least, the mechanism is unnecessary: the LoRaWAN server will send scheduled downlink messages after any uplink message, so it is much easier to check after any uplink message whether a downlink message arrives with configuration data. Your back-end system may still provide this information once a day, but it can be sent down at any time.
And since downlink messages may get lost, you either send a bunch of them and implement some mechanism to avoid duplicates, or implement another protocol for confirmation of receipt.
This is the GNU C Library Reference Manual, for version 2.21 of the GNU C Library.
Functions that are unsafe to call in certain contexts are annotated with keywords that document their features that make them unsafe to call. AS-Unsafe features in this section indicate the functions are never safe to call when asynchronous signals are enabled. AC-Unsafe features indicate they are never safe to call when asynchronous cancellation is enabled. There are no MT-Unsafe marks in this section.
lock

Functions marked with lock as an AS-Unsafe feature may be interrupted by a signal while holding a non-recursive lock. If the signal handler calls another such function that takes the same lock, the result is a deadlock.

Functions annotated with lock as an AC-Unsafe feature may, if cancelled asynchronously, fail to release a lock that would have been released if their execution had not been interrupted by asynchronous thread cancellation. Once a lock is left taken, attempts to take that lock will block indefinitely.
corrupt

Functions marked with corrupt as an AS-Unsafe feature may corrupt data structures and misbehave when they interrupt, or are interrupted by, another such function. Unlike functions marked with lock, these take recursive locks to avoid MT-Safety problems, but this is not enough to stop a signal handler from observing a partially-updated data structure. Further corruption may arise from the interrupted function’s failure to notice updates made by signal handlers.

Functions marked with corrupt as an AC-Unsafe feature may leave data structures in a corrupt, partially updated state. Subsequent uses of the data structure may misbehave.
heap

Functions marked with heap may call heap memory management functions from the malloc/free family of functions and are only as safe as those functions. This note is thus equivalent to:

| AS-Unsafe lock | AC-Unsafe lock fd mem |
dlopen

Functions marked with dlopen use the dynamic loader to load shared libraries into the current execution image. This involves opening files, mapping them into memory, allocating additional memory, resolving symbols, applying relocations and more, all of this while holding internal dynamic loader locks.

The locks are enough for these functions to be AS- and AC-Unsafe, but other issues may arise. At present this is a placeholder for all potential safety issues raised by dlopen.
plugin

Functions annotated with plugin may run code from plugins that may be external to the GNU C Library. Such plugin functions are assumed to be MT-Safe, AS-Unsafe and AC-Unsafe. Examples of such plugins are stack unwinding libraries, name service switch (NSS) and character set conversion (iconv) back-ends.

Although the plugins mentioned as examples are all brought in by means of dlopen, the plugin keyword does not imply any direct involvement of the dynamic loader or the libdl interfaces; those are covered by dlopen. For example, if one function loads a module and finds the addresses of some of its functions, while another just calls those already-resolved functions, the former will be marked with dlopen, whereas the latter will get the plugin. When a single function takes all of these actions, then it gets both marks.
i18n

Functions marked with i18n may call internationalization functions of the gettext family and will be only as safe as those functions. This note is thus equivalent to:

| MT-Safe env | AS-Unsafe corrupt heap dlopen | AC-Unsafe corrupt |
timer

Functions marked with timer use the alarm function or similar to set a time-out for a system call or a long-running operation. In a multi-threaded program, there is a risk that the time-out signal will be delivered to a different thread, thus failing to interrupt the intended thread. Besides being MT-Unsafe, such functions are always AS-Unsafe, because calling them in signal handlers may interfere with timers set in the interrupted code, and AC-Unsafe, because there is no safe way to guarantee an earlier timer will be reset in case of asynchronous cancellation.
The System V Interface Description (SVID) is a document describing the AT&T Unix System V operating system. It is to some extent a superset of the POSIX standard (see POSIX).
The GNU C Library defines most of the facilities required by the SVID that are not also required by the ISO C or POSIX standards, for compatibility with System V Unix and other Unix systems (such as SunOS) which include these facilities.
The supported facilities from System V include the methods for inter-process communication and shared memory, the hsearch and drand48 families of functions, fmtmsg and several of the mathematical functions.
The X/Open Portability Guide, published by the X/Open Company, Ltd., is a more general standard than POSIX. X/Open owns the Unix copyright and the XPG specifies the requirements for systems which are intended to be a Unix system.
The GNU C Library complies to the X/Open Portability Guide, Issue 4.2, with all extensions common to XSI (X/Open System Interface) compliant systems and also all X/Open UNIX extensions.
The additions on top of POSIX are mainly derived from functionality available in System V and BSD systems. Some of the really bad mistakes in System V systems were corrected, though. Since fulfilling the XPG standard with the Unix extensions is a precondition for getting the Unix brand, chances are good that the functionality is available on commercial systems.
This section describes some of the practical issues involved in using the GNU C Library.
Most library functions return a special value to indicate that they have failed. The special value is typically -1, a null pointer, or a constant such as EOF that is defined for that purpose. But this return value tells you only that an error has occurred. To find out what kind of error it was, you need to look at the error code stored in the variable errno. This variable is declared in the header file errno.h.

The variable errno contains the system error number. You can change the value of errno.

Since errno is declared volatile, it might be changed asynchronously by a signal handler; see Defining Handlers. However, a properly written signal handler saves and restores the value of errno, so you generally do not need to worry about this possibility except when writing signal handlers.

The initial value of errno at program startup is zero. Many library functions are guaranteed to set it to certain nonzero values when they encounter certain kinds of errors. These error conditions are listed for each function. These functions do not change errno when they succeed; thus, the value of errno after a successful call is not necessarily zero, and you should not use errno to determine whether a call failed. The proper way to do that is documented for each function. If the call failed, you can examine errno.
Many library functions can set errno to a nonzero value as a result of calling other library functions which might fail. You should assume that any library function might alter errno when the function returns an error.

Portability Note: ISO C specifies errno as a “modifiable lvalue” rather than as a variable, permitting it to be implemented as a macro. For example, its expansion might involve a function call, like *__errno_location (). In fact, that is what it is on GNU/Linux and GNU/Hurd systems. The GNU C Library, on each system, does whatever is right for the particular system.

There are a few library functions, like sqrt and atan, that return a perfectly legitimate value in case of an error, but also set errno. For these functions, if you want to check to see whether an error occurred, the recommended method is to set errno to zero before calling the function, and then check its value afterward.
All the error codes have symbolic names; they are macros defined in errno.h. The names start with ‘E’ and an upper-case letter or digit; you should consider names of this form to be reserved names. See Reserved Names.
The error code values are all positive integers and are all distinct, with one exception: EWOULDBLOCK and EAGAIN are the same. Since the values are distinct, you can use them as labels in a switch statement; just don’t use both EWOULDBLOCK and EAGAIN. Your program should not make any other assumptions about the specific values of these symbolic constants.

The value of errno doesn’t necessarily have to correspond to any of these macros, since some library functions might return other error codes of their own for other situations. The only values that are guaranteed to be meaningful for a particular library function are the ones that this manual lists for that function.

Except on GNU/Hurd systems, almost any system call can return EFAULT if it is given an invalid pointer as an argument. Since this could only happen as a result of a bug in your program, and since it will not happen on GNU/Hurd systems, we have saved space by not mentioning EFAULT in the descriptions of individual functions.

In some Unix systems, many system calls can also return EFAULT if given as an argument a pointer into the stack, and the kernel for some obscure reason fails in its attempt to extend the stack. If this ever happens, you should probably try using statically or dynamically allocated memory instead of stack memory on that system.
The library has functions and variables designed to make it easy for your program to report informative error messages in the customary format about the failure of a library call. The functions strerror and perror give you the standard error message for a given error code; the variable program_invocation_short_name gives you convenient access to the name of the program that encountered the error.
Preliminary: | MT-Unsafe race:strerror | AS-Unsafe heap i18n | AC-Unsafe mem | See POSIX Safety Concepts.
The strerror function maps the error code (see Checking for Errors) specified by the errnum argument to a descriptive error message string. The return value is a pointer to this string. The value errnum normally comes from the variable errno.

You should not modify the string returned by strerror. Also, if you make subsequent calls to strerror, the string might be overwritten. (But it’s guaranteed that no library function ever calls strerror behind your back.)

The function strerror is declared in string.h.
Preliminary: | MT-Safe | AS-Unsafe i18n | AC-Unsafe | See POSIX Safety Concepts.
The strerror_r function works like strerror but instead of returning the error message in a statically allocated buffer shared by all threads in the process, it returns a private copy for the thread. This might be either some permanent global data or a message string in the user supplied buffer starting at buf with the length of n bytes.

At most n characters are written (including the NUL byte), so it is up to the user to select a buffer large enough.

This function should always be used in multi-threaded programs since there is no way to guarantee the string returned by strerror really belongs to the last call of the current thread.

The function strerror_r is a GNU extension and it is declared in string.h.
If you call perror with a message that is either a null pointer or an empty string, perror just prints the error message corresponding to errno, adding a trailing newline.

If you supply a non-null message argument, then perror prefixes its output with this string. It adds a colon and a space character to separate the message from the error string corresponding to errno.

The function perror is declared in stdio.h.
strerror and perror produce the exact same message for any given error code; the precise text varies from system to system. With the GNU C Library, the messages are fairly short; there are no multi-line messages or embedded newlines. Each error message begins with a capital letter and does not include any terminating punctuation.

Compatibility Note: The strerror function was introduced in ISO C89. Many older C systems do not support this function yet.
Many programs that don’t read input from the terminal are designed to exit if any system call fails. By convention, the error message from such a program should start with the program’s name, sans directories. You can find that name in the variable program_invocation_short_name; the full file name is stored in the variable program_invocation_name.

This variable’s value is the name that was used to invoke the program running in the current process. It is the same as argv[0]. Note that this is not necessarily a useful file name; often it contains no directory names. See Program Arguments.

This variable’s value is the name that was used to invoke the program running in the current process, with directory names removed. (That is to say, it is the same as program_invocation_name minus everything up to the last slash, if any.)

The library initialization code sets up both of these variables before calling main.

Portability Note: These two variables are GNU extensions. If you want your program to work with non-GNU libraries, you must save the value of argv[0] in main, and then strip off the directory names yourself. We added these extensions to make it possible to write self-contained error-reporting subroutines that require no explicit cooperation from main.
Here is an example showing how to handle failure to open a file correctly. The function open_sesame tries to open the named file for reading and returns a stream if successful. The fopen library function returns a null pointer if it couldn’t open the file for some reason. In that situation, open_sesame constructs an appropriate error message using the strerror function, and terminates the program. If we were going to make some other library calls before passing the error code to strerror, we’d have to save it in a local variable instead, because those other library functions might overwrite errno in the meantime.
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

FILE *
open_sesame (char *name)
{
  FILE *stream;

  errno = 0;
  stream = fopen (name, "r");
  if (stream == NULL)
    {
      fprintf (stderr, "%s: Couldn't open file %s; %s\n",
               program_invocation_short_name, name, strerror (errno));
      exit (EXIT_FAILURE);
    }
  else
    return stream;
}
Using perror has the advantage that the function is portable and available on all systems implementing ISO C. But often the text perror generates is not what is wanted and there is no way to extend or change what perror does. The GNU coding standard, for instance, requires error messages to be preceded by the program name, and programs which read some input files should provide information about the input file name and the line number in case an error is encountered while reading the file. For these occasions the error and error_at_line functions, declared in error.h, are widely used throughout the GNU project.

With error, the program name is followed by a colon and a space, which in turn is followed by the output produced by the format string. If the errnum parameter is non-zero, the format string output is followed by a colon and a space, followed by the error message for the error code errnum. In any case, the output is terminated with a newline. The output is directed to the stderr stream.

The error_at_line function differs in that between the program name and the string generated by the format string additional text is inserted.
Directly following the program name, a colon, followed by the file name pointed to by fname, another colon, and the value of lineno is printed.
This additional output of course is meant to be used to locate an error in an input file (like a programming language source code file etc).
If the global variable error_one_per_line is set to a non-zero value, these functions keep track of the last file name and line number for which an error was reported and avoid directly following messages for the same file and line. This variable is global and shared by all threads.
A program which reads some input file and reports errors in it could look like this:
{
  char *line = NULL;
  size_t len = 0;
  unsigned int lineno = 0;

  error_message_count = 0;
  while (! feof_unlocked (fp))
    {
      ssize_t n = getline (&line, &len, fp);
      if (n <= 0)
        /* End of file or error.  */
        break;
      ++lineno;

      /* Process the line.  */
      …

      if (Detect error in line)
        error_at_line (0, errval, filename, lineno,
                       "some error text %s", some_variable);
    }

  if (error_message_count != 0)
    error (EXIT_FAILURE, 0, "%u errors found", error_message_count);
}
error and error_at_line are clearly the functions of choice and enable the programmer to write applications which follow the GNU coding standard.
To allocate a block of memory, call malloc. The prototype for this function is in stdlib.h.
Preliminary: | MT-Safe | AS-Unsafe lock | AC-Unsafe lock fd mem | See POSIX Safety Concepts.
char *
savestring (const char *ptr, size_t len)
{
  char *value = (char *) xmalloc (len + 1);
  value[len] = '\0';
  return (char *) memcpy (value, ptr, len);
}
The block that malloc gives you is guaranteed to be aligned so that it can hold any type of data. On GNU systems, the address is always a multiple of eight on 32-bit systems, and a multiple of 16 on 64-bit systems. Only rarely is any higher boundary (such as a page boundary) necessary; for those cases, use aligned_alloc or posix_memalign (see Aligned Memory Blocks).
Preliminary: | MT-Safe | AS-Unsafe lock | AC-Unsafe lock fd mem | See POSIX Safety Concepts.
You can ask malloc to check the consistency of dynamic memory by using the mcheck function. This function is a GNU extension, declared in mcheck.h.
Preliminary: | MT-Unsafe race:mcheck const:malloc_hooks | AS-Unsafe corrupt | AC-Unsafe corrupt | See POSIX Safety Concepts.

It is too late to begin allocation checking once you have allocated anything with the malloc functions, since mcheck must be called before the first such function.
Preliminary: | MT-Unsafe race:mcheck const:malloc_hooks | AS-Unsafe corrupt | AC-Unsafe corrupt | See POSIX Safety Concepts.
#include <mcheck.h>

int
main (int argc, char *argv[])
{
  #ifdef DEBUGGING
  mtrace ();
  #endif
  …
}
char *
obstack_savestring (char *addr, int size)
{
  return obstack_copy0 (&myobstack, addr, size);
}
Contrast this with the previous example of savestring using malloc (see Basic Allocation).
You can use obstack_blank with a negative size argument to make the current object smaller. Just don’t try to shrink it beyond zero length—there’s no telling what will happen if you do that.
void
add_string (struct obstack *obstack, const char *ptr, int len)
{
  while (len > 0)
    {
      int room = obstack_room (obstack);
      if (room == 0)
        {
          /* Not enough room.  Add one character slowly,
             which may cause a copy to a new chunk.  */
          obstack_1grow (obstack, *ptr++);
          len--;
        }
      else
        {
          if (room > len)
            room = len;
          /* Add fast as much as we have room for.  */
          len -= room;
          while (room-- > 0)
            obstack_1grow_fast (obstack, *ptr++);
        }
    }
}
Preliminary: | MT-Safe | AS-Safe | AC-Safe | See POSIX Safety Concepts.
Most often they are defined as macros like this:
#define obstack_chunk_alloc malloc
#define obstack_chunk_free free
Preliminary: | MT-Safe | AS-Safe | AC-Safe | See POSIX Safety Concepts.
if (obstack_chunk_size (obstack_ptr) < new-chunk-size)
  obstack_chunk_size (obstack_ptr) = new-chunk-size;
Preliminary: | MT-Safe | AS-Safe | AC-Safe | See POSIX Safety Concepts.
As an example of the use of alloca, here is a function that opens a file name made from concatenating two argument strings, and returns a file descriptor or minus one signifying failure:
int
open2 (char *str1, char *str2, int flags, int mode)
{
  char *name = (char *) alloca (strlen (str1) + strlen (str2) + 1);
  stpcpy (stpcpy (name, str1), str2);
  return open (name, flags, mode);
}
Here is how you would get the same results with malloc and free:
int
open2 (char *str1, char *str2, int flags, int mode)
{
  char *name = (char *) malloc (strlen (str1) + strlen (str2) + 1);
  int desc;
  if (name == 0)
    fatal ("virtual memory exceeded");
  stpcpy (stpcpy (name, str1), str2);
  desc = open (name, flags, mode);
  free (name);
  return desc;
}
The symbols in this section are declared in unistd.h.
You will not normally use the functions in this section, because the functions described in Memory Allocation are easier to use. Those are interfaces to a GNU C Library memory allocator that uses the functions below itself. The functions below are simple interfaces to system calls.
Preliminary: | MT-Safe | AS-Safe | AC-Safe | See POSIX Safety Concepts.
Preliminary: | MT-Safe | AS-Safe | AC-Safe | See POSIX Safety Concepts.

You can call ‘sbrk (0)’ to find out what the current end of the data segment is.
Returns true if c is an alphabetic character (a letter). If islower or isupper is true of a character, then isalpha is also true.

In some locales, there may be additional characters for which isalpha is true—letters which are neither upper case nor lower case. But in the standard "C" locale, there are no such additional characters.
Operations on strings (or arrays of characters) are an important part of many programs. The GNU C Library provides an extensive set of string utility functions, including functions for copying, concatenating, comparing, and searching strings. Many of these functions can also operate on arbitrary regions of storage; for example, the memcpy function can be used to copy the contents of any kind of array.
It’s fairly common for beginning C programmers to “reinvent the wheel” by duplicating this functionality in their own code, but it pays to become familiar with the library functions and to make use of them, since this offers benefits in maintenance, efficiency, and portability.
For instance, you could easily compare one string to another in two lines of C code, but if you use the built-in strcmp function, you’re less likely to make a mistake. And, since these library functions are typically highly optimized, your program may run faster too.
char string[32] = "hello, world";
sizeof (string)
    ⇒ 32
strlen (string)
    ⇒ 12
But beware, this will not work unless string is the character array itself, not a pointer to it. For example:
char string[32] = "hello, world";
char *ptr = string;
sizeof (string)
    ⇒ 32
sizeof (ptr)
    ⇒ 4  /* (on a machine with 4 byte pointers) */
This is an easy mistake to make when you are working with functions that take string arguments; those arguments are always pointers, not arrays.
It must also be noted that for multibyte encoded strings the return value does not have to correspond to the number of characters in the string. To get this value the string can be converted to wide characters and wcslen can be used, or something like the following code can be used:
/* The input is in string.
   The length is expected in n.  */
{
  mbstate_t t;
  char *scopy = string;
  /* In initial state.  */
  memset (&t, '\0', sizeof (t));
  /* Determine number of characters.  */
  n = mbsrtowcs (NULL, &scopy, strlen (scopy), &t);
}
This is cumbersome to do, so if the number of characters (as opposed to bytes) is needed often, it is better to work with wide characters.
The wide character equivalent is declared in wchar.h.
Preliminary: | MT-Safe | AS-Safe | AC-Safe | See POSIX Safety Concepts.
The wcslen function is the wide character equivalent to strlen. The return value is the number of wide characters in the wide character string pointed to by ws (this is also the offset of the terminating null wide character of ws).
Since there are no multi wide character sequences making up one character the return value is not only the offset in the array, it is also the number of wide characters.
This function was introduced in Amendment 1 to ISO C90.
Preliminary: | MT-Safe | AS-Safe | AC-Safe | See POSIX Safety Concepts.
The strnlen function returns the length of the string s in bytes if this length is smaller than maxlen bytes. Otherwise it returns maxlen. Therefore this function is equivalent to (strlen (s) < maxlen ? strlen (s) : maxlen) but it is more efficient and works even if the string s is not null-terminated.
char string[32] = "hello, world";
strnlen (string, 32)
    ⇒ 12
strnlen (string, 5)
    ⇒ 5
This function is a GNU extension and is declared in string.h.
Preliminary: | MT-Safe | AS-Safe | AC-Safe | See POSIX Safety Concepts.
The wcsnlen function is the wide character equivalent to strnlen. This function is a GNU extension and is declared in wchar.h.
A helpful way to remember the ordering of the arguments to the functions in this section is that it corresponds to an assignment expression, with the destination array specified to the left of the source array. All of these functions return the address of the destination array.
Most of these functions do not work properly if the source and destination arrays overlap. For example, if the beginning of the destination array overlaps the end of the source array, the original contents of that part of the source array may get overwritten before it is copied. Even worse, in the case of the string functions, the null character marking the end of the string may be lost, and the copy function might get stuck in a loop trashing all the memory allocated to your program.
All functions that have problems copying between overlapping arrays are explicitly identified in this manual. In addition to functions in this section, there are a few others like sprintf (see Formatted Output Functions) and scanf (see Formatted Input Functions).
Preliminary: | MT-Safe | AS-Safe | AC-Safe | See POSIX Safety Concepts.
The memcpy function copies size bytes from the object beginning at from into the object beginning at to. The behavior of this function is undefined if the two arrays to and from overlap; use memmove instead if overlapping is possible.

The value returned by memcpy is the value of to.
Here is an example of how you might use
memcpy to copy the contents of an array:

struct foo *oldarray, *newarray;
int arraysize;
…
memcpy (newarray, oldarray, arraysize * sizeof (struct foo));
memmove copies the size bytes at from into the
size bytes at to, even if those two blocks of space
overlap. In the case of overlap,
memmove is careful to copy the
original values of the bytes in the block at from, including those
bytes which also belong to the block at to.
This function copies no more than size bytes from from to to, stopping if a byte matching c is found. The return value is a pointer into to one byte past where c was copied, or a null pointer if no byte matching c appeared in the first size bytes of from.
The behavior of
strncpy is undefined if the strings overlap.
Using
strncpy as opposed to
strcpy is a way to avoid bugs
relating to writing past the end of the allocated space for to.
However, it can also make your program much slower in one common case:
copying a string which is probably small into a potentially large buffer.
In this case, size may be large, and when it is,
strncpy will
waste a considerable amount of time copying null characters.
The
strcat function is similar to
strcpy, except that the
characters from from are concatenated or appended to the end of
to, instead of overwriting it. That is, the first character from
from overwrites the null character marking the end of to.
An equivalent definition for
strcat would be:

char *
strcat (char *restrict to, const char *restrict from)
{
  strcpy (to + strlen (to), from);
  return to;
}
Programmers using the
strcat or
wcscat function (or the
following
strncat or
wcsncat functions for that matter)
can easily be recognized as lazy and reckless. In almost all situations
the lengths of the participating strings are known (it better should be
since how can one otherwise ensure the allocated size of the buffer is
sufficient?) Or at least, one could know them if one keeps track of the
results of the various function calls. But then it is very inefficient
to use
strcat/
wcscat. A lot of time is wasted finding the
end of the destination string so that the actual copying can start.
This function is like
strcat except that not more than size
characters from from are appended to the end of to. A
single null character is also always appended to to, so the total
allocated size of to must be at least
size + 1 bytes
longer than its initial length.
Unlike most comparison operations in C, the string comparison functions return a nonzero value if the strings are not equivalent rather than if they are. The sign of the value indicates the relative ordering of the first characters in the strings that are not equivalent: a negative value indicates that the first string is “less” than the second, while a positive value indicates that the first string is “greater”.
The function
memcmp compares the size bytes of memory
beginning at a1 against the size bytes of memory beginning
at a2. The value returned has the same sign as the difference
between the first differing pair of bytes (interpreted as
unsigned
char objects, then promoted to
int).
If the contents of the two blocks are equal,
memcmp returns
0.
On arbitrary arrays, the
memcmp function is mostly useful for
testing equality. It usually isn’t meaningful to do byte-wise ordering
comparisons on arrays of things other than bytes. For example, a
byte-wise comparison on the bytes that make up floating-point numbers
isn’t likely to tell you anything about the relationship between the
values of the floating-point numbers.
wmemcmp is really only useful to compare arrays of type
wchar_t since the function looks at
sizeof (wchar_t) bytes
at a time and this number of bytes is system dependent.
You should also be careful about using
memcmp to compare objects
that can contain “holes”, such as the padding inserted into structure
objects to enforce alignment requirements, extra space at the end of
unions, and extra characters at the ends of strings whose length is less
than their allocated size. The contents of these “holes” are
indeterminate and may cause strange behavior when performing byte-wise
comparisons. For more predictable results, perform an explicit
component-wise comparison.
For example, given a structure type definition like:
struct foo
  {
    unsigned char tag;
    union
      {
        double f;
        long i;
        char *p;
      } value;
  };
you are better off writing a specialized comparison function to compare
struct foo objects instead of comparing them with
memcmp.
Preliminary: | MT-Safe | AS-Safe | AC-Safe | See POSIX Safety Concepts.
The
strcmp function compares the string s1 against
s2, returning a value that has the same sign as the difference
between the first differing pair of characters (interpreted as
unsigned char objects, then promoted to
int).
If the two strings are equal,
strcmp returns
0.
A consequence of the ordering used by
strcmp is that if s1
is an initial substring of s2, then s1 is considered to be
“less than” s2.
In some locales, the conventions for lexicographic ordering differ from the strict numeric ordering of character codes. For example, in Spanish most glyphs with diacritical marks such as accents are not considered distinct letters for the purposes of collation. On the other hand, the two-character sequence ‘ll’ is traditionally treated as a single letter that is collated immediately after ‘l’.
Effectively, the way these functions work is by applying a mapping to transform the characters in a string to a byte sequence that represents the string’s position in the collating sequence of the current locale. Comparing two such byte sequences in a simple fashion is equivalent to comparing the strings with the locale’s collating sequence.
Here is an example of sorting an array of strings, using strcoll
to compare them. The actual sort algorithm is not written here; it
comes from qsort (see The Array Sort Function). The job of the
code shown here is to say how to compare the strings while sorting them.
(Later on in this section, we will show a way to do this more
efficiently using
strxfrm.)

/* This is the comparison function used with qsort. */
int
compare_elements (const void *v1, const void *v2)
{
  char * const *p1 = v1;
  char * const *p2 = v2;
  return strcoll (*p1, *p2);
}

/* This is the entry point—the function to sort
   strings using the locale’s collating sequence. */
void
sort_strings (char **array, int nstrings)
{
  /* Sort array by comparing the strings. */
  qsort (array, nstrings, sizeof (char *), compare_elements);
}

The strxfrm function transforms the string from using the
collation rules of the currently selected locale, storing at most
size bytes of the result in the array to. The
return value is the length of the entire transformed string. This
value is not affected by the value of size, but if it is greater
or equal than size, it means that the transformed string did not
entirely fit in the array to. In this case, only as much of the
string as actually fits was stored. To get the whole transformed
string, call
strxfrm again with a bigger output array.
The transformed string may be longer than the original string, and it may also be shorter.
If size is zero, no characters are stored in to. In this
case,
strxfrm simply returns the number of characters that would
be the length of the transformed string. This is useful for determining
what size of buffer is needed; in this case, the argument to
may even be a null pointer.
Here is an example of how you can use
strxfrm when
you plan to do many comparisons. It does the same thing as the previous
example, but much faster, because it has to transform each string only
once, no matter how many times it is compared with other strings. Even
the time needed to allocate and free storage is much less than the time
we save, when there are many strings.
struct sorter { char *input; char *transformed; };

/* This is the comparison function used with qsort
   to sort an array of struct sorter. */
int
compare_elements (const void *v1, const void *v2)
{
  const struct sorter *p1 = v1;
  const struct sorter *p2 = v2;
  return strcmp (p1->transformed, p2->transformed);
}

/* This is the entry point—the function to sort
   strings using the locale’s collating sequence. */
void
sort_strings_fast (char **array, int nstrings)
{
  struct sorter temp_array[nstrings];
  int i;

  /* Set up temp_array.  Each element contains
     one input string and its transformed string. */
  for (i = 0; i < nstrings; i++)
    {
      size_t length = strlen (array[i]) * 2;
      char *transformed;
      size_t transformed_length;

      temp_array[i].input = array[i];

      /* First try a buffer perhaps big enough. */
      transformed = (char *) xmalloc (length);

      /* Transform array[i]. */
      transformed_length = strxfrm (transformed, array[i], length);

      /* If the buffer was not large enough, resize it
         and try again. */
      if (transformed_length >= length)
        {
          /* Allocate the needed space, +1 for the
             terminating null character. */
          transformed = (char *) xrealloc (transformed,
                                           transformed_length + 1);

          /* The return value is not interesting because we know
             how long the transformed string is. */
          (void) strxfrm (transformed, array[i],
                          transformed_length + 1);
        }

      temp_array[i].transformed = transformed;
    }

  /* Sort temp_array by comparing transformed strings. */
  qsort (temp_array, nstrings,
         sizeof (struct sorter), compare_elements);

  /* Put the elements back in the permanent array
     in their sorted order. */
  for (i = 0; i < nstrings; i++)
    array[i] = temp_array[i].input;

  /* Free the strings we allocated. */
  for (i = 0; i < nstrings; i++)
    free (temp_array[i].transformed);
}
The
strchr function finds the first occurrence of the character
c (converted to a
char) in the null-terminated string
beginning at string. The return value is a pointer to the located
character, or a null pointer if no match was found.
For example,
strchr ("hello, world", 'l')
  ⇒ "llo, world"
This is like
strchr, except that it searches haystack for a
substring needle rather than just a single character. It
returns a pointer into the string haystack that is the first
character of the substring, or a null pointer if no match was found. If
needle is an empty string, the function returns haystack.
For example,
strstr ("hello, world", "l")
  ⇒ "llo, world"
The
strspn (“string span”) function returns the length of the
initial substring of string that consists entirely of characters that
are members of the set specified by the string skipset. The order
of the characters in skipset is not important.
For example,
strspn ("hello, world", "abcdefghijklmnopqrstuvwxyz")
  ⇒ 5
The
strcspn (“string complement span”) function returns the length
of the initial substring of string that consists entirely of characters
that are not members of the set specified by the string stopset.
(In other words, it returns the offset of the first character in string
that is a member of the set stopset.)
For example,
strcspn ("hello, world", " \t\n,.;!?")
  ⇒ 5
The
strpbrk (“string pointer break”) function is related to
strcspn, except that it returns a pointer to the first character
in string that is a member of the set stopset instead of the
length of the initial substring. It returns a null pointer if no such
character from stopset is found.
For example,
strpbrk ("hello, world", " \t\n,.;!?")
  ⇒ ", world"
One way to split a string into tokens is a series of calls to the function
strtok, declared in the header file string.h.
The string to be split up is passed as the newstring argument on
the first call only. The
strtok function uses this to set up
some internal state information. Subsequent calls to get additional
tokens from the same string are indicated by passing a null pointer as
the newstring argument. Calling
strtok with another
non-null newstring argument reinitializes the state information.
It is guaranteed that no other library function ever calls
strtok
behind your back (which would mess up this internal state information).
The delimiters argument is a string that specifies a set of delimiters that may surround the token being extracted. All the initial characters that are members of this set are discarded. The first character that is not a member of this set of delimiters marks the beginning of the next token. The end of the token is found by looking for the next character that is a member of the delimiter set. This character in the original string newstring is overwritten by a null character, and the pointer to the beginning of the token in newstring is returned.
On the next call to
strtok, the searching begins at the next
character beyond the one that marked the end of the previous token.
Note that the set of delimiters delimiters do not have to be the
same on every call in a series of calls to
strtok.
If the end of the string newstring is reached, or if the remainder of
string consists only of delimiter characters,
strtok returns
a null pointer.

Here is a simple example showing the use of strtok:

#include <string.h>
#include <stddef.h>
…
const char string[] = "words separated by spaces -- and, punctuation!";
const char delimiters[] = " .,;:!-";
char *token, *cp;
…
cp = strdupa (string);                /* Make writable copy. */
token = strtok (cp, delimiters);      /* token => "words" */
token = strtok (NULL, delimiters);    /* token => "separated" */
token = strtok (NULL, delimiters);    /* token => "by" */
token = strtok (NULL, delimiters);    /* token => "spaces" */
token = strtok (NULL, delimiters);    /* token => "and" */
token = strtok (NULL, delimiters);    /* token => "punctuation" */
token = strtok (NULL, delimiters);    /* token => NULL */
The function below addresses the perennial programming quandary: “How do
I take good data in string form and painlessly turn it into garbage?”
This is actually a fairly simple task for C programmers who do not use
the GNU C Library string functions, but for programs based on the GNU C Library,
the
strfry function is the preferred method for
destroying string data.
The prototype for this function is in string.h.
Preliminary: | MT-Safe | AS-Safe | AC-Safe | See POSIX Safety Concepts.
strfry creates a pseudorandom anagram of a string, replacing the
input with the anagram in place. For each position in the string,
strfry swaps it with a position in the string selected at random
(from a uniform distribution). The two positions may be the same.
The return value of
strfry is always string.
Portability Note: This function is unique to the GNU C Library.
To store or transfer binary data in environments which only support text one has to encode the binary data by mapping the input bytes to characters in the range allowed for storing or transferring. SVID systems (and nowadays XPG compliant systems) provide minimal support for this task.
Preliminary: | MT-Unsafe race:l64a | AS-Unsafe | AC-Safe | See POSIX Safety Concepts.
This function encodes a 32-bit input value using characters from the
basic character set. It returns a pointer to a 7 character buffer that
contains the encoded string; the buffer is statically allocated, so
subsequent calls to l64a overwrite it.

The a64l function performs the inverse transformation: it examines
the initial characters of
the string, and decodes the characters it finds according to the table
below. It stops decoding when it finds a character not in the table,
rather like
atoi; if you have a buffer which has been broken into
lines, you must be careful to skip over the end-of-line characters.
The decoded number is returned as a
long int value.
The
l64a and
a64l functions use a base 64 encoding, in
which each character of an encoded string represents 6 bits of an
input word: ‘.’ stands for 0, ‘/’ for 1, the digits ‘0’ through ‘9’
for 2 through 11, the uppercase letters for 12 through 37, and the
lowercase letters for 38 through 63. Note that this is not the same
encoding as the MIME base64 encoding.
A variable of type
mbstate_t can contain all the information
about the shift state needed from one call to a conversion
function to another.
Preliminary: | MT-Safe | AS-Safe | AC-Safe | See POSIX Safety Concepts.
The
mbsinit function determines whether the state object pointed
to by ps is in the initial state. If ps is a null pointer or
the object is in the initial state the return value is nonzero. Otherwise
it is zero.
The mbsinit function was introduced in Amendment 1 to ISO C90
and is declared in wchar.h.
The btowc function (“byte to wide character”) converts a valid
single byte character c in the initial shift state into the wide
character equivalent using the conversion rules from the currently
selected locale of the
LC_CTYPE category.
If
(unsigned char) c is not a valid single byte multibyte
character or if c is
EOF, the function returns
WEOF.
Please note the restriction of c being tested for validity only in
the initial shift state. No
mbstate_t object is used from
which the state information is taken, and the function also does not use
any static state.
The btowc function was introduced in Amendment 1 to ISO C90
and is declared in wchar.h.

The mbrtowc function (“multibyte restartable to wide character”)
converts the next multibyte character beginning at s to its
corresponding wide character, examining at most n bytes, and
stores the result in the object pointed to by pwc. If the first
n bytes form an incomplete but so far valid multibyte character,
the function returns
(size_t) -2 and
no value is stored. Please note that this can happen even if n
has a value greater than or equal to
MB_CUR_MAX since the input
might contain redundant shift sequences.
If the first
n bytes of the multibyte string cannot possibly form
a valid multibyte character, no value is stored, the global variable
errno is set to the value
EILSEQ, and the function returns
(size_t) -1. The conversion state is afterwards undefined.
The mbrtowc function was introduced in Amendment 1 to ISO C90
and is declared in wchar.h.

The mbrlen function (“multibyte restartable length”) computes the
number of bytes, at most n, which form the next valid and
complete multibyte character beginning at s. It behaves like
mbrtowc called with a null pwc argument; if ps is a
null pointer, an internal, static state object is used.
The mbrlen function was introduced in Amendment 1 to ISO C90
and is declared in wchar.h.

The wcrtomb function (“wide character restartable to multibyte”)
converts a single wide character into a multibyte string corresponding
to that wide character. If s is a null pointer, the function
resets the state stored in the object pointed to by ps (or the
internal
mbstate_t object) to the initial state. This can also be
achieved by a call like
this:

wcrtomb (temp_buf, L'\0', ps)
since, if s is a null pointer,
wcrtomb performs as if it
writes into an internal buffer, which is guaranteed to be large enough.
If wc is the NUL wide character,
wcrtomb emits, if
necessary, a shift sequence to get the state ps into the initial
state, followed by a single NUL byte, which is stored in the string
s.

If wc is not a valid wide character, nothing is stored in the
string s, the global variable
errno is set to EILSEQ, and the function returns
(size_t) -1.

The string s must always provide room for at least
MB_CUR_MAX bytes, since
this is the maximum length of any byte sequence representing a single
character. So the caller has to make sure that there is enough space
available, otherwise buffer overruns can occur.
The wcrtomb function was introduced in Amendment 1 to ISO C90
and is declared in wchar.h.
Here is an example that uses the restartable conversion functions to
convert the multibyte contents of one file descriptor into wide
characters written to another:

int
file_to_wide (int input, int output)
{
  char buffer[BUFSIZ + MB_LEN_MAX];
  mbstate_t state;
  int filled = 0;
  int eof = 0;

  memset (&state, '\0', sizeof state);

  while (!eof)
    {
      ssize_t nread;
      ssize_t nwrite;
      char *inp = buffer;
      wchar_t outbuf[BUFSIZ];
      wchar_t *outp = outbuf;

      /* Fill up the buffer from the input file. */
      nread = read (input, buffer + filled, BUFSIZ);
      if (nread < 0)
        {
          perror ("read");
          return 0;
        }
      /* If we reach end of file, make a note to read no more. */
      if (nread == 0)
        eof = 1;

      /* filled is now the number of bytes in buffer. */
      filled += nread;

      /* Convert those bytes to wide characters–as many as we can. */
      while (1)
        {
          size_t thislen = mbrtowc (outp, inp, filled, &state);
          /* Stop converting at an invalid or incomplete character;
             this can mean we have read just the first part
             of a valid character. */
          if (thislen == (size_t) -1 || thislen == (size_t) -2)
            break;
          /* Treat an embedded NUL byte like any other byte. */
          if (thislen == 0)
            thislen = 1;
          /* Advance past this character. */
          inp += thislen;
          filled -= thislen;
          ++outp;
        }

      /* Write the wide characters we just made. */
      nwrite = write (output, outbuf,
                      (outp - outbuf) * sizeof (wchar_t));
      if (nwrite < 0)
        {
          perror ("write");
          return 0;
        }

      /* See if we have a real invalid character. */
      if ((eof && filled > 0) || filled >= MB_CUR_MAX)
        {
          error (0, 0, "invalid multibyte character");
          return 0;
        }

      /* If any bytes must be carried forward,
         put them at the beginning of buffer. */
      if (filled > 0)
        memmove (buffer, inp, filled);
    }

  return 1;
}
The
mbtowc (“multibyte to wide character”) function when called
with non-null string converts the first multibyte character
beginning at string to its corresponding wide character code. It
stores the result in
*result.
mbtowc never examines more than size bytes. (The idea is
to supply for size the number of bytes of data you have in hand.)
mbtowc with non-null string distinguishes three
possibilities: the first size bytes at string start with
valid multibyte characters, they start with an invalid byte sequence or
just part of a character, or string points to an empty string (a
null character).
For a valid multibyte character,
mbtowc converts it to a wide
character and stores that in
*result, and returns the
number of bytes in that character (always at least 1 and never
more than size).
For an invalid byte sequence,
mbtowc returns -1. For an
empty string, it returns 0, also storing
'\0' in
*result.
If the multibyte character code uses shift characters, then
mbtowc maintains and updates a shift state as it scans. If you
call
mbtowc with a null pointer for string, that
initializes the shift state to its standard initial value. It also
returns nonzero if the multibyte character code in use actually has a
shift state. See Shift State.
Preliminary: | MT-Unsafe race | AS-Unsafe corrupt heap lock dlopen | AC-Unsafe corrupt lock mem fd | See POSIX Safety Concepts.
The
wctomb (“wide character to multibyte”) function converts
the wide character code wchar to its corresponding multibyte
character sequence, and stores the result in bytes starting at
string. At most
MB_CUR_MAX characters are stored.
wctomb with non-null string distinguishes three
possibilities for wchar: a valid wide character code (one that can
be translated to a multibyte character), an invalid code, and
L'\0'.
Given a valid code,
wctomb converts it to a multibyte character,
storing the bytes starting at string. Then it returns the number
of bytes in that character (always at least 1 and never more
than
MB_CUR_MAX).
If wchar is an invalid wide character code,
wctomb returns
-1. If wchar is
L'\0', it returns
0, also
storing
'\0' in
*string.
If the multibyte character code uses shift characters, then
wctomb maintains and updates a shift state as it scans. If you
call
wctomb with a null pointer for string, that
initializes the shift state to its standard initial value. It also
returns nonzero if the multibyte character code in use actually has a
shift state. See Shift State.

The mblen function with a non-null string argument returns
the number of bytes that make up the multibyte character beginning at
string, never examining more than size bytes. (The idea is
to supply for size the number of bytes of data you have in hand.)
The return value of
mblen distinguishes three possibilities: the
first size bytes at string start with valid multibyte
characters, they start with an invalid byte sequence or just part of a
character, or string points to an empty string (a null character).
For a valid multibyte character,
mblen returns the number of
bytes in that character (always at least
1 and never more than
size). For an invalid byte sequence,
mblen returns
-1. For an empty string, it returns 0.
If the multibyte character code uses shift characters, then
mblen
maintains and updates a shift state as it scans. If you call
mblen with a null pointer for string, that initializes the
shift state to its standard initial value. It also returns nonzero if
the multibyte character code in use actually has a shift state.
See Shift State.

The mbstowcs (“multibyte string to wide character string”)
function converts the null-terminated string of multibyte characters
string to an array of wide character codes, storing not more than
size wide characters into the array beginning at wstring.
The terminating null character counts towards the size, so if size
is less than the actual number of wide characters resulting from
string, no terminating null character is stored.
The conversion of characters from string begins in the initial shift state.
If an invalid multibyte character sequence is found, the
mbstowcs
function returns a value of -1. Otherwise, it returns the number
of wide characters stored in the array wstring. This number does
not include the terminating null character, which is present if the
number is less than size.
Here is an example showing how to convert a string of multibyte characters, allocating enough space for the result.

wchar_t *
mbstowcs_alloc (const char *string)
{
  size_t size = strlen (string) + 1;
  wchar_t *buf = xmalloc (size * sizeof (wchar_t));

  size = mbstowcs (buf, string, size);
  if (size == (size_t) -1)
    return NULL;
  buf = xrealloc (buf, (size + 1) * sizeof (wchar_t));
  return buf;
}

The wcstombs (“wide character string to multibyte string”)
function converts the null-terminated wide character array wstring
into a string containing multibyte characters, storing not more than
size bytes starting at string, followed by a terminating
null character if there is room. The conversion of characters begins in
the initial shift state.
The terminating null character counts towards the size, so if size is less than or equal to the number of bytes needed in wstring, no terminating null character is stored.
If a code that does not correspond to a valid multibyte character is
found, the
wcstombs function returns a value of -1.
Otherwise, the return value is the number of bytes stored in the array
string. This number does not include the terminating null character,
which is present if the number is less than size.
In some multibyte character codes, the meaning of any particular byte sequence is not fixed; it depends on what other sequences have come earlier in the same string. Typically there are just a few sequences that can change the meaning of other sequences; these few are called shift sequences and we say that they set the shift state for other sequences that follow.
To illustrate shift state and shift sequences, suppose we decide that
the sequence
0200 (just one byte) enters Japanese mode, in which
pairs of bytes in the range from
0240 to
0377 are single
characters, while
0201 enters Latin-1 mode, in which single bytes
in the range from
0240 to
0377 are characters, and
interpreted according to the ISO Latin-1 character set. This is a
multibyte code that has two alternative shift states (“Japanese mode”
and “Latin-1 mode”), and two shift sequences that specify particular
shift states.
When the multibyte character code in use has shift states, then
mblen,
mbtowc, and
wctomb must maintain and update
the current shift state as they scan the string. To make this work
properly, you must follow these rules:
Before starting to scan a string, call mblen (NULL, 0). This initializes the shift state to its standard initial value.
Here is an example of using
mblen following these rules:
void
scan_string (char *s)
{
  int length = strlen (s);

  /* Initialize shift state. */
  mblen (NULL, 0);

  while (1)
    {
      int thischar = mblen (s, length);
      /* Deal with end of string and invalid characters. */
      if (thischar == 0)
        break;
      if (thischar == -1)
        {
          error ("invalid multibyte character");
          break;
        }
      /* Advance past this character. */
      s += thischar;
      length -= thischar;
    }
}
The functions
mblen,
mbtowc and
wctomb are not
reentrant when using a multibyte code that uses a shift state. However,
no other library functions call these functions, so you don’t have to
worry that the shift state will be changed mysteriously.
The example below features a solution for a common problem. Given that
one knows the internal encoding used by the system for
wchar_t
strings, one often is in the position to read text from a file and store
it in wide character buffers. One can do this using
mbsrtowcs,
but then we run into the problems discussed above.
int
file2wcs (int fd, const char *charset, wchar_t *outbuf, size_t avail)
{
  char inbuf[BUFSIZ];
  size_t insize = 0;
  char *wrptr = (char *) outbuf;
  int result = 0;
  iconv_t cd;

  cd = iconv_open ("WCHAR_T", charset);
  if (cd == (iconv_t) -1)
    {
      /* Something went wrong. */
      if (errno == EINVAL)
        error (0, 0, "conversion from '%s' to wchar_t not available",
               charset);
      else
        perror ("iconv_open");

      /* Terminate the output string. */
      *outbuf = L'\0';

      return -1;
    }

  while (avail > 0)
    {
      size_t nread;
      size_t nconv;
      char *inptr = inbuf;

      /* Read more input. */
      nread = read (fd, inbuf + insize, sizeof (inbuf) - insize);
      if (nread == 0)
        {
          /* When we come here the file is completely read.
             This still could mean there are some unused
             characters in the inbuf.  Put them back. */
          if (lseek (fd, -insize, SEEK_CUR) == -1)
            result = -1;

          /* Now write out the byte sequence to get into the
             initial state if this is necessary. */
          iconv (cd, NULL, NULL, &wrptr, &avail);

          break;
        }
      insize += nread;

      /* Do the conversion. */
      nconv = iconv (cd, &inptr, &insize, &wrptr, &avail);
      if (nconv == (size_t) -1)
        {
          /* Not everything went right.  It might only be
             an unfinished byte sequence at the end of the
             buffer.  Or it is a real problem. */
          if (errno == EINVAL)
            /* This is harmless.  Simply move the unused
               bytes to the beginning of the buffer so that
               they can be used in the next round. */
            memmove (inbuf, inptr, insize);
          else
            {
              /* It is a real problem.  Maybe we ran out of
                 space in the output buffer or we have invalid
                 input.  In any case back the file pointer to
                 the position of the last processed byte. */
              lseek (fd, -insize, SEEK_CUR);
              result = -1;
              break;
            }
        }
    }

  /* Terminate the output string. */
  if (avail >= sizeof (wchar_t))
    *((wchar_t *) wrptr) = L'\0';

  if (iconv_close (cd) != 0)
    perror ("iconv_close");

  return (wchar_t *) wrptr - outbuf;
}
This example shows the most important aspects of using the
iconv
functions. It shows how successive calls to
iconv can be used to
convert large amounts of text. The user does not have to care about
stateful encodings as the functions take care of everything.
An interesting point is the case where
iconv returns an error and
errno is set to
EINVAL. This is not really an error in the
transformation. It can happen whenever the input character set contains
byte sequences of more than one byte for some character and texts are not
processed in one piece. In this case there is a chance that a multibyte
sequence is cut. The caller can then simply read the remainder of the
input and feed the offending bytes together with new characters from the
input to
iconv and continue the work. The internal state kept in
the descriptor is not unspecified after such an event, as is the
case with the conversion functions from the ISO C standard.
The example also shows the problem of using wide character strings with
iconv. As explained in the description of the
iconv
function above, the function always takes a pointer to a
char
array and the available space is measured in bytes. In the example, the
output buffer is a wide character buffer; therefore, we use a local
variable wrptr of type
char *, which is used in the
iconv calls.
This looks rather innocent but can lead to problems on platforms that
have tight restriction on alignment. Therefore the caller of
iconv
has to make sure that the pointers passed are suitable for access of
characters from the appropriate character set. Since, in the
above case, the input parameter to the function is a
wchar_t
pointer, this is the case (unless the user violates alignment when
computing the parameter). But in other situations, especially when
writing generic functions where one does not know what type of character
set one uses and, therefore, treats text as a sequence of bytes, it might
become tricky.
This is not really the place to discuss the
iconv implementation
of other systems but it is necessary to know a bit about them to write
portable programs. The above mentioned problems with the specification
of the
iconv functions can lead to portability issues.
The first thing to notice is that, due to the large number of character sets in use, it is certainly not practical to encode the conversions directly in the C library. Therefore, the conversion information must come from files outside the C library. This is usually done in one or both of the following ways:
This solution is problematic as it requires a great deal of effort to apply to all character sets (potentially an infinite set). The differences in the structure of the different character sets is so large that many different variants of the table-processing functions must be developed. In addition, the generic nature of these functions make them slower than specifically implemented functions.
This solution provides much more flexibility. The C library itself contains only very little code and therefore reduces the general memory footprint. Also, with a documented interface between the C library and the loadable modules it is possible for third parties to extend the set of available conversion modules. A drawback of this solution is that dynamic loading must be available.
Some implementations in commercial Unices implement a mixture of these possibilities; the majority implement only the second solution. Using loadable modules moves the code out of the library itself and keeps the door open for extensions and improvements, but this design is also limiting on some platforms since not many platforms support dynamic loading in statically linked programs. On platforms without this capability it is therefore not possible to use this interface in statically linked programs. The GNU C Library has, on ELF platforms, no problems with dynamic loading in these situations; therefore, this point is moot. The danger is that one gets acquainted with this situation and forgets about the restrictions on other systems.
A second thing to know about other
iconv implementations is that
the number of available conversions is often very limited. Some
implementations provide, in the standard release (not special
international or developer releases), at most 100 to 200 conversion
possibilities. This does not mean 200 different character sets are
supported; for example, conversions from one character set to a set of 10
others might count as 10 conversions. Together with the other direction
this makes 20 conversion possibilities used up by one character set. One
can imagine the thin coverage these platform provide. Some Unix vendors
even provide only a handful of conversions, which renders them useless for
almost all uses.
This directly leads to a third and probably the most problematic point.
The way the
iconv conversion functions are implemented on all
known Unix systems means that the set of available conversions is not
closed under composition: the availability of a conversion from
character set A to B and of a conversion from
B to C does not imply that the
conversion from A to C is available.
This might not seem unreasonable and problematic at first, but it is a quite big problem as one will notice shortly after hitting it. To show the problem we assume to write a program that has to convert from A to C. A call like
cd = iconv_open ("C", "A");
fails according to the assumption above. But what does the program do now? The conversion is necessary; therefore, simply giving up is not an option.
This is a nuisance. The
iconv function should take care of this.
But how should the program proceed from here on? If it tries to convert
to character set B, first the two
iconv_open
calls
cd1 = iconv_open ("B", "A");
and
cd2 = iconv_open ("C", "B");
will succeed, but how to find B?
Unfortunately, the answer is: there is no general solution. On some systems guessing might help. On those systems most character sets can convert to and from UTF-8 encoded ISO 10646 or Unicode text. Beside this only some very system-specific methods can help. Since the conversion functions come from loadable modules and these modules must be stored somewhere in the filesystem, one could try to find them and determine from the available file which conversions are available and whether there is an indirect route from A to C.
This example shows one of the design errors of
iconv mentioned
above. It should at least be possible to determine the list of available
conversion programmatically so that if
iconv_open says there is no
such conversion, one could make sure this also is true for indirect
routes.
Previous:.
Next:: Locale Categories, Previous: Effects of Locale, Up: Locales [Contents][Index]
The simplest way for the user to choose a locale is to set the
environment variable
LANG. This specifies a single locale to use
for all purposes. For example, a user could specify a hypothetical
locale named ‘espana-castellano’ to use the standard conventions of
most of Spain.
The set of locales supported depends on the operating system you are using, and so do their names, except that the standard locale called ‘C’ or ‘POSIX’ always exist. See Locale Names.
In order to force the system to always use the default locale, the
user can set the
LC_ALL environment variable to ‘C’.
A user also has the option of specifying different locales for different purposes—in effect, choosing a mixture of multiple locales. See Locale Categories.
For example, the user might specify the locale ‘espana-castellano’ for most purposes, but specify the locale ‘usa-english’ for currency formatting. This might make sense if the user is a Spanish-speaking American, working in Spanish, but representing monetary amounts in US dollars.
Note that both locales ‘espana-castellano’ and ‘usa-english’, like all locales, would include conventions for all of the purposes to which locales apply. However, the user can choose to use each locale for a particular subset of those purposes.:: The Elegant and Fast Way, Up: Locale Information [Contents][Index].
Preliminary: | MT-Unsafe race:localeconv locale | AS-Unsafe | AC-Safe | See POSIX Safety Concepts..
Next: Currency Symbol, Up: The Lame Way to Locale Data [Contents][Index]
These are the standard members of
struct lconv; there may be
others.
char *decimal_point
char *mon_decimal_point
These are the decimal-point separators used in formatting non-monetary
and monetary quantities, respectively. In the ‘C’ ‘C’!)
Next: Sign of Money Amount, Previous: General Numeric, Up: The Lame Way to Locale Data [Contents][Index]
These members of the
struct lconv structure specify how to print
the symbol to identify a monetary value—the international analog of
‘$’ ‘C’ ‘C’ ‘C’ ‘C’).
Previous:: The Lame Way to Locale Data, Up: Locale Information [Contents].
Preliminary: | MT-Safe locale | AS-Safe | AC-Safe | See POSIX Safety Concepts.:: Yes-or-No Questions, Previous: Locale Information, Up: Locales [Contents][Index].
Preliminary: | MT-Safe locale | AS-Unsafe heap | AC-Unsafe mem | See POSIX Safety Concepts. implementation in the GNU C Library
allows an optional ‘L’ or
long double, depending on the presence of the
modifier ‘L’..
Previous: Formatting Numbers, Up: Locales .
Next: Searching and Sorting, Previous: Locales, Up: Top [Contents][Index]
The program’s interface with the user should be designed to ease the user’s task. One way to ease the user’s task is to use messages in whatever language the user prefers.
Printing messages in different languages can be implemented in different ways. One could add all the different languages in the source code and choose among the variants every time a message has to be printed. This is certainly not a good solution since extending the set of languages is cumbersome (the code must be changed) and the code itself can become really big with dozens of message sets.
A better solution is to keep the message sets for each language in separate files which are loaded at runtime depending on the language selection of the user.
The GNU C Library provides two different sets of functions to support
message translation. The problem is that neither of the interfaces is
officially defined by the POSIX standard. The
catgets family of
functions is defined in the X/Open standard but this is derived from
industry decisions and therefore not necessarily based on reasonable
decisions.
As mentioned above the message catalog handling provides easy extendibility by using external data files which contain the message translations. I.e., these files contain for each of the messages used in the program a translation for the appropriate language. So the tasks of the message handling functions are
The two approaches mainly differ in the implementation of this last step. Decisions made in the last step influence the rest of the design.
Next: The Uniforum approach, Up: Message Translation [Contents][Index].
Next: gencat program, Previous: The catgets Functions, Up: Message catalogs a la X/Open [Contents][Index]
The only reasonable way the
$quote. If no non-whitespace character is present before the line ends quoting is disable. messages left away and in this case the message with the identifier
twowould loose its leading whitespace.
While this file format is pretty easy it is not the best possible for
use in a running program. The
catopen function would have to
parser: Common Usage, Previous: The message catalog files, Up: Message catalogs a la X/Open [Contents][Index]
The
gencat program is specified in the X/Open standard and the
GNU implementation follows this specification and so processes
all correctly formed input files. Additionally some extension are
implemented which help to work in a more reasonable way with the
catgets functions.
The
gencat program can be invoked in two ways:
`gencat [Option]… [Output-File [Input-File]…]`
This is the interface defined in the X/Open standard. If no Input-File parameter is given input will be read from standard input. Multiple input files will be read as if they are concatenated. If Output-File is also missing, the output will be written to standard output. To provide the interface one is used to from other programs a second interface is provided.
`gencat [Option]… -o Output-File [Input-File]…`
The option ‘-o’ is used to specify the output file and all file arguments are used as input files.
Beside this one can use - or /dev/stdin for Input-File to denote the standard input. Corresponding one can use - and /dev/stdout for Output-File to denote standard output. Using - as a file name is allowed in X/Open while using the device names is a GNU extension.
The
gencat program works by concatenating all input files and
then merge the resulting collection of message sets with a
possibly existing output file. This is done by removing all messages
with set/message number tuples matching any of the generated messages
from the output file and then adding all the new messages. To
regenerate a catalog file while ignoring the old contents therefore
requires to remove the output file if it exists. If the output is
written to standard output no merging takes place.
The following table shows the options understood by the
gencat
program. The X/Open standard does not specify any option for the
program so all of these are GNU extensions.
#defines to associate a name with a
number.
Please note that the generated file only contains the symbols from the input files. If the output is merged with the previous content of the output file the possibly existing symbols from the file(s) which generated the old output files are not in the generated header file.
Previous: The gencat program, Up: Message catalogs a la X/Open [Contents][Index]):
% gencat -H msgnrs.h -o hello.cat hello.msg % cat msgnrs.h #define MainSet 0x1 /* hello.msg:4 */ #define MainHello 0x1 /* hello.msg:5 */ % gcc -o hello hello.c -I. % cp hello.cat /usr/share/locale/de/LC_MESSAGES % echo $LC_ALL de % ./hello Hallo, Welt! %.:: Advanced gettext functions, Previous: Translation with gettext, Up: Message catalogs with gettext [Contents][Index].
Preliminary: | MT-Safe | AS-Unsafe lock heap | AC-Unsafe lock mem | See POSIX Safety Concepts.
The
textdomain function sets the default domain, which is used in
all future
gettext calls, to domainname. Please note that
dgettext and
dcgettext calls are not influenced if the
domainname parameter of these functions is not the null pointer.
Before the first call to
textdomain the default domain is
messages. This is the name specified in the specification of
the
gettext API. and the global variable errno is set to
ENOMEM.
Despite the return value type being
char * the return string must
not be changed. It is allocated internally by the
textdomain
function.
really never should be used.
Preliminary: | MT-Safe | AS-Unsafe heap | AC-Unsafe mem | See POSIX Safety Concepts.
The
bindtextdomain function call to bind the domain for the current program to
this directory. So it is made sure the catalogs are found. A correctly
running program does not depend on the user setting an environment
variable.
The
bindtextdomain function can be used several times and if the
domainname argument is different the previously bound domains
will not be overwritten.
If the program which wish to use
bindtextdomain at some point of
time use the
chdir function to change the current working
directory it is important that the dirname strings ought to be an
absolute pathname. Otherwise the addressed directory might vary with
the time.
If the dirname parameter is the null pointer
bindtextdomain
returns the currently selected directory for the domain with the name
domainname.
The
bindtextdomain function returns a pointer to a string
containing the name of the selected directory name. The string is
allocated internally in the function and must not be changed by the
user. If the system went out of core during the execution of
bindtextdomain the return value is
NULL and the global
variable errno is set accordingly.
Next: | https://www.gnu.org/software/libtool/manual/libc.html | CC-MAIN-2015-32 | refinedweb | 11,308 | 61.16 |
Kestrel implementation report: CSS Snapshot 2007
Monday, 22. October 2007, 06:41:02
The W3C has just recently published a working draft of their first annual CSS snapshot. As this came about in a meeting in Beijing, I'll name it the Beijing report. Peter just beat me to the punch in covering what is included in the Beijing report on the CSS3.info website, so I'll let you read his post for an overview of what is in the draft. I personally think that this is a great step forward from the W3C, and is a sign that they are listening to criticism that CSS3 is taking too long. Having a snapshot like this allows user agent vendors, such as ourselves, to know what modules, or properties are considered stable enough to implement, without too much worry that the specs will change significantly.
We'd all love to see many of the things in the Backgrounds & Borders module, but this is not considered stable enough to be included in the snapshot, nor any property from the module. Looking at properties such as
border-radius where more than one rendering engine supports supports it, they do differ in syntax and implementation. Opera does implement
background-size as
-o-background-size as it was needed for the UI of a certain customer delivery, and it was best to use an experimental implementation of a CSS3 property than invent our own vendor specific property.
Of the contents of the working draft, how far is Opera along in supporting these modules and specs? Using the latest public weekly of Kestrel as the subject, I've gone through the draft and noted what is and isn't supported.
CSS Level 2, Revision 1
This spec has been a long time coming and draws ever closer to completion. The spec is too large to cover everything that is supported in detail here, but if you take a loo at Opera Merlin's (9.0 - < 9.5) official spec sheet, you'll notice that
Opera supports all of CSS2.1 with the exception of the . Since Merlin, Kestrel has added
visibility: collapse and
white-space: pre-line property values
white-space: pre-line support. This can be tested at PPK's Quirksmode site. The spec has been updated recently, so I'm not sure if that information still holds true, so if anyone knows differently then please let me know. I couldn't find a changelog detailing what the recent changes were when it moved to Candidate Recommendation in July. Even if we only lack one value of one property, we still have bugs that have to be ironed out. The last I checked the test suite also had bugs. I would assume that Opera Kestrel currently has the most complete CSS2.1 support.
CSS Selectors Level 3
I've already wrote a lot about selectors on this blog, so regular readers will know that we pass all tests on the css3.info selectors test. The tests are not exhaustive, so there will be bugs, but it gives a good indication of what we support. It reports that we support all CSS selectors. This isn't quite true as it doesn't test the
::selection selector. That is the only selector that Kestrel doesn't currently fully support. Just like our CSS2.1 support, Opera has first class support for this spec, and is close to completion, apart from the seemingly never ending bug squashing that is a familiar part of any software development.
CSS Namespaces
CSS Namespaces allow is most useful in XML documents, and allows ocuments with mixed namespaces to be styled individually. For example, a
p element in one namespace can be styled, without the
p elements in another namespace being effected. If you declare the namespace
@namespace xhtml ""; you could style only the
p elements in this namespace using
xhtml|p.
Opera already supported CSS3 Namespaces in Merlin, and it is now supported in other browsers such as Safari and Firefox. There are five testcases for namespaces in CSS found here, of which Kestrel currently fails one.
CSS Colour Level 3
Of all the specs and modules in this draft, the colour module is the least supported by Kestrel. We clearly support the colour properties for CSS2.1, but what about the extra properties and values in level 3? SVG colour keywords are supported, and in reality these have been supported by browsers for a long time, and just were not included in a spec. The
opacity property was much requested and was included in Merlin. The
currentColor value is also supported and I believe this was added in Kestrel. The
HSL colour model isn't supported yet, but shouldn't be hugely difficult to support, given that it maps to RGB. An alpha channel has been added to both
RGB and
HSL an these are both not supported yet. This differs from
opacity in that it is only applied to the property it is used on, such as the
background-color and not the whole element. This will mean that the text will not be effected in this example. ICC Colour Profiles are also not supported as far as I'm aware, and neither is the
flavor colour keyword. David Baron has written some testcases (thanks to fantasai for pointing that out) an not surprisingly Kestrel fails most of the HSL, HSLA and RGBA tests. Strangely it also fails the HTML colour keywords and SVG colour keywords tests and I'm not sure why. Safari and Firefox also both fail these tests. We also fail the flavour test, but I can't think of a single use case for the
flavor colour keyword.
Overall Kestrel is in good shape in regards to supporting the standards in this snapshot, and has either the best support or close to it for each module listed, except perhaps the Colour module. Each of these have limited features missing, and except for the continuous cycle of bug fixing they are close to completion.
Fyrd # 22. October 2007, 12:30
Apparently the reason that the color keyword tests don't work is because table rows aren't displayed when all its cells are empty. Don't know the details on whether or not that's the correct behaviour, but that's what all browsers do. Surely the author would know of that though?
Here's a version of the SVG colors page with some text in each cell to make them appear. Indeed, Opera, Firefox and Safari seem to support the color keywords.
IE6/7 get all colors right, except the "grey" alternatives (that's gotta hurt, David) although "lightgrey" does work! How odd.
Darken # 22. October 2007, 16:52
Originally posted by Fyrd:
+1 Very helpful, thanks.
dflock # 22. October 2007, 23:54
It would be really sweet to see RGBA support in Opera - this would be really cool - it's a more useful property than opacity, imho.
liorean # 23. October 2007, 18:22
Anonymous # 25. October 2007, 18:08
I've fixed the tests you mentioned. The build scripts were discarding all the named character entities, see
Please report future errors to the public-css-testsuite mailing list, thanks~ :) | http://my.opera.com/dstorey/blog/kestrel-implementation-report-css-snapshot-2007 | crawl-002 | refinedweb | 1,205 | 70.13 |
Dynamic Code: Background
Previously, I was expressing how excited I was when I discovered Python, C#, and Visual Studio integration. I wanted to save a couple examples regarding dynamic code for a follow up article… and here it is! (And yes… there is code you can copy and paste or download).
What does it mean to be dynamic? As with most things, wikipedia provides a great start. Essentially, much of the work done for type checking and signatures is performed at runtime for a dynamic language. This could mean that you can write code that calls a non-existent method and you wont get any compilation errors. However, once execution hits that line of code, you might get an exception thrown. This Stack Overflow post’s top answer does a great job of explaining it as well, so I’d recommend checking that out if you need a bit more clarification. So we have statically bound and dynamic languages. Great stuff!
So does that mean Python is dynamic? What about C#?
Well Python is certainly dynamic. The code is interpreted and functions and types are verified at run time. You won’t know about type exceptions or missing method exceptions until you go to execute the code. For what it’s worth, this isn’t to be confused with a loosely typed language. Ol’ faithful Stack Overflow has another great answer about this. The type of the variable is determined at runtime, but the variable type doesn’t magically change. If you set a variable to be an integer, it will be an integer. If you set it immediately after to be a string, it will be a string. (Dynamic, but strongly typed!)
As for C#, in C# 4 the dynamic keyword was introduced. By using the dynamic keyword, you can essentially get similar behaviour to Python. If you declare a variable of type dynamic, it will take on the type of whatever you assign to it. If I assign a string value to my dynamic variable, it will be a string. I can’t perform operations like pre/post increment (++) on the variable when it’s been assigned a string value without getting an exception. If I assign an integer value immediately after having assigned a string value, my variable will take on the integer type and my numeric operators become available.
Where does this get us with C# and Python working together then?
Example 1: A Simple Class
After trying to get some functions to execute between C# and Python, I thought I needed to take it to the next level. I know I can declare classes in Python, but how does that look when I want to access it from C#? Am I limited to only calling functions from Python with no concept of classes?
The answer to the last question is no. Most definitely not. You can do some pretty awesome things with IronPython. In this example, I wanted to show how I can instantiate an instance of a class defined within a Python script from C#. This script doesn’t have to be created in code (you can use an external file), so if you need more clarification on this check out my last Python/C# posting, but I chose to do it this way to have all the code in one spot. I figured it might be easier to show for an example.
We’ll be defining a class in Python called “MyClass” (I know, I’m not very creative, am I?). It’s going to have a single method on it called “go” that will take one input parameter and print it to the console. It’s also going to return the input string so that we can consume it in C# and use it to validate that things are actually going as planned. Here’s the code:
using System; using System.Collections.Generic; using System.Text; using Microsoft.Scripting.Hosting; using IronPython.Hosting; namespace DynamicScript { internal class Program { private static void Main(string[] args) { Console.WriteLine("Enter the text you would like the script to print!"); var input = Console.ReadLine(); var script = "class MyClass:\r\n" + " def __init__(self):\r\n" + " pass\r\n" + " def go(self, input):\r\n" + " print('From dynamic python: ' + input)\r\n" + " return input"; try { var engine = Python.CreateEngine(); var scope = engine.CreateScope(); var ops = engine.Operations; engine.Execute(script, scope); var pythonType = scope.GetVariable("MyClass"); dynamic instance = ops.CreateInstance(pythonType); var value = instance.go(input); if (!input.Equals(value)) { throw new InvalidOperationException("Odd... The return value wasn't the same as what we input!"); } } catch (Exception ex) { Console.WriteLine("Oops! There was an exception while running the script: " + ex.Message); } Console.WriteLine("Press enter to exit..."); Console.ReadLine(); } } }
Not too bad, right? The first block of code just takes some user input. It’s what we’re going to have our Python script output to the console. The next chunk of code is our Python script declaration. As I said, this script can be loaded from an external file and doesn’t necessarily have to exist entirely within our C# code files.
Within our try block, we’re going to setup our Python engine and “execute” our script. From there, we can ask Python for the type definition of “MyClass” and then ask the engine to create a new instance of it. Here’s where the magic happens though! How can we declare our variable type in C# if Python actually has the variable declaration? Well, we don’t have to worry about it! If we make it the dynamic type, then our variable will take on whatever type is assigned to it. In this case, it will be of type “MyClass”.
Afterwards, I use the return value from calling “go” so that we can verify the variable we passed in is the same as what we got back out… and it definitely is! Our C# string was passed into a Python function on a custom Python class and spat back out to C# just as it went in. How cool is that?
Some food for thought:
- What happens if we change the C# code to call “go1” instead of “go”? Do we expect it to work? If it’s not supposed to work, will it fail at compile time or runtime?
- Notice how our Python method “go” doesn’t have any type parameters specified for the argument “input”? How and why does all of this work then?!
Example 2: Dynamically Adding Properties
I was pretty excited after getting the first example working. This meant I’d be able to create my own types in Python and then leverage them directly in C#. Pretty fancy stuff. I didn’t want to stop there though. The dynamic keyword is still new to me, and so is integrating Python and C#. What more could I do?
Well, I remembered something from my earlier Python days about dynamically modifying types at run-time. To give you an example, in C# if I declare a class with method X and property Y, instances of this class are always going to have method X and property Y. In Python, I have the ability to dynamically add a property to my class. This means that if I create a Python class that has method X but is missing property Y, at runtime I can go right ahead and add property Y. That’s some pretty powerful stuff right there. Now I don’t know of any situations off the top of my head where this would be really beneficial, but the fact that it’s doable had me really interested.
So if Python lets me modify methods and properties available to instances of my type at runtime, how does C# handle this? Does the dynamic keyword support this kind of stuff?
You bet. Here’s the code for my sample application:
using System; using System.Collections.Generic; using System.Text; using Microsoft.CSharp.RuntimeBinder; using IronPython.Hosting; namespace DynamicClass { internal class Program { private static void Main(string[] args) { Console.WriteLine("Press enter to read the value of 'MyProperty' from a Python object before we actually add the dynamic property."); Console.ReadLine(); // this script was taken from this blog post: // var script = "class Properties(object):\r\n" + " def add_property(self, name, value):\r\n" + " # create local fget and fset functions\r\n" + " fget = lambda self: self._get_property(name)\r\n" + " fset = lambda self, value: self._set_property(name, value)\r\n" + "\r\n" + " # add property to self\r\n" + " setattr(self.__class__, name, property(fget, fset))\r\n" + " # add corresponding local variable\r\n" + " setattr(self, '_' + name, value)\r\n" + "\r\n" + " def _set_property(self, name, value):\r\n" + " setattr(self, '_' + name, value)\r\n" + "\r\n" + " def _get_property(self, name):\r\n" + " return getattr(self, '_' + name)\r\n"; try { var engine = Python.CreateEngine(); var scope = engine.CreateScope(); var ops = engine.Operations; engine.Execute(script, scope); var pythonType = scope.GetVariable("Properties"); dynamic instance = ops.CreateInstance(pythonType); try { Console.WriteLine(instance.MyProperty); throw new InvalidOperationException("This class doesn't have the property we want, so this should be impossible!"); } catch (RuntimeBinderException) { Console.WriteLine("We got the exception as expected!"); } Console.WriteLine(); Console.WriteLine("Press enter to add the property 'MyProperty' to our Python object and then try to read the value."); Console.ReadLine(); instance.add_property("MyProperty", "Expected value of MyProperty!"); Console.WriteLine(instance.MyProperty); } catch (Exception ex) { Console.WriteLine("Oops! 
There was an exception while running the script: " + ex.Message); } Console.WriteLine("Press enter to exit..."); Console.ReadLine(); } } }
Let’s start by comparing this to the first example, because some parts of the code are similar. We start off my telling the user what’s going to happen and wait for them to press enter. Nothing special here. Next, we declare our Python script (again, you can have this as an external file) which I pulled form this blog. It was one of the first hits when searching for dynamically adding properties to classes in Python, and despite having limited Python knowledge, it worked exactly as I had hoped. So thank you, Zaur Nasibov.
Inside our try block, we have the Python engine creation just like our first example. We execute our script right after too and create an instance of our type defined in Python. Again, this is all just like the first example so far. At this point, we have a reference in C# to a type declared in Python called “Properties”. I then try to print to the console the value stored inside my instances property called “MyProperty”. If you were paying attention to what’s written in the code, you’ll notice we don’t have a property called “MyProperty”! Doh! Obviously that’s going to throw an exception, so I show that in the code as well.
So where does that leave us then? Well, let’s add the property “MyProperty” ourselves! Once we add it, we should be able to ask our C# instance for the value of “MyProperty”. And… voila!
Some food for thought:
- When we added our property in Python, we never specified a type. What would happen if we tried to increment “MyProperty” after we added it? What would happen if we tried to assign an integer value of 4 to “MyProperty”?
- When might it be useful to have types in C# dynamically get new methods or properties?
Summary
With this post, we’re still just scratching the surface of what’s doable when integrating Python and C#. Historically, these languages have been viewed as very different where C# is statically bound and Python is a dynamic language. However, it’s pretty clear with a bit of IronPython magic that we can quite easily marry the two languages together. Using the “dynamic” keyword within C# really lets us get away with a lot!
Source code for these projects is available at the following locations:
October 4th, 2013 on 11:21 am
This article also appears on CodeProject:
October 4th, 2013 on 8:31 pm
hi nick,
nice article! thanks also for the PTVS plug.
i’d like to invite you to join the project as a contributor, add a feature you see missing, fix a bug, etc.
cheers,
s
October 4th, 2013 on 9:07 pm
Thanks for the comment, Sean. Feel free to shoot me an email at n[dot]b[dot]cosentino[at]gmail[dot]com.
October 7th, 2013 on 11:50 am
Just a quick word about the title – it’s a little misleading! I thought the article was going to be about, not dynamically typed programs!
In any case, I was surprised to see how easy it was to embed the IronPython runtime inside a C# program. Good read
October 7th, 2013 on 12:12 pm
Thanks for the comment.
I’ve actually been hearing that a lot more now (especially on Reddit!) but to be honest, that interpretation of the title never even crossed my mind. I feel sort of silly now, since it looks like that’s actually what most people are expecting when they come check out the article. Doh!
Anyway, glad you enjoyed it 🙂 | http://devleader.ca/2013/10/01/dynamic-python-c/ | CC-MAIN-2017-22 | refinedweb | 2,199 | 66.54 |
28 September 2012 04:38 [Source: ICIS news]
SINGAPORE (ICIS)--India's Haldia Petrochemicals Limited (HPL) has halved the production output at its cracker and derivatives polyolefin units in West Bengal due to the recent surge in upstream naphtha prices, sources close to the company said late on Thursday.
The operating rates this week were approximately 50% at its 670,000 tonne/year naphtha cracker, a 370,000 tonne/year high density PE (HDPE)/linear low density PE (LLDPE) swing plant and a standalone 330,000 tonne/year HDPE facility, sources said.
“The naphtha feed is too expensive. HPL has no choice but to reduce run rates at its cracker and polymer units,” the source added.
Spot open-spec naphtha prices were on an uptrend since late June to peak at a five-month high of above $1,000/tonne CFR (cost and freight) ?xml:namespace>
The surge in naphtha prices was ‘overboard’, thus HPL will reduce dependency on imported naphtha over the next six months, as part of its measures to cut cost and working capital requirements, the source said.
It remains unclear when HPL will ramp up its crackers and polymer units, industry sources said.
Prior to the cut in operating rates, HPL was running its plants at close | http://www.icis.com/Articles/2012/09/28/9599364/indias-haldia-petchem-halves-run-rates-at-cracker-pe-pp-units.html | CC-MAIN-2013-48 | refinedweb | 211 | 63.53 |
Important: Please read the Qt Code of Conduct -
Error ‘QChar’ was not declared in this scope
Hello,
I tried to make a program which was working last month.
But now I cannot build it anymore.
I get this error:
‘QChar’ was not declared in this scope
I use this Qt-Version:
Qt Creator 4.13.1
This application sould run remotely on an raspberry zero
The compiler is :
Thread-Modell: posix gcc-Version 4.8.3 20140303
has something changed?
- jsulm Lifetime Qt Champion last edited by
@K-Str said in Error ‘QChar’ was not declared in this scope:
Qt Creator 4.13.1
This is QtCreator version, not Qt version.
Add
#include <QChar> | https://forum.qt.io/topic/119410/error-qchar-was-not-declared-in-this-scope | CC-MAIN-2022-05 | refinedweb | 114 | 75.2 |
protected override void OnInit(EventArgs e)
{
    // If we're loading for the first time, load the default page.
    if (Session["wuc_location"] == null)
    {
        Session["wuc_location"] = "default_wuc.ascx";
        Session["wuc_ID"] = "WUC_default_wuc";
    }
    else
    {
        // Choose which WUC to load based on the button clicked!
        switch ((string)Request.Form["__EVENTTARGET"])
        {
            case "LinkButton_MM_Inbox":
                Session["wuc_location"] = "administration_wuc.ascx";
                Session["wuc_ID"] = "WUC_administration_wuc";
                Session["Page_First_Load"] = true;
                break;
            case "LinkButton_MM_System_Logs":
                Session["wuc_location"] = "logs_main_wuc.ascx";
                Session["wuc_ID"] = "WUC_logs_main_wuc";
                Session["Page_First_Load"] = true;
                break;
            case "LinkButton_MM_Preferences":
                Session["wuc_location"] = "user_preferences_wuc.ascx";
                Session["wuc_ID"] = "WUC_user_preferences_wuc";
                Session["Page_First_Load"] = true;
                break;
            case "LinkButton_P20T":
                Session["wuc_location"] = "previous_20_transactions_wuc.ascx";
                Session["wuc_ID"] = "WUC_previous_20_transactions_wuc";
                Session["Page_First_Load"] = true;
                break;
            case "WUC_administration_wuc$LinkButton_ADMIN_SMS_Settings":
                Session["wuc_location"] = "sms_settings_wuc.ascx";
                Session["wuc_ID"] = "WUC_sms_settings_wuc";
                Session["Page_First_Load"] = true;
                break;
            case "WUC_administration_wuc$LinkButton_ADMIN_Department_Setup":
                Session["wuc_location"] = "department_setup_wuc.ascx";
                Session["wuc_ID"] = "WUC_department_setup_wuc";
                Session["Page_First_Load"] = true;
                break;
            case "WUC_administration_wuc$LinkButton_ADMIN_Terminal_Setup":
                Session["wuc_location"] = "terminal_setup_wuc.ascx";
                Session["wuc_ID"] = "WUC_terminal_setup_wuc";
                Session["Page_First_Load"] = true;
                break;
            default:
                break;
        }
    }

    // Load the selected user control.
    Main_PH.Controls.Clear();
    WUC_default_wuc = new Control();
    WUC_default_wuc = this.LoadControl((string)Session["wuc_location"]);
    WUC_default_wuc.ID = (string)Session["wuc_ID"];
    Main_PH.Controls.Add(WUC_default_wuc);

    base.OnInit(e);
}
Would you please check whether the following link provides a solution to your problem:
There they suggest to:
1. Set EnableEventValidation="false"
2. Set ViewState="false"
3. Just redirect the page again (I too don't quite understand what they mean by this)
Regards,
V.S.Saini
The code above made no difference. I added it as below (as there was already a pages tag):
[code]
<pages controlRenderingCompatibilityVersion="…"
       enableEventValidation="false">
  <controls>
    <add tagPrefix="telerik" namespace="Telerik.Web.UI" assembly="Telerik.Web.UI" />
  </controls>
</pages>
[/code]
Thanks for trying to help! Any other suggestions?
If some are working and some are not, delete all the temporary files in the Temporary ASP.NET Files folder. Then let us know the result.
Regards,
VSS
I tried deleting the files and it didn't work..... Anything else?
I've attached 3 images:
1. - A screen of sms settings (sms.jpg)
2. - A screen of user preferences (user_pref.jpg)
3. - A screen of user preferences after I swap the 2nd dropdown for a label. (user_pref1.jpg)
Screen 3 is the most important. After the user_preferences_wuc.ascx control kept throwing the viewstate error when you click to it from sms_settings_wuc.ascx, I got rid of the 2nd dropdown from user preferences and replaced it with a label:
[code]
<asp:Label ID="Label1" runat="server"></asp:Label>
[/code]
When I did this, the result was screen 3. There was no viewstate error but the label got populated with the viewstate from the previous sms_settings_wuc.ascx control..... How is this possible?????? Can somebody shed some light on what is happening??
sms.jpg
User-pref.jpg
User-pref1.jpg
After reading your whole code and looking at the snapshots of your website, I have reached some conclusions. Please correct me if I am wrong on some point. What I am sharing with you is based on my theoretical knowledge of the concept.
Here we go (trying to understand your problem and sharing a solution):
(1) All your menu items on the left side are LinkButtons, and you are opening a particular user control dynamically on the basis of the click event of a particular button.
(2) Since the controls are being created in the Page OnInit event, saving the ViewState of each dynamically created control should not be a problem.
(3) On loading the Administration control (a user control), the inner controls of Admin (such as links and, with a link click, other controls) are also loading. And here is the doubt, or the solution.
You said you have 10 links in the Admin user control. Are these controls also being created dynamically, or are they user controls as well? If they are also user controls, then that means creating a user control within a user control, and I think this is disturbing the control tree hierarchy that is created when your page is posted back to the server. The new controls created inside the user control then cause problems on postback.
A solution for this could be to clear all ViewState of these controls with the method Control.ClearChildViewState().
And for the problem statement in your last comment:
When I did this, the result was screen 3. There was no viewstate error but the label got populated with the viewstate from the previous sms_settings_wuc.ascx control..... How is this possible?????? Can somebody shed some light on what is happening??
So for that, again my answer would be a mixing up of the controls' ViewState. Your mobile number label is probably named Label1, and the one in User Preferences is also named Label1 (this is my assumption). Since the value of the SMS control's Label1 is already saved in ViewState, when your User Preferences control's Label1 comes back with an updated value, the already saved Label1 value replaces it.
All the knowledge I shared with you is based on the topic I studied at this link:
Regards,
V.S.Saini
To answer your questions:
Points 1,2 and 3 are correct.
But the 10 links you mention are not created dynamically: they are created at design time like so:
[code]
<div class="admin_a">
    <asp:LinkButton ID="LinkButton_ADMIN_Compa…" runat="server">
        Company Setup
    </asp:LinkButton>
</div>
<div class="admin_a">
    <asp:LinkButton ID="LinkButton_ADMIN_Prefe…" runat="server">
        User Preferences
    </asp:LinkButton>
</div>
<div class="admin_a">
    <asp:LinkButton ID="LinkButton_ADMIN_Langu…" runat="server">
        Language Setup
    </asp:LinkButton>
</div>
<div class="admin_a">
    <asp:LinkButton ID="LinkButton_ADMIN_Curre…" runat="server">
        Currency Setup
    </asp:LinkButton>
</div>
[/code]
There is only 1 panel on the page where user controls are loaded and that is Main_PH. As you can see above (and below for reference), this panel is cleared of controls during Ajax postbacks and then the correct control is re-added:
[code]
Main_PH.Controls.Clear();
WUC_default_wuc = new Control();
WUC_default_wuc = this.LoadControl((string)Session["wuc_location"]);
WUC_default_wuc.ID = (string)Session["wuc_ID"];
Main_PH.Controls.Add(WUC_default_wuc);
[/code]
Regarding the re-writing of the label in user_preferences_wuc.ascx when traversing from sms_settings_wuc.ascx: the id of the label in user preferences is Label1; the id of the label in sms_settings is Label_SMS_Number1.
I don't understand why this is happening. The ViewState from the SMS user control is being loaded into the ViewState of the User Preferences control.
So we are facing a problem of ViewState management overall, and all this unexpected program behaviour is due to ViewState mixing up. So we need to get at the page's ViewState information directly and clean it.
But before that I would like to know from your side: is there any problem if we clear all the ViewState values explicitly? Also have a look at this link; you might get some ideas:
Regards,
VSS
I don't think there is any problem clearing viewstate as long as it's at the correct point, i.e. when a new control is added as below:
[code]
switch ((string)Request.Form["__EVENTTARGET"])
{
    case "LinkButton_MM_Inbox":
        Session["wuc_location"] = "administration_wuc.ascx";
        Session["wuc_ID"] = "WUC_administration_wuc";
        Session["Page_First_Load"] = true;
        break;
    // ..... etc
}
[/code]
I'll read that article this morning thank you.
Thank you very much for taking your time to help me. It is very much appreciated!
I've looked over that article but I don't see how it applies to my case? That person is looking to utilise the viewstate, whereas I just want to reset / wipe it... There doesn't seem to be a method to wipe the viewstate.
There is, and it is Control.ClearChildViewState().
Please check link for more details:
Regards,
VSS
I did actually try that method but it did not make any difference.
I am having some success though. User preferences and Sms settings user controls have the same number of controls inside them. I added a placeholder around the whole of the sms settings user control and it seemed to solve my problems.
For some reason ASP.net is getting confused between user controls even though the user control locations and IDs are completely different. The only thing they have in common is the number of child controls they hold. Encapsulating the sms settings child controls within a placeholder has stopped any errors....
Although this is fixed for now I'd still love to know exactly why ASP.net viewstate was getting confused??
My solution uses Ajax, so redirecting is not an option. View state is needed in my solution too, so disabling that is not an option either.
And EventValidation is a good thing, I don't want to disable it.
As I said, it's working now, even though viewstate seems to have a bug. Points for being very helpful! I do appreciate your time and effort!
Just to let you know, I have solved this. The problem was this: in my page's OnInit method, where I load the user control, I was also adding it. This is very wrong, apparently, as ViewState looks for the old user control after the OnInit method. ViewState does not know that the control has changed.
To fix this problem I changed the OnInit method to just load the control and not add it, as below:
[code]
WUC_default_wuc = new Control();
WUC_default_wuc = Page.LoadControl((string)Session["wuc_location"]);
WUC_default_wuc.ID = (string)Session["wuc_ID"];
[/code]
The Page_Load method is the correct place to add the user control:
[code]
protected void Page_Load(object sender, EventArgs e)
{
    Main_PH.Controls.Add(WUC_default_wuc);
}
[/code]
And now everything works :) :) I really hope this helps somebody out there. Its caused problems for me for a very long time!
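The mis-populated label from the screenshots makes more sense once you picture how saved state is matched back to the control tree. The console program below is a toy model, not ASP.NET itself - it assumes a deliberately simplified rule of "restore saved values by tree position" - but it reproduces the symptom: a value saved while one dynamically loaded control sat in the placeholder lands on whatever control occupies that slot on the next request.

```csharp
using System;
using System.Collections.Generic;

// Toy model of ViewState round-tripping. ASP.NET walks the real control
// tree; here a "tree" is just an ordered list of (id, value) pairs, and
// state is matched back purely by position, as a stand-in for the way
// saved state is reapplied to the tree that exists after Init.
public static class ViewStateDemo
{
    // "Save": record each control's value in tree order.
    public static List<string> Save(List<KeyValuePair<string, string>> tree)
    {
        var state = new List<string>();
        foreach (var c in tree) state.Add(c.Value);
        return state;
    }

    // "Restore": pour saved values back by position into whatever tree
    // exists on the next request - the mechanism cannot know that the
    // dynamically loaded control has been swapped for a different one.
    public static List<KeyValuePair<string, string>> Restore(
        List<KeyValuePair<string, string>> tree, List<string> state)
    {
        var result = new List<KeyValuePair<string, string>>();
        for (int i = 0; i < tree.Count; i++)
        {
            string v = i < state.Count ? state[i] : tree[i].Value;
            result.Add(new KeyValuePair<string, string>(tree[i].Key, v));
        }
        return result;
    }

    public static void Main()
    {
        // Request 1: the placeholder holds the SMS settings control.
        var smsTree = new List<KeyValuePair<string, string>>
        {
            new KeyValuePair<string, string>("Label_SMS_Number1", "01234 567890")
        };
        var saved = Save(smsTree);

        // Request 2: a different control occupies the same position...
        var prefsTree = new List<KeyValuePair<string, string>>
        {
            new KeyValuePair<string, string>("Label1", "")
        };
        var restored = Restore(prefsTree, saved);

        // ...so the preferences label inherits the SMS number.
        Console.WriteLine(restored[0].Key + " = " + restored[0].Value);
        // prints: Label1 = 01234 567890
    }
}
```

Which is consistent with the fix found above: re-creating the control with a stable ID in OnInit on every request keeps the tree that state is applied to in step with the tree that saved it.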
So this is what they call Research & Development. Working on this question, I too learned many new concepts and code related to AJAX and ViewState.
And yes, I searched a lot for it on the internet, but the solutions there were also based on developers' experience. It will be really fruitful for anyone reading the whole discussion and finally reaching the solution.
Happy Coding :-)
Regards,
VSS | https://www.experts-exchange.com/questions/26424727/Viewstate-Errors-when-loading-dynamic-user-control.html | CC-MAIN-2018-09 | refinedweb | 1,671 | 58.48 |
I've written and compiled the first program I've created without my learning book in front of me. I have seemingly one bug in it... or one and a half, I would almost say. Mostly I want to know how I did for a one-month beginner, and whether you have more ideas for programs I could write to help me learn.
I'm sorry, I don't know the HTML coding to make this look neater, but if you have comments/criticism on this, let me know. Thanks.
pass.c
#include <stdio.h>
#include <string.h>

int main() {
    char str_1[7];
    char str_2[7];

    strcpy(str_1, "Death");
    printf("What Is The Password?\n");
    scanf("%s\n", str_2);
    printf("%s, contemplating.\n", str_2);

    if(*str_2 == *str_1) {
        printf("success\n");
        printf("Change Password\n");
        scanf("\%s\n", str_1);
        printf("Success");
        printf("Pass is now-->\t %s\n", str_1);
    }
    else {
        printf("failure.\n");
        printf("You lose.\n");
    }
}
And the results seem to be screwed up by the first password, as you'll notice if you compile it - or you can just read it and see what's going to be wrong. Like I said, I'm a newbie, so hit me with as much criticism as you can.
Enhanced Web Server
The web server sample included in Nut/OS 4.4 does a fairly good job for simple applications. However, when large files are requested or many connections need to be handled, it may soon reach its limits. This document describes, how to enhance the sample code.
News:
Memory hole fixed.
SSI, ASP no longer cached.
New HTML layout.
Shockwave Flash removed.
Thread/timer/socket list bug fixed.
Persistent Connections
The sample code in http/httpserv.c starts four server threads:
/*
 * Start four server threads.
 */
for (i = 1; i <= 4; i++) {
    char thname[] = "httpd0";

    thname[5] = '0' + i;
    NutThreadCreate(thname, Service, (void *) (uptr_t) i, NUT_THREAD_MAINSTACK);
}
The HTTP Server routines accept one request per connection. If, for example, a requested HTML document contains several images, a new connection will be opened for each of them. The situation becomes worse when more than one client needs to be served: The server will soon run out of listening threads. New connections will be rejected, because the Nut/OS TCP stack doesn't provide a listening backlog. Increasing the number of server threads will probably help.
/*
 * Start twelve server threads.
 */
for (i = 1; i <= 12; i++) {
    char thname[] = "httpd0";

    thname[5] = 'A' + i;
    NutThreadCreate(thname, Service, (void *) (uptr_t) i, NUT_THREAD_MAINSTACK);
}
The disadvantage is, that each thread consumes memory, specifically data RAM, which is usually a scarce resource.
One problem is that closing a connection will not immediately release the socket for new connections. If a client closes one connection and immediately tries to connect again, it may be too fast for the socket to become available again. HTTP 1.0 allows existing connections to be re-used for subsequent requests. Re-using an existing connection avoids the turn-around time required to move a socket from the connection state to the listen state. To implement this on Nut/OS, several changes are required in pro/httpd.c.
Dynamic Thread Creation
Instead of initially running a large number of server threads, we can start a new thread each time when a new connection has been established. This way, the total number of connections is limited by available memory only.
Needless to say, that a thread must be terminated after the connection has been closed. Nut/OS threads can be terminated by calling NutThreadExit(). See the following simplified server thread on how this can be implemented.
THREAD(Service, arg)
{
    TCPSOCKET *sock;

    /* Create a new socket. */
    sock = NutTcpCreateSocket();
    /* Wait for connect. */
    NutTcpAccept(sock, 80);
    /* Connected. Start new thread. */
    NutThreadCreate("httpd", Service, NULL, 1024);
    /* Process HTTP request. */
    HttpProcessRequest(sock);
    /* Close socket to disconnect. */
    NutTcpCloseSocket(sock);
    /* Exit thread. */
    NutThreadExit();
}
Of course, some additional code is required to limit the number of concurrently running threads, handle errors or resource shortages, etc.
Blocked Connections
TCP clients may not always close a connection. This may be due to a software bug or simply because the line was cut before the client was able to send a FIN segment. A receiving server will normally not be able to detect the latter, but continue to wait for new data.
To avoid blocked TCP connections, Nut/OS allows to specify a socket receive timeout option. If set, the connection will be automatically closed when the timeout time elapsed without new incoming data.
Our web server should set a receive timeout immediately after the socket has been successfully created. The following code shows how to set the timeout to 500 milliseconds.
TCPSOCKET *sock;
u_long tmo = 500;

/*
 * Create a socket.
 */
if ((sock = NutTcpCreateSocket()) == 0) {
    /* Error handling. */
}

if (NutTcpSetSockOpt(sock, SO_RCVTIMEO, &tmo, sizeof(tmo)))
    printf("Sockopt rx timeout failed\n");
Servers which need to support many browsers concurrently in a fast local network will probably do better with shorter timeouts.
HTTP Error Code 304
Supporting error 304 responses may significantly speed up HTTP requests, because the contents will be sent only, if it has changed since the client's last request.
Again, adding this feature requires changing the core routines in pro/httpd.c. Furthermore, the system running Nut/OS must provide a valid system time, either using a battery back-upped hardware clock or querying an SNTP server during start-up (see the sample in app/logtime/).
TCP Buffer Sizes
By default, the maximum TCP segment size (MSS) is set to 536. Adding 20 bytes for the IP header and another 20 bytes for the TCP header, this will result in a transfer unit of 576, which normally guarantees unfragmented transfers. Note, that Nut/OS doesn't support IP fragmentation.
In most networks a maximum transfer unit of 1500 can be used without the risk of fragmentation. Thus, we can set the MSS to 1460.
static u_short mss = 1460;

if (NutTcpSetSockOpt(sock, TCP_MAXSEG, &mss, sizeof(mss)))
    printf("Sockopt MSS failed\n");
This will speed up the transfer of large files.
Increasing the TCP window size is another option, but will not significantly help with HTTP unless large amounts of data are sent to the server.
static u_short tcpbufsiz = 8760;

if (NutTcpSetSockOpt(sock, SO_RCVBUF, &tcpbufsiz, sizeof(tcpbufsiz)))
    printf("Sockopt rxbuf failed\n");
Of course, both options will consume additional data memory. Make sure that
#include <netinet/tcp.h>
is included in your source file when using the code above.
Cleaning Up The Code
The initial HTTP code had been created as a quick and dirty sample, just to prove that the TCP stack worked with some web browsers. Not to mention that the original author (me!) is anything but an HTTP expert.
During its lifetime, more than 7 years at the time of this writing, it has been one of the most often used templates for Nut/OS applications. Many developers contributed enhancements but tried to avoid a major re-design in order not to break existing code. As a result, the code became larger and slower. For example, it no longer worked with the ICCAVR Demo, which is limited to 64 kBytes of code size.
The version presented here includes several clean-ups. However, the user should be prepared for the possibility that new bugs have been introduced. Upgraded applications should be carefully tested.
All HTML data had been reduced to a bare minimum and the Shockwave Flash Demo had been removed. As a result, the minimal server occupies 48kBytes of program memory.
Since its very early releases, the three list CGIs worked unreliably and may have crashed the server. The problem was that data was transmitted to the browser while walking along the linked lists. TCP transmissions may block the running thread, and when it was woken up again, the linked list may have changed, so that the current pointer pointed to an invalid memory area. This has been fixed by first collecting the list items and then doing the TCP output.
Implementation
httpd-enhanced-20071109.zip
This package contains the following source files:
http.c
Enhanced HTTP server library. Partially replaces pro/httpd.c.
http.h
HTTP server library header file. Replaces include/pro/http.h.
httpopt.c
Enhanced HTTP server library of optional routines. Partially replaces pro/http.c. These routines had been moved to a new file in order to avoid linking them to minimal sever implementations.
rfctime.c
rfctime.h
RFC compliant date and time routines. This is a new module. The header file must be moved to directory nut/include/pro.
dencode.h
Local copy of the original file pro/dencode.h. Required because the compiler will not find it, if it's not in local directory.
httpserv.c
Enhanced HTTP sample application.
httpserv.h
Enhanced HTTP sample compile time configuration. Modify this file to enable or disable specific features or to change parameter values.
Makefile
Adds the new code to the sample application.
Intentionally we do not replace existing library files, but add any updated or new module to our application. This way, the linker will skip the old library code. If you want to add the new library code to another application, simply add httpd.c, httpopt.c and rfctime.c to the list of sources in your Makefile (or your ICCAVR project).
Unfortunately, the header file httpd.h in the source tree needs to be replaced, because some of the data structures, which are also referenced by other library parts, have been modified. I'd suggest renaming the existing one to nut/include/pro/httpd44.h before moving the new header to this directory. Do not forget to move rfctime.h to this directory as well. When done, rebuild the Nut/OS libraries, either by using the Configurator or by executing 'make clean' and 'make install' on the command line.
Incompatibilities
Although care has been taken to keep the new library routines compatible with previous releases, some incompatibilities were unavoidable.
If any of your routines make use of the REQUEST structure, be aware that its size has grown. Several new items have been added to its end.
If all features are enabled, the total code size of the sample server increased by roughly 10 percent. Note, however, that new modules like SNTP may already have been included in your application, in which case the additional code may be much less than 10 percent.
The type of the req_length item in the REQUEST structure had been changed from int to long.
The server will now check for additional default index files, specifically
index.html index.htm default.html default.htm index.shtml index.xhtml index.asp default.asp
The routine NutHttpSendHeaderBot() has been marked deprecated. Especially if you want to make use of persistent connections, you should call NutHttpSendHeaderBottom() instead, which takes a pointer to the REQUEST structure as an additional parameter.
Also note the two new functions NutHttpGetOptionFlags() and NutHttpSetOptionFlags(), which must be called by the application in order to enable file date and time handling (HTTP error 304 support).
Last but not least, the server will now respond with HTTP/1.1 instead of HTTP/1.0. As a result, clients may behave differently.
If you're upgrading from the initial version of the enhanced HTTP server, please note a change in caching. While the first release checked the date of files with attached mime handlers (SSI, ASP etc.), the current version will ignore any file date if a mime handler has been registered for that file.
Known Problems And Limitations
Persistent connections do not always close immediately after all files have been requested. They are instead closed by the server's socket timeout. I have no idea if this is normal or if I missed something important to make the browser close the connection.
The UROM file system doesn't provide file attributes like the last modification date and time. In this case the server will use the compile time of the library file httpd.c. As long as the file is built with your application, this doesn't hurt. It may cause problems when this module is later added to the Nut/OS library.
In a previous version a memory hole had been detected, which is now fixed.
Good luck,
Harald Kipp
Castrop-Rauxel, 9th of November 2007 | http://www.ethernut.de/en/documents/httpd-enhanced.html | CC-MAIN-2017-51 | refinedweb | 1,831 | 57.77 |
How to do a PyQt VTK application in Python26
Hello All,
Is it possible to make a PyQt VTK application using Python26?
My first sample Qt VTK application is not running!! My compiled VTK bin folder contains Python26-compiled DLL files and is Win32 (32-bit).
The error is
Traceback (most recent call last):
File "EmbedInPyQt.py", line 5, in <module>
from PyQt4 import QtCore, QtGui
ImportError: DLL load failed: %1 is not a valid Win32 application.
Due to this bug I searched the internet for downloadable PyQt4 installer versions, and those support Python34 and higher as PyQt4 exes.
Is there any way I can run my sample Qt-VTK application using Python26? Or should I go after Python34?
Could anyone please help me soon?
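For anyone hitting the same thing: "DLL load failed: %1 is not a valid Win32 application" usually indicates a bitness or version mismatch - for example a 64-bit PyQt4 DLL loaded by a 32-bit Python, or a binding built for a different interpreter version. A quick standard-library check of what the running interpreter actually is:

```python
import struct
import sys

# Which interpreter is running, and is it 32- or 64-bit?
# Every compiled extension (PyQt4, VTK) must match BOTH of these.
print(sys.version)
print(sys.executable)
print("%d-bit" % (struct.calcsize("P") * 8))
```

If the version or bitness printed doesn't match the PyQt4/VTK builds on the path, that mismatch is the whole error.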
- SGaist Lifetime Qt Champion
Hi,
You should rather contact the PyQt authors about that matter. They more likely have an answer for that question.
Dear SGaist,
Apologies for the delay in sending my response!!
Thanks a lot for replying. Of course I am trying to find the best forum for PyQt; maybe I can manage with the pyqt@riverbankcomputing.com mailing list, but it is not a user-friendly help system. My subscription to this mailing list has still not been approved; hopefully it will become active in the coming days, but my doubt is still not solved!!! Thanks for helping.
MisterPotes.gc()
A selection of garbage, collected.

Tuesday, 30 July 2013

Configuring vim in Windows

- Instead of .vimrc, the filename is _vimrc - because Windows doesn't like filenames beginning with a dot.
- Instead of .vim, the directory name it is looking for may be vimfiles - for the same reason.
- Depending on the configuration of Windows, it may not be looking for your files where you think it is - to find out you can do :echo &rtp
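Putting those tips together, a minimal sketch of what the Windows setup might look like (the paths are illustrative):

```vim
" Save this file as %USERPROFILE%\_vimrc  (not .vimrc)
" Keep plugins, colorschemes etc. under %USERPROFILE%\vimfiles  (not .vim)

" If Vim still isn't picking things up, check where it is looking:
"   :echo &rtp        " the runtime path actually in use
"   :echo $MYVIMRC    " the vimrc file actually loaded
```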
Monday, 24 June 2013
Adventures with Scala Macros - Part 4
The Adventures with Scala Macros series was published on the ScottLogic company blog. It is reproduced here.
Adventures with Scala Macros - Part 3
The Adventures with Scala Macros series was published on the ScottLogic company blog. It is reproduced here.
Adventures with Scala Macros - Part 2
The Adventures with Scala Macros series was published on the ScottLogic company blog. It is reproduced here.
Where next?
What you’ve probably noticed though is that with this code, the regexes are parsed each time through the block, which isn’t going to be very good for performance. What we really want is to be able to put the regular expressions in an object so that they are parsed only once. But we’re using Scala 2.10, and all we have available are def (function) macros - we can’t generate any other types (although type macros do exist in the macro paradise).
Instead we need to create a value which then gets populated by a function that returns an object, like this:
So now we’ve got two functions for our generated code - and we know how to generate function bodies. However, we have another problem - each function will be generated by a separate macro, and we’ll need to know what the variables created in one macro are called when using them in the other macro. Fortunately we can just avoid that problem - each regular expression is generated from a class name, and each class name (with its package qualifier) is unique, so we can use the class name to generate the variable names (replacing . with $).
Code organisation

Our macro code is getting bigger, and I don’t want it to end up as a pair of unmanageable functions that go on for page after page - so I’m going to split the code up into two classes - the one that has the macro definitions in it and builds the resultant AST for the generated code, and another class that contains all the logic for analysing the compilation environment.
As documented on the def macros page on the Scala site, you can create other classes that use your macro context easily enough - you just have to give the compiler a little helping hand to work out where the context comes from. Of the two examples for doing this on that page, the second is (to me) much more readable, so we’ve got a new class:
And obviously we can initialise this just like any other class from within our macro implementations using val helper = new Helper(c). If, for tidiness, you want to import the functions defined in Helper, you can then do import helper.{c => cc, _}, which renames the c value from Helper so that it doesn’t collide with the c parameter from our function signature.
Moving the compilation unit processing code into the Helper, and adding some name processing so that the class name and package name are available to our macro, we end up with:
Object Creation

When you define an object in normal Scala, you just declare it, object myObject, because the syntactic sugar allows you to leave out a default constructor and the fact that it extends scala.AnyRef. In a macro you don’t have that luxury, so to define a new object, you do the following:
We already know how to put a block of code together from part 1, so all we need to do is merge the two together, and we get:
This can then be used by declaring val regexesUsingMacro = restapi.Generator.macroPaths in our unit test.
But wait - there’s a problem, isn’t there? That function is returning an object of type Any, so all our other generated code will know about it is that it’s an Any - it won’t know anything about the vals we’ve added to it. Well actually, it turns out this isn’t a problem: it is intended that the return type should know more than its declaration specifies, if possible, as Eugene Burmako explains here.
Putting it all together

Now that we’ve got a macro that can be used as a val definition, we need to find that val and use it in our match expression. Finding the val is simple enough - we just look for a ValDef that uses a Select for our function as the right hand side. However, if the user hasn’t defined such a val, we can’t continue - we need to tell the developer what they’ve done wrong. The macro Context includes functions to provide compiler warnings and errors, so we need to emit an error advising the developer how to fix it. The structure we end up with is as follows:
When integrated with the code we had for the match block from part 1, we end up with:
So now we’ve got our pattern matching working well, in the next article we can start calling an API to produce our endpoints.
Wednesday, 27 March 2013
Adventures with Scala Macros - Part 1
Monday, 5 November 2012
Making Raspberry Jam
Ingredients.
Tuesday, 25 September 2012
Kitchen Implements
I wonder how many pasta machines, breadmakers, juicers, blenders, | http://garbage-collection.potes.org.uk/ | CC-MAIN-2019-22 | refinedweb | 908 | 61.4 |