[code]
11/5/07 4.07.15 – Dev release update
- Event API – Added C wrapper. C users use fmod_event.h, C++ users use fmod_event.hpp
and fmod_event.h.
Added System::set3DRolloffCallback. When FMOD wants to calculate 3d volume for a
channel, this callback can be used to override the internal volume calculation
based on distance.
Fixed crash if quickly releasing sound while it is still in the process of loading
with FMOD_NONBLOCKING
- Wii – Fixed rare crash when playing looping GCADPCM fsb files with looppoints.
- Wii – Fixed ProLogic not working correctly.
- Fix Sound::release crash if subsound was released before parent, introduced in 4.07.15.
- Fix Sound::release memory leak if subsound was released before parent.
Fix streams not ending when stream was a certain size.
Event API – Fixed a bug where oneshot sounds would retrigger if the
parameter left the sound and returned to it while it was playing
*** Event API – C++ users should now use fmod_event.hpp!
*** Event API – Event callback signature has changed! The parameter "FMOD::Event *event"
is now "FMOD_EVENT *event" so you must cast back to FMOD::Event * to get
old behaviour.
*** Event API – EVENT_xxx values are no longer namespaced! You must put "FMOD_" in
front of them!
[/code]
This thread is for discussion / bug reports for the current release.
Download the current release from the front page at
- brett asked 11 years ago
Download links for 4.07.15 win32/win64 are invalid (file not found).
I’ve got the same problem reported [url=]here[/url] with 4.07.15 Mac PPC.
- jouvieje answered 11 years ago
They were just taken down temporarily to fix a VS2005 manifest issue; they are back up again.
The PowerPC version is linking OK here on the examples; can you verify the same?
Internationalization with Django, Backbone, Underscore templates, and Sass (LTR and RTL languages) · Monica
Let’s be honest: No developer wakes up in the morning and thinks, “Oh goody! Today I get to internationalize my giant website with tons of content and files. I bet supporting right-to-left languages is going to be a blast.”
However, I’m here to tell you that it’s not nearly as bad as you would expect.
In fact, Django makes it downright easy to do. Unfortunately, there’s not a lot of information on the web about internationalizing (also known as i18n) in Django besides the official documentation. Hopefully these tips and tricks will be useful for you.
What Django gives you
- Detects the preferred language of the user, and uses the files you generate to serve translated and localized templates.
- Gives you tools for translating strings in both HTML files (i.e. templates) and Javascript files.
- Gives you helpful variables in your templates to help you serve the correct content for left-to-right and right-to-left users.
Step 1: Enabling Localization in Django
Create a folder in your site root’s directory (or elsewhere if you see fit), called
locale. This will contain a folder for each language, as well as the files used for translation themselves.
Open up your
settings.py and include or update the following settings:
# Path to locale folder
LOCALE_PATHS = (
    '/path/to/folder/locale',
)

# The language your website is starting in
LANGUAGE_CODE = 'en'

# The languages you are supporting
LANGUAGES = (
    ('en', 'English'), # You need to include your LANGUAGE_CODE language
    ('fa', 'Farsi'),
    ('de', 'German'),
)

# Use internationalization
USE_I18N = True

# Use localization
USE_L10N = True
Also, in each of your views (e.g. in
views.py), you should be setting the request language as a session. For example:
if hasattr(request.user, 'lang'):
    request.session['django_language'] = request.user.lang
Step 2: Internationalizing your Django content
This is really the easy part. Chances are, you’ve got a folder in your Django app called “templates”. Inside, you’ve got HTML, some variables, and whatnot. All you have to do is go through and mark up the strings that need to be translated, like so:
{% trans "My English" %} {% trans myvar %}
You get a lot of flexibility here, as described in the documentation. Essentially what happens is that you label all of your strings that should be translated, and then Django generates a handy file that your translator can use to localize the interface.
Just make sure that at the top of any template you want localized, you actually load the i18n library.
{% load i18n %}
Test it out
You only have to translate a string or two in order to see whether it’s working. Create your translation files using the following command:
$ django-admin.py makemessages --locale=de --extension=html --ignore=env --ignore=*.py
Explanation of the options:
--locale=de
Change this from de to whatever locale you’re going for.
--extension=html
Tells the django engine only to look for .html files.
--ignore=env
In my app, env/ is the folder where my virtual environment exists. I probably don’t want to localize everything that exists in this folder, so we can ignore it.
--ignore=*.py
For some reason, Django keeps trying to localize some of my Python files that exist at the project root. To avoid this, I explicitly ignore such files.
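If you support several locales, running makemessages once per language gets tedious, so it is worth scripting. Here is a small sketch; the locale list is an assumption, adjust it for your project:

```python
# Sketch: build the django-admin makemessages command for several locales.
# The locale list here is an assumption; adjust for your project.
LOCALES = ["de", "fa"]

def makemessages_cmd(locale):
    """Return the argv list for one locale, mirroring the flags above."""
    return [
        "django-admin.py", "makemessages",
        "--locale=" + locale,
        "--extension=html",
        "--ignore=env",
        "--ignore=*.py",
    ]

if __name__ == "__main__":
    import subprocess
    for locale in LOCALES:
        cmd = makemessages_cmd(locale)
        print(" ".join(cmd))
        # Uncomment to actually run it from your project root:
        # subprocess.check_call(cmd)
```

Run it from the project root and each locale gets its own messages file in one pass.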
Once you’ve run this
django-admin.py command, you should take a look inside your
locale/directory. If your app exists at something like
/opt/app/, you’ll find a file structure like this:
/opt/app
--- /locale
------ /de
--------- /LC_MESSAGES
------------ django.po
And within each of these
django.po files, you’ll find pairs of a string, and then a space for a translation, as so:
#: path/to/templates/blah.html:123
msgid "My English."
msgstr ""
Obviously, if you’re in
/opt/app/locale/de/LC_MESSAGES/django.po you’d better provide a German translation as a
msgstr.
Now, compile your messages and we’ll see what we get!
$ django-admin.py compilemessages
Next to each
django.po file, you’ll now also have a
django.mo file. This is the binary file that Django actually uses to fetch translations in real time.
Restart uWSGI and your web server.
Add the language you just localized for to your preferred languages in your browser settings, and pull it to first place. In Chrome, this is Preferences » Advanced » Manage Languages.
When you reload your site, you should see that your string has been translated! Anything that you haven’t translated will remain visible in its original language (in my case, English).
Step 3: Translating Javascript (Javascript itself)
Open up your
urls.py. Append the following:
# 'Packages' should include the names of the app or apps you wish to localize
js_info_dict = {
    'packages': ('app',),
}
And in your
urlpatterns, include:
url(r'^jsi8n/$', 'django.views.i18n.javascript_catalog', js_info_dict),
Now, go into your base template (whichever manages loading your javascript) and place this script first:
<script type="text/javascript" src="{% url 'django.views.i18n.javascript_catalog' %}"></script>
Now you can go into any javascript file and simply place
gettext("") around any string and that string can be localized. For example:
this.$el.find('.a')[0].attr('title', gettext('Show Resources'));
Generating the Javascript messages file
Just as before, when you ran the django-admin.py command to gather all the strings needing translations in your html templates, you can do the same in your javascript files.
$ django-admin.py makemessages -d djangojs --locale de --ignore=env
Again, specify the locale and ignore the files inside my virtual environment. Now, look at the files you have in your
locale/ subdirectories.
/opt/app
--- /locale
------ /de
--------- /LC_MESSAGES
------------ django.po
------------ django.mo
------------ djangojs.po
Simply open up
djangojs.po, translate a string, and run
django-admin.py compilemessages again. You’ll find, as you probably expected, a new file called
djangojs.mo. As before, restart uWSGI and your server, and spin it up in the browser. Again, be sure that you’ve got your test language set as your preferred language in your browser settings.
Step 3b: Translating Javascript Templates (Underscore)
This is where things get a little more interesting. The critical point is this: We want our underscore templates to be served through Django, not through our web server directly (e.g. through Apache or Nginx). These are the steps I took to achieve this:
- Move my underscore templates out of my
static/ folder, and into my
templates/ folder.
- Write a urlpattern that will cause my underscore templates to be run through the django template engine first.
- Update the references to templates in my Javascript (I use RequireJS and the text plugin).
1. Move Underscore Templates
Previously, my project structure was something like this:
app/
--- static/
------ css/
------ js/
--------- views/
--------- templates/
------------ underscore-template.html
--- templates/
------ django-template.html
And I had Nginx serving everything inside of
static/, well, directly, using the following directive in my Nginx conf file:
location /static {
    alias /opt/app/static;
}
Now, instead of this, I want Django to do its magic before Backbone and Underscore go to town on the templates. So I create a folder inside
app/templates/ called
js/. I move all my underscore templates here. So now I have:
app/
--- static/
------ css/
------ js/
--------- views/
--- templates/
------ js/
--------- underscore-template.html
------ django-template.html
2. Write a urlpattern
Now, I’m not positive this is the best way to do this, but it does work. Open up your
urls.py and add this line:
url(r'^templates/(?P<path>.+)$', 'web.views.static'),
What happens now is that whenever Django receives a request for a URL that looks like mysite.com/templates/some/thing.html, it assigns
some/thing.html to a variable
path, and passes that to our web view. So now I open up
app/web/views.py and append this code:
def static(request, path):
    # Update this to use os.path
    directory = '/opt/app/' + request.META['REQUEST_URI']
    template = loader.get_template(directory)

    # This allows the user to set their language
    if hasattr(request.user, 'lang'):
        request.session['django_language'] = request.user.lang

    # I use this email_hash to generate gravatars, incidentally
    context = RequestContext(request, {
        'email_hash': hashlib.md5(request.user.email).hexdigest() if request.user.is_authenticated() else ''
    })

    return HttpResponse(template.render(context))
Now, we’re taking whatever request it was, grabbing that file, and passing it through
template.render. If needed, add this folder to your
settings.py:
TEMPLATE_DIRS = (
    # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
    '/opt/app/templates/',
    '/opt/app/templates/js',
)
Now you can go into any of your underscore template files and mark them up using typical django syntax. Just make sure you remember to include
{% load i18n %} at the top of your underscore templates. For example:
{% load i18n %}
<!-- Page of Greek text for Reader view -->
<div class="page">
    <!-- Page corner, functions as navigation -->
    <div class="corner <%= side %>">
        <a href="#" data-</a>
    </div>
    <!-- Page header -->
    <h1><%= work %> <small>{% trans "by" %} <%= author %>{% trans "," %} <a href="#" data-section</a></small></h1>
    <hr>
    <!-- Greek goes here! -->
    <span class="page-content">
        <% _.each(words, function(word) { %>
            <% if (word.get('sentenceCTS') == cts) { %>
                <span lang="<%= word.get('lang') %>" data-<%= word.get('value') %></span>
            <% } %>
        <% }); %>
    </span>
</div>
In the long run, it may be worth your time to simply switch your html templates purely to Django. However, since the syntax of Underscore and Django don’t clash, it’s a viable solution as far as I’ve experienced.
Once you’ve marked up your underscore templates, simply re-run the same
django-admin.py makemessages command as before.
Just don’t forget to go into your javascript files and change the paths where you’re importing your templates from, so they’re no longer pointing to a static directory. For example:
define(['jquery', 'underscore', 'backbone', 'text!/templates/js/underscore-template.html'],
    function($, _, Backbone, Template) {
        var View = Backbone.View.extend({
            tagName: 'div',
            template: _.template(Template),
            render: function() {
                this.$el.html(this.template(this.model));
                return this;
            }
        });
        return View;
    });
Supporting bidirectional languages
So far, I have had great success with the techniques suggested in this blogpost: RTL CSS with Sass. I’ll just give you a couple of pointers on how to make it easy to implement this with Django.
First, I installed the set_var template tag. This is because I want to use some of the useful
get_language functions that Django makes available to me. Alternatively, you could probably clean this up by putting this logic in your
views.py.
Then, in my
app/templates/base.html, I make use of this template tag and template inheritance as so:
{% load i18n %}
{% load set_var %}

{% get_current_language_bidi as LANGUAGE_BIDI %}
{% if LANGUAGE_BIDI %}
    {% set dir = 'rtl' %}
{% else %}
    {% set dir = 'ltr' %}
{% endif %}

<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>{% trans "My app" %}</title>
    {% block css %}
        <link href="/static/css/{{ css_file }}.{{ dir }}.css" rel="stylesheet">
    {% endblock %}
    <script type="text/javascript" src="{% url 'django.views.i18n.javascript_catalog' %}"></script>
    <script data-</script>
    <script>
        var csrf_token = "{{ csrf_token }}";
        var locale = "{{ LANGUAGE_CODE }}";
        var dir = "{{ dir }}";
    </script>
</head>
<body>
    {% block content %}
    {% endblock %}
</body>
</html>
What do we have here?
- We’re using Django to get the direction our page is – either ltr or rtl.
- We’re making it possible to replace the CSS file based on the page we’re on and the text direction.
- We make a couple of variables global (eek!) for use in our javascript.
Now, you can take any page which inherits from your base template, and set the css_file. For example:
{% extends "base.html" %}

{# Determine which CSS file to load #}
{% block css %}
    {% with 'generic' as css_file %}
        {{ block.super }}
    {% endwith %}
{% endblock %}

{% block content %}
    <!-- Content here -->
{% endblock %}
Note: This assumes that you are generating your CSS files with a command such as this:
$ sass generic.scss generic.ltr.css
And that inside of
generic.scss you’ve got an
@import "directional" wherein you switch the direction between LTR and RTL in order to generate your sets of CSS.
And that’s a wrap!
It’s essentially everything you need to internationalize your Django website and get Django to do a first pass over your underscore templates. If you’ve got suggestions for improving this work flow, by all means, pass them my way! I hope this helps give you some ideas on how to use Django’s built-in internationalization and localization tools to make your life easier.
In this article, we’ll implement Django caching. We will learn what cache is, why to use them, and then finally, we will code and implement caching in our Web Application.
So let’s get started !!
What is Caching?
Caching is the process of saving the result of a time-consuming calculation so that the next time it is required, the result is ready at hand.
Even computer CPUs store cache data in memory so that it can be served faster the next time, saving a lot of processing time. Most large websites, like Facebook and WhatsApp, also use caching to improve response times.
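The idea can be illustrated with a tiny pure-Python cache before we touch Django at all. This is just a sketch of the concept, not Django code:

```python
import time

# Minimal illustration of the caching idea: remember the result of an
# expensive call and reuse it until it expires.
class SimpleCache:
    def __init__(self, timeout=300):
        self.timeout = timeout
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.time() > expires:
            del self._store[key]  # stale entry: recompute next time
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.time() + self.timeout)

cache = SimpleCache(timeout=60)

def expensive_page():
    html = cache.get("page")
    if html is None:            # cache miss: do the slow work once
        html = "<h1>Rendered page</h1>"
        cache.set("page", html)
    return html
```

Every call after the first returns the stored result until the timeout elapses, which is exactly the trade Django's cache framework makes at page scale.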
Django Framework has a set of pre-built options which can be used to cache the websites.
The need for caching
Every time you visit dynamic websites (websites containing dynamic elements like templates, views, data in the server, etc.), the server needs to load the template, view, and retrieve data from the server before displaying it. All this processing requires time.
But in today’s era, every user wants his request to be responded quickly, and even a delay of milliseconds can’t be afforded. So to make the websites faster, we can either do the following:
- Improve CPU hardware
- Improve server software
- Improve Databases
Or we could simply use the method of caching !!
Storing the Cache information
Django cache framework also offers different ways to store the cache information:
- Storing cache in DB
- Storing cache in a file
- Storing cache in the memory
We will now look at each of them individually
1) Storing cache in a DB
Here all the cache data is stored inside the database in a separate table just like the model tables.
Hence we need to tell Django to store the cache in DB. To do that, add the following code in the settings.py
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
        'LOCATION': 'my_cache_table',
    }
}
To store cache in a table, we also need to create a table. Hence in the console, run the code
python manage.py createcachetable
Django now creates the cache table in the DB with the name given in the settings.py – “my_cache_table”
This method is the most used, here the cache speed is dependent on the type of the DB. If you have fast DBs, then this option is the most viable.
2) Storing cache in a file
Here we store the cache as a file in our system. To store the cache as file, add the following code in the settings.py :
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': 'Absolute_path_to_the_directory',
    }
}
Here all the cache files are stored in a folder/directory set in the LOCATION attribute.
Note:
- The server should have access to the directory
- The location should exist beforehand.
- Only the absolute path of the Folder/Directory should be mentioned.
This method is the slowest of all options. But here you don’t need to upgrade your hardware since it is using the already existing storage in the system.
3) Storing Cache in memory
Here we store all the cache files in memory. Django has a default caching system in the form of local-memory caching.
To add the caches in local memory, add the code
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': ('Location1', 'Location2', ...),
    }
}
Here we can save the cache files in different portions. Add the location of all the portions as a tuple in the LOCATION attribute.
This method is by-far the most powerful and fastest of all the above options.
Prerequisites for Django Caching
Now to cache the website, we must first have a View and a corresponding URL path. So add the following sample View into you views.py:
def SampleView(request):
    html = '<h1>Django Caching</h1><br><p>Welcome to Caching Tutorial</p>'
    return HttpResponse(html)
The URL path for the code will be:
path('sample/', SampleView),
Now for the next section, you can store the cache in any of the form shown above:
Storing different parts of the website as cache
In Django, we can:
- Cache only a particular view
- Or cache the full website
We will now look at them individually.
1. Per-Site cache storage
To cache the whole site, add the following code in the MIDDLEWARE section of settings.py
'django.middleware.cache.UpdateCacheMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.cache.FetchFromCacheMiddleware',
Note: The order of the code given above is important. Make sure they are present in the same order.
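The per-site middleware also reads a few related settings. The values below are illustrative assumptions, not requirements; adjust them to taste in settings.py:

```python
# Sketch of the settings that control the per-site cache middleware.
# The values are illustrative assumptions.

CACHE_MIDDLEWARE_ALIAS = 'default'       # which CACHES entry to use
CACHE_MIDDLEWARE_SECONDS = 600           # how long to cache each page
CACHE_MIDDLEWARE_KEY_PREFIX = 'mysite'   # avoids collisions if sites share a cache
```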
Implementation of the per-site storage cache
Run the server and go to the URL path “/sample”
Notice that the website took 13ms to load the site for the first time. Now hit reload and check again.
Notice that now the page reloaded in just 6ms. The time has been cut by more than half.
2. Per-View cache storage
To cache just a particular View, the syntax used will be:
# Method 1: cache_page decorator in views.py
from django.views.decorators.cache import cache_page

@cache_page(200)
def SampleView(request):
    html = '<h1>Django Caching</h1><br><p>Welcome to Caching Tutorial</p>'
    return HttpResponse(html)

# Method 2: cache_page in urls.py
from django.views.decorators.cache import cache_page

urlpatterns = [
    path('sample/', cache_page(200)(SampleView)),
]
The cache_page() decorator takes only one argument – the expiry time of the cache in seconds. We can use either of the two methods shown above.
Implementation of the per-View storage cache
Run the server and hit the URL
The time taken is 22 ms. Now reload and check.
See, now the time taken has dropped to just 8ms.
Conclusion
That’s it, guys!! I hope you have gained good knowledge about caching and how to use it according to your Web application’s needs and requirements. Do practice all the code given above to improve your understanding of the topic. See you in the next article!! Till then, keep coding!!
echo, noecho - enable/disable terminal echo
#include <curses.h>

int echo(void);
int noecho(void);
The echo() function enables Echo mode for the current screen. The noecho() function disables Echo mode for the current screen. Initially, curses software echo mode is enabled and hardware echo mode of the tty driver is disabled. echo() and noecho() control software echo only. Hardware echo must remain disabled for the duration of the application, else the behaviour is undefined.
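For readers experimenting from a high-level language, the same pair of calls is exposed through Python's standard curses bindings. A minimal sketch of a password prompt that suppresses echo; the function name and prompt text are illustrative:

```python
import curses

def read_password(stdscr):
    """Read a line with echo disabled, as a password prompt would.

    Illustrative sketch wrapping the echo()/noecho() pair described above.
    """
    stdscr.addstr("Password: ")
    curses.noecho()          # disable echo while the secret is typed
    try:
        password = stdscr.getstr().decode()
    finally:
        curses.echo()        # restore echo for the rest of the program
    return password

if __name__ == "__main__":
    # curses.wrapper sets up and tears down the terminal safely
    print("You typed:", curses.wrapper(read_password))
```

Restoring echo in a finally block mirrors the man page's warning: leaving the terminal in an unexpected echo state is a common curses bug.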
Upon successful completion, these functions return OK. Otherwise, they return ERR.
No errors are defined.
Input Processing, getch(), <curses.h>, XBD specification, Parameters that Can be Set . | http://pubs.opengroup.org/onlinepubs/007908799/xcurses/noecho.html | CC-MAIN-2017-09 | refinedweb | 102 | 62.54 |
Opened 8 years ago
Closed 5 years ago
#9211 closed (duplicate)
Objects with newlines in representation break popup JavaScript in the admin
Description
When you have these models:
class Note(models.Model):
    text = models.TextField()

    def __unicode__(self):
        return self.text

class Person(models.Model):
    name = models.CharField(max_length=25)
    note = models.ForeignKey(Note)

    def __unicode__(self):
        return self.name
And the following in admin.py:
from django.contrib import admin
from models import Note, Person

admin.site.register(Note)
admin.site.register(Person)
When entering a Person model in the admin, a new Note may be created by clicking the plus icon next to the selection list. If the person entering the note presses return and puts newlines in the TextField, the dismissAddAnotherPopup JavaScript chokes on the Note's representation.
Attached is a patch that escapes carriage returns, however, I want some discussion on whether there needs to be any other escaping performed.
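The failure mode is easy to reproduce outside the admin: a raw newline inside a JavaScript string literal is a syntax error, so any representation embedded unescaped breaks the generated handler. A sketch of the kind of escaping the patch performs; the helper name is made up for illustration, and Django's escapejs filter covers these characters and many more:

```python
def escape_for_js(value):
    """Escape characters that would break a single-quoted JS string literal.

    Hypothetical helper for illustration; a real fix would lean on
    Django's escapejs, which handles a much larger set of characters.
    """
    replacements = [
        ("\\", "\\\\"),   # backslash first, so we don't double-escape
        ("'", "\\'"),
        ("\r", "\\r"),
        ("\n", "\\n"),
    ]
    for old, new in replacements:
        value = value.replace(old, new)
    return value

note_repr = "line one\nline two"
js = "opener.dismissAddAnotherPopup(window, '1', '%s');" % escape_for_js(note_repr)
```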
Attachments (1)
Change History (5)
Changed 8 years ago by jbronn
comment:1 Changed 8 years ago by mtredinnick
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Triage Stage changed from Unreviewed to Accepted
comment:2 Changed 7 years ago by anonymous
- milestone post-1.0 deleted
Milestone post-1.0 deleted
comment:3 Changed 5 years ago by ramiro
comment:4 Changed 5 years ago by ramiro
- Resolution set to duplicate
- Status changed from new to closed
Should this be done with escapejs() instead? That would handle the cases you've identified and a few dozen others as well. At first glance, it looks like the right tool here, but I might be missing something. | https://code.djangoproject.com/ticket/9211 | CC-MAIN-2016-22 | refinedweb | 270 | 57.98 |
Step 1: Concept
We attach the magnetic reed sensor to my office door and door-frame. A wire runs from the magReed sensor to a pin in the Arduino circuit. Arduino watches that pin's status, HIGH or LOW, and when the status changes from one to the other Arduino reports on that change through Serial.write(). The Processing sketch picks up the Serial.write() call and checks to see if the current state is the same as the last one posted to Twitter. If the two states are the same then it will not post, but if the new state is different from the previous state, then we're in business.
Processing uses twitter4j and OAuth to post the new state to your Twitter account. Done and done.
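For readers who skip Processing and go straight to Python (as a later revision of this project does), the concept boils down to a tiny state machine. A hedged sketch: the serial port name and message strings are assumptions, and the posting function is injected so you can wire in python-twitter or anything else:

```python
# Sketch of the door-state loop: read Arduino bytes, detect a state
# change, and hand a message to a poster function. The port name and
# message texts are assumptions for illustration.

OPEN, CLOSED = 1, 2

def state_message(reading):
    """Map an Arduino byte to a human-readable status, or None."""
    if reading == OPEN:
        return "Opened door"
    if reading == CLOSED:
        return "Closed door"
    return None

def handle_reading(reading, last_msg, post_fn):
    """Post only when the state actually changed; return the latest message."""
    msg = state_message(reading)
    if msg is not None and msg != last_msg:
        post_fn(msg)
        return msg
    return last_msg

if __name__ == "__main__":
    import serial  # pyserial; assumes the Arduino enumerates as /dev/ttyUSB0
    arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)
    last = None
    while True:
        data = arduino.read(1)
        if data:
            last = handle_reading(data[0], last, print)  # swap print for a tweet
```

Keeping the "did the state change?" comparison in memory, rather than fetching the previous tweet each time, avoids the rate-limit issues the Processing sketch works around.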
By the way great work ,helped a lot...!!!
I tried this project.
But Only works at just 1st time to my iphone twitter app.
and then never works again --;
My error message is here:
------------------------------------------------------------------
RXTX Warning: Removing stale lock file. /var/lock/LK.027.033.008
What's wrong with me ?
Intel iMac x64, MacOSX 10.8.3
Arduino 1.5.1
Processing 2.0b8
Twitter4j.jar that you offering one.
But i am currently getting these errors on processing:
Stable Library
=========================================
Native lib Version = RXTX-2.1-7
Java lib Version = RXTX-2.1-7
[0] "COM5"
[1] "COM7"
[2] "COM8"
gnu.io.PortInUseException: Unknown Application
at gnu.io.CommPortIdentifier.open(CommPortIdentifier.java:354))
[color=red]This is my current processing code:[/color]
import processing.serial.*;
import twitter4j.conf.*;
import twitter4j.internal.async.*;
import twitter4j.internal.org.json.*;
import twitter4j.internal.logging.*;
import twitter4j.http.*;
import twitter4j.api.*;
import twitter4j.util.*;
import twitter4j.internal.http.*;
import twitter4j.*;
static String OAuthConsumerKey = "ykaw70kvBc6jVV21eLlWA";
static String OAuthConsumerSecret = "PllQqnwaZV7aYuH33vniloGW2U5fbkfYxef2LQVAK0";
static String AccessToken = "20026567-9juOyYchv9k7PXsu7kyrvhpHKkOY3Fg6xTn9vatuA";
static String AccessTokenSecret = "AcIhlVX95nvpFYMyk9oh41PWHe3TXYqhMbSy6hLgQ";
Serial arduino;
Twitter twitter = new TwitterFactory().getInstance();
void setup() {
size(125, 125);
frameRate(10);
background(0);
println(Serial.list());
String arduinoPort = Serial.list()[0];
arduino = new Serial(this, arduinoPort, 9600); [color=red]ERROR BEGINS ON THIS LINE[/color]
loginTwitter();
}
void loginTwitter() {
twitter.setOAuthConsumer(OAuthConsumerKey, OAuthConsumerSecret);
AccessToken accessToken = loadAccessToken();
twitter.setOAuthAccessToken(accessToken);
}
private static AccessToken loadAccessToken() {
return new AccessToken(AccessToken, AccessTokenSecret);
}
void draw() {
background(0);
text("simpleTweet_00", 18, 45);
text("@msg_box", 30, 70);
listenToArduino();
}
void listenToArduino() {
String msgOut = "";
int arduinoMsg = 0;
if (arduino.available() >= 1) {
arduinoMsg = arduino.read();
if (arduinoMsg == 1) {
msgOut = "Opened door at "+hour()+":"+minute()+":"+second();
}
if (arduinoMsg == 2) {
msgOut = "Closed door at "+hour()+":"+minute()+":"+second();
}
compareMsg(msgOut); // this step is optional
// postMsg(msgOut);
}
}
void postMsg(String s) {
try {
Status status = twitter.updateStatus(s);
println("new tweet --:{ " + status.getText() + " }:--");
}
catch(TwitterException e) {
println("Status Error: " + e + "; statusCode: " + e.getStatusCode());
}
}
void compareMsg(String s) {
// compare new msg against latest tweet to avoid reTweets
java.util.List statuses = null;
String prevMsg = "";
String newMsg = s;
try {
statuses = twitter.getUserTimeline();
}
catch(TwitterException e) {
println("Timeline Error: " + e + "; statusCode: " + e.getStatusCode());
}
Status status = (Status)statuses.get(0);
prevMsg = status.getText();
String[] p = splitTokens(prevMsg);
String[] n = splitTokens(newMsg);
//println("("+p[0]+") -> "+n[0]); // debug
if (p[0].equals(n[0]) == false) {
postMsg(newMsg);
}
//println(s); // debug
}
[color=red]And these are the errors on python:[/color]
running... simpleTweet_01_python
arduino msg: #peacefulGlow
Traceback (most recent call last):
File "C:/Users/Ciaran/Desktop/python final", line 47, in
listenToArduino()
File "C:/Users/Ciaran/Desktop/python final", line 24, in listenToArduino
compareMsg(msg.strip())
File "C:/Users/Ciaran/Desktop/python final", line 31, in compareMsg
pM = ""+prevMsg[0]+""
IndexError: list index out of range
>>>
[color=red][font=Verdana]This is my current python code:[/font][/color]
print 'running... simpleTweet_01_python'
# import libraries
import twitter
import serial
import time
# connect to arduino via serial port
arduino = serial.Serial('COM5', 9600, timeout=1)
# establish OAuth id with twitter
api = twitter.Api(consumer_key='ykaw70kvBc6jVV21eLlWA',
consumer_secret='PllQqnwaZV7aYuH33vniloGW2U5fbkfYxef2LQVAK0',
access_token_key='20026567-9juOyYchv9k7PXsu7kyrvhpHKkOY3Fg6xTn9vatuA',
access_token_secret='AcIhlVX95nvpFYMyk9oh41PWHe3TXYqhMbSy6hLgQ')
# listen to arduino
def listenToArduino():
msg=arduino.readline()
if msg > '':
print 'arduino msg: '+msg.strip()
compareMsg(msg.strip())
# avoid duplicate posts
def compareMsg(newMsg):
# compare the first word from new and old
status = api.GetUserTimeline('yourUsername')
prevMsg = [s.text for s in status]
pM = ""+prevMsg[0]+""
pM = pM.split()
nM = newMsg.split()
print "prevMsg: "+pM[0]
print "newMsg: "+nM[0]
if pM[0] != nM[0]:
print "bam"
postMsg(newMsg)
# post new message to twitter
def postMsg(newMsg):
localtime = time.asctime(time.localtime(time.time()))
tweet = api.PostUpdate(hello)
print "tweeted: "+tweet.text
while 1:
listenToArduino()
_______________________________________________________________________
I know atleast one of the buttons are working as it sends the signal of [color=red]arduino msg: #peacefulGlow
[/color] to pyhon when running module but as soon as the button is pressed then error messages appear.
My LED is not lighting up at all :(
I can send pictures of the circuitboard if needed.
Please will someone help me with this.
Either contact me here or and email to broadleyutb@googlemail.com would be great
Thanks
I have now got the led to cycle through some colours but it seems to fade in and out nicely then every few seconds it will blink off then back on?
and I also do not understand how you are supposed to run python+processing (for button code) at the same time of running the arduino RGB LED code as both can not use the same COM (COM5 in my case) at the same time
Im getting errors on processing like, no libraries for twitter4j.http.
So I downloaded a jar file called, twitter4j-2.0.9 and draged it. But I was libraries using twitter4j 2.2.5-
Then another error came up, acces token is ambiguos.
And then on the code I put:
import twitter4j.http.OAuthToken*;
And then it says the Token is not visible.
Im using mac, I really need help. Thanks.
Good luck! If you iron it out please come back and post the answer.
Not an answer, but I found Python to be really easy to use. In fact I don't even use Processing at all anymore.
Im trying to figure how to install the libraries for mac.
Thanks for the great project.
I'm trying to get this code running, but I am stuck installing the library. I've put it in the location you suggested which didn't work. I then tried the default library location (where I have installed other libraries) which is a subdirectory of the sketch folder. This didn't work either.
I did find a discussion about installing libraries into Processing () and tried to change the name of the twitter4j-core.jar to twitter4j.jar so that it matched the library name as they suggested. Processing then recognized the library, but gave me the following error:
"No library found for twitter4j.http"
Any thoughts on what I am doing wrong and why this isn't working?
Thanks, Aaron
I have attached the twitter4j.jar file to this instructable page. I'm not sure if you could just use that file or if there's more installation that needs to happen, but it's here for archival purposes now.
On my Windows 7 box I've installed the twitter4j jar file here:
C:\Program Files\processing-1.5.1\modes\java\libraries\twitter4j\library\twitter4j (executable jar file)
I think I may have had to exit and reopen Processing, or restart the machine, I don't recall. Anyway, with the executable twitter4j file in that directory, it finally showed up in the Processing IDE under Menu:Sketch>Import Library>twitter4j. But you're saying that's not working for you, right?
Well, I did a little surfing and found that I'm a version or so out of date. The libraries can now go into the sketchbook directory, as you seem to know already. This what you mean by "the default library location" yes?
Over at I found this information:
"Processing now allows for a “libraries” folder inside your Processing sketchbook, which is a great deal more convenient than having your 3rd party libraries installed in the Processing application folder."
They're saying you can now install a library like twitter4j here (you add the directories into your \My Documents\Processing\ folder):
C:\My Documents\Processing\libraries\twitter4j\library\twitter4j (executable jar file)
This is mentioned again on the Processing site here:
" libraries must be ... placed within the "libraries" folder of your Processing sketchbook."
(To find the Processing sketchbook location on your computer, open the Preferences window from the Processing application and look for the "Sketchbook location" item at the top.)
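For reference, the layout described above can be sketched from a shell like this. The sketchbook path is an example (check Processing's Preferences for yours), and the empty jar created with `touch` is just a stand-in for the real twitter4j download:

```shell
# Example sketchbook path -- substitute the one from Processing's Preferences.
SKETCHBOOK="$HOME/sketchbook"

# Library folder name must match the jar name: libraries/twitter4j/library/
mkdir -p "$SKETCHBOOK/libraries/twitter4j/library"

# Stand-in for the real twitter4j.jar download.
touch twitter4j.jar
cp twitter4j.jar "$SKETCHBOOK/libraries/twitter4j/library/"

ls "$SKETCHBOOK/libraries/twitter4j/library"
```

After restarting Processing, the library should appear under Sketch > Import Library.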
And in the forums here:
You've tried this both ways and it's not working? I don't know what's up. I'm sorry. Unless someone else contributes here, I'd say ask the Processing forums. They've got a forum just for Contributed Libraries here:
Sorry I don't have the answer for you. It *should* work. ;-)
Good luck!
It needs to be in Processing's libraries folder. On Windows, that directory looks like this:
C:\Program Files\processing-1.5.1\modes\java\libraries
Probably for linux the last few directories are going to be the same:
... processing-1.5.1\modes\java\libraries
Within Processing, you'd import the library by starting with the "Sketch" menu:
Processing > Sketch > Import Library... > twitter4j
And that should do it.
Just add twitter4j-core-2.2.3.jar to your application classpath. If you are familiar with Java language, looking into the JavaDoc should be the shortest way for you to get started. twitter4j.Twitter interface is the one you may want to look at first.
...and then, maybe I'll write an instructable about how to install twitter4j on ubuntu to make it simple :).
Good Luck! | http://www.instructables.com/id/Simple-Tweet-Arduino-Processing-Twitter/CDZ17TAH0OJ2LLL | CC-MAIN-2015-22 | refinedweb | 1,608 | 50.43 |
Ticket #843 (closed defect: wontfix)
south confused by similar app names
Description
I have two apps in separate projects, where one project is common code for two separate interfaces.
webproxy.main
proxycore.main
If I tell south to do a schema migration for webproxy.main, it finds proxycore.main first and doesn't find webproxy.main at all.
It should be able to tell the difference.
(msl9.4)msoulier@espresso:...b/django/webproxy$ python manage.py schemamigration webproxy.main --auto
+ Added model main.UserProfile
+ Added model main.ProxyApp
+ Added model main.ProxyPermission
Created 0002_auto__add_userprofile__add_proxyapp__add_proxypermission.py. You can now apply this migration with: ./manage.py migrate webproxy.main
Note, those models are from proxycore.main
(msl9.4)msoulier@espresso:...b/django/webproxy$ python manage.py schemamigration proxycore.main --auto
Nothing seems to have changed.
So now I must rename proxycore.main to something else, like proxycore.core, as a workaround.
Attachments
Change History
Changed 19 months ago by yedpodtrzitko
- Attachment app_namespace.diff added
comment:1 Changed 19 months ago by anonymous
Hi, there's an initial patch to make South aware of multiple apps w/ the same name (for now only tested with schemamigration --initial command and migrate command)
Regards,
yedpodtrzitko
comment:2 Changed 19 months ago by andrew
- Status changed from reopened to closed
- Resolution set to wontfix
As I've said before, this is a bug in Django, not South - until Django lands the new app loading branch which distinguishes apps by more than label, South will act like this (much like the built-in Django commands like ./manage.py sql) | http://south.aeracode.org/ticket/843 | CC-MAIN-2014-10 | refinedweb | 261 | 52.76 |
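As an illustration of the underlying problem (plain Python, not actual South or Django code): if apps are keyed only by the last component of their dotted path, one of two same-named apps inevitably shadows the other.

```python
# Hypothetical app paths; mimics keying apps by their last name component,
# the way Django's old app-loading kept only the "label".
apps = ["webproxy.main", "proxycore.main"]

by_label = {}
for path in apps:
    label = path.rsplit(".", 1)[-1]     # both reduce to "main"
    by_label.setdefault(label, path)    # first registration wins

print(by_label["main"])                 # the other app is unreachable
```

Only one entry survives under the label "main", which is why renaming one app is the practical workaround.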
hacking: That is, the GMarkup-based XBEL parser that should be used to parse “desktop bookmarks” (recent files, filechooser’s bookmarks, default locations, etc.)
Today I've worked hard on the namespace parsing mechanism, and even though I feel like it's a little too fragile, and it doesn't cover every conceivable XML namespace declaration, it's a start. At the moment, it parses the XBEL streams that I pass to it, resulting from libxml2.
I’ll tighten up the namespace marking routines, in order for it to be XML:NS compliant, and hope that nobody messes up with his own bookmarks. :-)
life: Tonight, I made bread with Marta. On Sunday there will be a party at her parents’, and she’s going to cater for 25+ people. We had to go shopping for two days – and believe me: it has been very tiresome.
exams: tomorrow, a C exam.
Update 2005-09-16@09:39: the EggBookmarkFile code hit CVS tonight, and so did the code in EggRecentManager using it. Profiling is still on, so it'll require auto-foo patching, but everything works nicely at the moment. I'll do more work on the API and more profiling on the widgets, as soon as I can compile sysprof.
PyX — Example: axis/rating.py
Rater: How a nicely looking axis partition is chosen
from pyx import *

p2 = path.curve(0, 0, 3, 0, 1, 4, 4, 4)
p1 = p2.transformed(trafo.translate(-4, 0).scaled(0.75))
p3 = p2.transformed(trafo.scale(1.25).translated(4, 0))

myaxis = graph.axis.linear(min=0, max=10)

c = canvas.canvas()
c.insert(graph.axis.pathaxis(p1, myaxis))
c.insert(graph.axis.pathaxis(p2, myaxis))
c.insert(graph.axis.pathaxis(p3, myaxis))
c.writeEPSfile("rating")
c.writePDFfile("rating")
Description
We here demonstrate how an axis actually chooses its partition.
There are good-looking partitions of an axis and bad-looking ones. The element which chooses one or the other is the rater of an axis. It asks yet another element, the parter, to suggest several partitions and then chooses the best, according to criteria such as tick distance, number of ticks, subticks, etc. (The partitioning process itself is explained in a later example.)
In this example we show the influence of the tick distances: Several axes with the same parameters are plotted on a path which is scaled. Note that the axes choose the ticks appropriately to the available space.
The rating mechanism takes into account the number of ticks and subticks, but also the distances between the labels. Thus, the example in the middle has fewer ticks than the right version, because there is more space available on the larger path. More interestingly, more labels are also shown on the very left path, although it is smaller than the middle one. This is due to the fact that there is not enough room for labels with a decimal place on the smaller axis!
The rating mechanism is configurable and exchangeable by passing rater instances to the rater keyword of the axis constructor. But instead of reconfiguring the whole rating mechanism, simple adjustments to favour more or less ticks are easily possible by the axis keyword argument density.
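As a sketch of that adjustment (the density value here is arbitrary, and the import is guarded since PyX may not be installed where you run this):

```python
try:
    from pyx import graph
except ImportError:
    status = "pyx not installed"
else:
    # density > 1 nudges the rater toward partitions with more ticks.
    myaxis = graph.axis.linear(min=0, max=10, density=2)
    status = "created axis"

print(status)
```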
In this example, the same axis instance is used several times. This works since the axis does not store any data within its own instance. Instead, an anchoredaxis instance is created by the pathaxis, which embeds the axis in a proper environment. This way, a place for storing information for this specific use of the axis instance is provided. When using axes in a graph, the graph instance takes care of the proper setup of the anchored axes instances.
Python GUI Programming: wxPython vs. tkinter
The author compares wxPython to tkinter for ease of use and performance differences while trying to overcome his own biases
I thought Tcl/Tk was old, odd, and obscure, which turns out not to be the best technical judgment I've ever made.
In modern Tcl/Tk, tkinter is simple, powerful, and fast. It looks pretty good too. Modern Tk includes a styling mechanism via themed widgets that allow it to be made even prettier.
Comparing wxPython and tkinter
Comparing wxPython and tkinter using basic examples, we get a sense of what they look like, in code and on-screen. I don't mean to knock wxPython, which is powerful, well-documented, and has been my go-to Python UI solution for many years. But tkinter has some features I really like, particularly the way it handles layout management.
Here are screenshots of two similar examples, one using tkinter (taken from:), one using wxPython (slightly modified from:):
They have comparable controls and layouts. The wxPython version is 76 lines and the tkinter version is 48 lines, most of which is accounted for by layout code.
The wxPython example uses nested HBOX and VBOX sizers, which is my preferred way to handle layout using that toolkit because I find it easier to reason about, and therefore, easier to maintain and modify. The tkinter example uses a grid layout, and this does account for some of the difference in program length. However, it also points to quite a different design choice between the two toolkits: wxPython externalizes the layout classes in its sizer hierarchy, whereas tkinter internalizes layout so that each widget manages its own children using a variety of policies, of which grid is just one.
UI layout in wxPython is not a lot of fun, and I've never found GUI builders - from DialogBlocks to BoaConstructor and beyond - to be much help. Managing the parallel hierarchies of sizers and windows adds complexity without a lot of additional functionality - in that delightful 1990's object-oriented way that seemed like such a good idea to all of us at the time.
tkinter does away with all that by hiding layout policy behind the widget interface. You just add children to their parent's grid. You don't have to create the grid sizer, add the children to it, and then set it as the sizer on the parent. This inevitably creates a bunch of names like "gridSizer1" along the way that you'll regret when it comes time to edit the UI code.
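As a sketch of that difference (widget names and layout here are invented for illustration, not taken from the article's screenshots), a tkinter grid needs no sizer objects at all; the snippet guards against running on a machine without tkinter or a display:

```python
# Children call .grid() on themselves; there is no separate sizer object.
try:
    import tkinter as tk
    from tkinter import ttk
except ImportError:
    status = "tkinter not available"
else:
    try:
        root = tk.Tk()
    except tk.TclError:
        status = "no display"          # headless machine; nothing to show
    else:
        ttk.Label(root, text="Name:").grid(row=0, column=0, sticky="e")
        ttk.Entry(root).grid(row=0, column=1, sticky="we")
        ttk.Button(root, text="OK").grid(row=1, column=1, sticky="e")
        root.columnconfigure(1, weight=1)   # column 1 absorbs extra width
        status = "widgets laid out"
        root.destroy()

print(status)
```

No "gridSizer1"-style intermediate objects appear anywhere: the parent's grid is addressed directly from each child.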
Then there is the speed comparison. For example, how long does it take from typing python myapp.py at the command prompt to the UI being shown on the screen? The difference is negligible on a desktop machine, but on a little embedded ARM processor the wall-clock seconds for these simple example programs come out like this:
wxPython: 6 seconds
tkinter: 1 second
It wasn't the world's most sophisticated test--I just used my watch for the timings--but the difference is so big it doesn't have to be. The canonical absolute response time that users are willing to tolerate is 2 seconds, going from three times that to less than half is kind of a big deal.
I've done a lot of embedded work over the years, and even got wxX11 to compile on an ARM board that has almost as much computing power as a toaster. The C++ version is fast enough that users don't experience perceptible lag. I've run some wxPython-based tools on the same system (maintenance scripts for field engineers, who will tolerate anything) and have always been disappointed at how slow they were. I would love to re-write the main application UI in Python and let C++ do all the low-level stuff underneath, but it just didn't seem practical given even wxGTK/C++ was unacceptably slow on that board.
I've looked at a lot of different toolkits for embedded UI over the years: Qt Embedded, GTK and GTK+, FLTK, and so on. None of them met the criteria of power, maturity, i18n/l10n, and speed that I needed for embedded systems running on boards in the "Raspberry Pi or a bit smaller" category.
Now, I feel like the search may be over, and the solution was right under my nose the whole time. I just never paid attention to it because of an outdated and incorrect attitude toward Tcl/Tk and tkinter.
NOTE: In modern Python the old Tkinter module has been renamed tkinter, and the ttk module has become a submodule of it, so where legacy code did from Tkinter import * and import ttk, modern (Python 3) code will do from tkinter import * and import tkinter.ttk. Also, if you get an error on these imports saying the module _tkinter could not be loaded, your Python install can't find _tkinter.so or equivalent on its LD_LIBRARY_PATH or equivalent, which may be an issue with your environment settings or a problem with your Python build. I had to build Python 2.7.10 from scratch on ARM and then do some fiddling to get _tkinter.so to build. Watch out for error messages during the build, and note that tkinter is built as part of "make install", not "make", at least on Linux. The build can have trouble finding the Tcl and Tk headers and shared libs if your paths aren't set correctly; setting CPPFLAGS, LDFLAGS, and LD_LIBRARY_PATH as described in the answer to this question on Stackoverflow may help.
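A small compatibility shim for that rename (an illustration, not from the article) can keep one code base importing cleanly under both names; it only imports modules, so it is safe without a display:

```python
try:
    import tkinter as tk            # Python 3 name
    from tkinter import ttk         # ttk is now a submodule
    TK_FLAVOR = "py3"
except ImportError:
    try:
        import Tkinter as tk        # legacy Python 2 name
        import ttk                  # ttk was a top-level module
        TK_FLAVOR = "py2"
    except ImportError:
        tk = ttk = None
        TK_FLAVOR = "missing"       # _tkinter not built or not found
```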
Published at DZone with permission of Tom Radcliffe. See the original article here.
Shebangml - markup with bacon
This is an experimental markup language + parser|interpreter with support for plugins and cleanly configurable add-on features. I use it as a personal home page tool and lots of other things.
See Shebangml::Syntax for details.
$hbml->configure(%options);
Adds a handler for a namespace.
$hbml->add_handler($name);
The
$name will have
Shebangml::Handler:: prepended to it, and should already be loaded at this point. It is good practice to declare a version (e.g.
our $VERSION = v0.1.1) in your handler package -- and may be required in the future.
If a
new method is available, a new object will be constructed and stored as the handler. Otherwise, the handler will be treated as a class name. Tags in the handlers namespace are constructed as:
.yourclass.themethod[foo=bar]
or
.yourclass.themethod[foo=bar]{{{content literal}}}
These would cause the processing to invoke one of the following (the latter if you have defined
new()) and send the result to
$hbml->output().
Shebangml::Handler::yourclass->themethod($atts, $content); $yourobject->themethod($atts, $content);
$hbml->add_hook($name => sub {...});
Processes a given input $source. This method holds its own state and can be repeatedly called with new inputs (each of which must be a well-formed shebangml document) using the same $hbml object.
Arguments are passed to "new" in Shebangml::State.
$hbml->process($source);
Handles contentless tags and any tags constructed with the {{{ ... }}} literal quoting mechanism.
$hbml->put_tag($tag, $atts, $string);
$hbml->put_tag_start($tag, $atts);
$hbml->put_tag_end($tag);
$hbml->put_text($text);
This method is called for any whole, starting, or ending tags which start with a dot ('.'). The builtin or plugin handler for the given tag must exist and must have a prototype which corresponds to the way it is used.
$hbml->run_tag($tag, @and_stuff);
Yes, your method should have a prototype.
my $out = $hbml->escape_text($text);
$hbml->put_literal($string);
$hbml->output(@strings);
$hbml->do_include($atts);
$hbml->do_doctype($atts);
Parses one or more lines of attribute strings into pairs and returns an atts object.
my $atts = $self->atts(@atts);
Some parts which might not survive revision:
This is set during process() and becomes accessible for callbacks as a class accessor.. | http://search.cpan.org/~ewilhelm/Shebangml/lib/Shebangml.pm | CC-MAIN-2016-44 | refinedweb | 362 | 57.87 |
The following script is designed to turn on an LED whilst a button is in a depressed state and turn the LED off when the button is released. When run however the LED comes on whether or not the button is held down and stays on regardless. I've tried all day to find out what the problem is but to no avail. Can anybody help me identify what might be the problem? Thanks!
Code:
import RPi.GPIO as GPIO
import time
import os
from time import sleep

GPIO.setmode(GPIO.BOARD)
GPIO.setup(7, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(10, GPIO.OUT)

print "LED ON"
GPIO.output(10, GPIO.HIGH)

while True:
    input_state = GPIO.input(7)
    if input_state == True:
        print "LED OFF"
        GPIO.output(10, GPIO.LOW)
        time.sleep(1)
        exit()

message = input("Press enter to quit\n\n")
GPIO.cleanup()
Hi people,
I'm trying to display an image (gif image) in an applet,
however, it does not show up when the applet is run.
Here is my code:
---------------------------------------------------------
import java.applet.Applet;
import java.awt.Graphics;
import java.awt.Image;
// An applet that loads an image and
// displays it.
public class DrawImage extends Applet
{
    Image image; // an 'Image' object

    // initialise applet
    public void init()
    {
        // Load Image into object 'image'
        // URL url, String file
        image = getImage(getCodeBase(), "gifpic.gif");
    }

    // applet's paint function
    public void paint(Graphics g)
    {
        if (image != null)
        {
            // if image is found then drawImage()
            g.drawImage(image, 0, 0, this);
        }
        else
        {
            // if image is not found, then display message
            System.out.println("image not found");
        }
    }
}
-----------------------------------------------------------------------
Be sure the image is located on the same server the applet is being loaded from. Be sure you have given the correct path to the file. For example, if gifpic.gif is in /images, make the filename in your applet "/images/gifpic.gif".
Also, the check for whether the image is null will not work. Checking on whether an image has been loaded is much more complicated than that. I'm sorry that I don't recall what is involved. I haven't done this in a very long time.
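For the record (not part of the original thread): getImage() returns immediately, before any image data is read, so the reference is non-null even when the file is missing. The era-appropriate way to check whether an image actually loaded is java.awt.MediaTracker; here is a standalone sketch, reusing the question's gifpic.gif file name as a placeholder:

```java
import java.awt.Container;
import java.awt.Image;
import java.awt.MediaTracker;
import java.awt.Toolkit;

public class ImageLoadCheck {

    // Returns "image ready" or "image not loaded" for the given file path.
    static String loadStatus(String path) {
        Image image = Toolkit.getDefaultToolkit().getImage(path);
        MediaTracker tracker = new MediaTracker(new Container());
        tracker.addImage(image, 0);
        try {
            tracker.waitForID(0);          // blocks until loaded or errored
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return tracker.isErrorID(0) ? "image not loaded" : "image ready";
    }

    public static void main(String[] args) {
        System.out.println(loadStatus("gifpic.gif"));
    }
}
```

Inside an applet you would pass `this` to the MediaTracker constructor instead of a throwaway Container.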
Binarization of Image using NumPy
In this article, we will learn how to binarize an image using NumPy, and of course, we will use OpenCV for reading the image both in grayscale and RGB.
To understand what binary is — binary is something that is made of two things. In computer terminology, binary is just 0 and 1. If we were to relate the same in the images, it is said as black and white images where —
- 0 signifies Black
- 1 signifies White.
At the initial stages of learning image processing, we often think of a grayscale image as a binary image, although it is not. But slowly, as we pick up the subject, we realize how wrong we were. So, moving ahead, we will learn how to binarize the image both with and without the library (NumPy is used for matrix operations, just to avoid the slowness of regular for loops). Besides this, we will also use Matplotlib to plot the results.
Credits of Cover Image - Photo by Angel Santos on Unsplash
RGB and Grayscale Overview
The binary operation works really well for grayscale images. The problem with color (RGB) images is that each pixel is a vector of 3 values: one for Red, one for Green, and one for Blue.
A typical grayscale image’s matrix would look like -
array([[162, 162, 162, ..., 170, 155, 128], [162, 162, 162, ..., 170, 155, 128], [162, 162, 162, ..., 170, 155, 128], ..., [ 43, 43, 50, ..., 104, 100, 98], [ 44, 44, 55, ..., 104, 105, 108], [ 44, 44, 55, ..., 104, 105, 108]], dtype=uint8)
A typical RGB image’s matrix would seem like -
array([[[226, 137, 125], ..., [200, 99, 90]], [[226, 137, 125], ..., [200, 99, 90]], [[226, 137, 125], ..., [200, 99, 90]], ..., [[ 84, 18, 60], ..., [177, 62, 79]], [[ 82, 22, 57], ..., [185, 74, 81]], [[ 82, 22, 57], ..., [185, 74, 81]]], dtype=uint8)
If we were to separate the R, G, and B pixels from the above matrix, we get —
R matrix
array([[226, 226, 223, ..., 230, 221, 200], [226, 226, 223, ..., 230, 221, 200], [226, 226, 223, ..., 230, 221, 200], ..., [ 84, 84, 92, ..., 173, 172, 177], [ 82, 82, 96, ..., 179, 181, 185], [ 82, 82, 96, ..., 179, 181, 185]], dtype=uint8)
G matrix
array([[137, 137, 137, ..., 148, 130, 99], [137, 137, 137, ..., 148, 130, 99], [137, 137, 137, ..., 148, 130, 99], ..., [ 18, 18, 27, ..., 73, 68, 62], [ 22, 22, 32, ..., 70, 71, 74], [ 22, 22, 32, ..., 70, 71, 74]], dtype=uint8)
B matrix
array([[125, 125, 133, ..., 122, 110, 90], [125, 125, 133, ..., 122, 110, 90], [125, 125, 133, ..., 122, 110, 90], ..., [ 60, 60, 58, ..., 84, 76, 79], [ 57, 57, 62, ..., 79, 81, 81], [ 57, 57, 62, ..., 79, 81, 81]], dtype=uint8)
Whatever operation we compute on the grayscale image, we will need to compute the same on the RGB image but for 3 times separating R, G, and B pixels and finally merging them as a proper RGB image.
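That split-and-merge round trip can be sketched in a few lines of NumPy (the tiny 2×2 image here is synthetic, purely for illustration):

```python
import numpy as np

# Tiny synthetic RGB image: shape (height, width, 3).
img = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)

# Split into the three channel planes ...
r, g, b = img[:, :, 0], img[:, :, 1], img[:, :, 2]

# ... process each plane here, then merge them back into one RGB image.
merged = np.dstack((r, g, b))

assert merged.shape == img.shape
```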
Time to Code
The packages that we mainly use are -
- NumPy
- Matplotlib
- OpenCV
Import the Packages
import cv2
import numpy as np

For converting the image into a binary image, we can simply make use of the threshold() method available in the cv2 library. This method, irrespective of what the image is (grayscale or RGB), converts it into binary. It takes 4 arguments.
- src → It is basically the image matrix.
- thresh → It is the threshold value based on which pixels are given a new value. With binary thresholding, pixels greater than this value are revalued to maxval (255), and the rest are revalued to 0.
- maxval → It is the maximum pixel value that a typical image could contain (255).
- type → It is basically a thresholding type that is given and based on that type the operation is computed. There are several types with which the operation is taken care of.
After this, we will plot the results to see the variation and hence the below function.
def binarize_lib(image_file, thresh_val=127, with_plot=False, gray_scale=False):
    image_src = read_this(image_file=image_file, gray_scale=gray_scale)
    th, image_b = cv2.threshold(src=image_src, thresh=thresh_val, maxval=255, type=cv2.THRESH_BINARY)
Let’s test the above function -
binarize_lib(image_file='lena_original.png', with_plot=True)
binarize_lib(image_file='lena_original.png', with_plot=True, gray_scale=True)
Now that we have seen the results of both original and binary images, it is obvious that the library code works for both. It’s time to make our hands dirty to code the same from the scratch.
Code Implementation from Scratch
First, we will write a function that revalues the pixels above the specified threshold to white (255) and the remaining pixels to black (0).
By doing it, we will see something like below -
def convert_binary(image_matrix, thresh_val):
    white = 255
    black = 0
    initial_conv = np.where((image_matrix <= thresh_val), image_matrix, white)
    final_conv = np.where((initial_conv > thresh_val), initial_conv, black)
    return final_conv
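On a tiny synthetic matrix (values invented for illustration), the function maps everything above the threshold to white and the rest to black:

```python
import numpy as np

# Repeating the article's definition so this snippet runs standalone.
def convert_binary(image_matrix, thresh_val):
    white = 255
    black = 0
    initial_conv = np.where((image_matrix <= thresh_val), image_matrix, white)
    final_conv = np.where((initial_conv > thresh_val), initial_conv, black)
    return final_conv

# Synthetic 2x2 "image": values straddle the default threshold of 127.
sample = np.array([[10, 200],
                   [127, 128]], dtype=np.uint8)

out = convert_binary(sample, thresh_val=127)
print(out)
# Pixels above the threshold become white (255), the rest black (0):
# [[  0 255]
#  [  0 255]]
```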
We will call the above function three times by separating R, G, and B values and finally merge the same to obtain the binarized image. Once doing it, we can plot the results just like how we did it before.
def binarize_this(image_file, thresh_val=127, with_plot=False, gray_scale=False):
    image_src = read_this(image_file=image_file, gray_scale=gray_scale)
    if not gray_scale:
        cmap_val = None
        r_img, g_img, b_img = image_src[:, :, 0], image_src[:, :, 1], image_src[:, :, 2]
        r_b = convert_binary(image_matrix=r_img, thresh_val=thresh_val)
        g_b = convert_binary(image_matrix=g_img, thresh_val=thresh_val)
        b_b = convert_binary(image_matrix=b_img, thresh_val=thresh_val)
        image_b = np.dstack(tup=(r_b, g_b, b_b))
    else:
        cmap_val = 'gray'
        image_b = convert_binary(image_matrix=image_src, thresh_val=thresh_val)
    if with_plot:
We have made our binarizing code by just using NumPy. Let’s test the same -
binarize_this(image_file='lena_original.png', with_plot=True)
binarize_this(image_file='lena_original.png', with_plot=True, gray_scale=True)
This is it. Whatever we wanted to accomplish, we have accomplished it. The results are quite similar to the one we got by using the library code.
Hence this concludes the aim of this article.
NAME
vm_page_wire, vm_page_unwire -- wire and unwire pages
SYNOPSIS
#include <sys/param.h>
#include <vm/vm.h>
#include <vm/vm_page.h>

void vm_page_wire(vm_page_t m);
void vm_page_unwire(vm_page_t m, int activate);
DESCRIPTION
The vm_page_wire() function increments the wire count on a page, and removes it from whatever queue it is on. The vm_page_unwire() function releases one of the wirings on the page. When wire_count reaches zero the page is placed back onto either the active queue (if activate is non-zero) or onto the inactive queue (if activate is zero). If the page is unmanaged (PG_UNMANAGED is set) then the page is left on PQ_NONE.
AUTHORS
This manual page was written by Chad David <davidc@acns.ab.ca>. | http://manpages.ubuntu.com/manpages/oneiric/man9/vm_page_unwire.9freebsd.html | CC-MAIN-2014-15 | refinedweb | 117 | 58.18 |
>> What if one of your users does something like 'y = M1.x + 1'; then >> what are you going to do? Dave> The goal is *not* to put the program into the state it "would have Dave> been" had the changes in M1 been done earlier. That is Dave> impossible. We simply want to have all *direct* references to Dave> objects in M1 be updated. The above looks like a pretty direct reference to M1.x. <0.5 wink> It seems to me that you have a continuum from "don't update anything" to "track and update everything": don't update update global update all direct update anything funcs/classes references everything reload() super_reload() Dave nobody? Other ideas have been mentioned, like fiddling the __bases__ of existing instances or updating active local variables. I'm not sure precisely where those concepts fall on the continuum. Certainly to the right of super_reload() though. In my opinion you do what's easy as a first step then extend it as you can. I think you have to punt on shared objects (ints, None, etc). This isn't worth changing the semantics of the language even in some sort of interactive debug mode. Sitting for long periods in an interactive session and expecting it to track your changes is foreign to me. I will admit to doing stuff like this for short sessions: >>> import foo >>> x = foo.Foo(...) >>> x.do_it() ... TypeError ... >>> # damn! tweak foo.Foo class in emacs >>> reload(foo) >>> x = foo.Foo(...) >>> x.do_it() ... but that's relatively rare, doesn't go on for many cycles, and is only made tolerable by the presence of readline/command retrieval/copy-n-paste in the interactive environment. Maybe it's just the nature of your users and their background, but an (edit/test/run)+ cycle seems much more common in the Python community than a run/(edit/reload)+ cycle. Note the missing "test" from the second cycle and from the above pseudo-transcript. 
I think some Python programmers would take the opportunity to add an extra test case to their code in the first cycle, where in the second cycle the testing is going on at the interactive prompt where it can get lost. "I don't need to write a test case. It will just slow me down. The interactive session will tell me when I've got it right." Of course, once the interactive sessions has ended, the sequence of statements you executed is not automatically saved. You still need to pop back to your editor to take care of that. It's a small matter of discipline, but then so is not creating aliases in the first place. Dave> Reload() will always be a function that needs to be used Dave> cautiously. Changes in a running program can propagate in strange Dave> ways. "Train wreck" was the term another poster used. Precisely. You may wind up making reload() easier to explain in the common case, but introduce subtleties which are tougher to predict (instances whose __bases__ change or don't change depending how far along the above continuum you take things). I think changing the definitions of functions and classes will be the much more likely result of edits requiring reloads than tweaking small integers or strings. Forcing people to recreate instances is generally not that big of a deal. Finally, I will drag the last line out of Tim's "The Zen of Python": Namespaces are one honking great idea -- let's do more of those! By making it easier for your users to get away with aliases like x = M1.x you erode the namespace concept ever so slightly just to save typing a couple extra characters or executing a couple extra bytecodes. Why can't they just type M1.x again? I don't think the savings is really worth it in the long run. Skip | https://mail.python.org/pipermail/python-list/2004-March/239470.html | CC-MAIN-2014-10 | refinedweb | 647 | 71.65 |
On Mon, Jun 03, 2002 at 10:23:24PM -0500, Karl Fogel wrote:
> Greg Stein <gstein@lyra.org> writes:
>...
> > If the WC and/or client wants to do more work, then empty out your
> > close_edit() function and move the work outside of the editor (to be
> > performed when RA->do_FOO returns). But do not alter the *usage* of the
> > editor to fit your scenario.
>
> Yeah. The only reason I didn't like that was it clutters the RA
> interfaces with a param (the "update_baton" as you call it later) that
> is only used for passing persistent information through.
No way. Stuff the update baton into the edit_baton. The RA layer doesn't
need to know anything about it.
> But I agree -- that way is better than the change to editor usage, and
> is a fine doorway to persisting other gathered information, which
> seems likely to be a Good Thing down the road. Will do it that way,
> & thanks for pushing the suggestion.
Coolio!
> > How do you plan to represent revisions? It would seem that you would want to
> > map directory names onto (URL, revision) pairs.
>...
> (This property value gets parsed later; that's where the revisions and
> URLs and target subdirs will come from.)
But URLs cannot contain revisions. Unless you go on to say that a value is
"rev <space> URL", then I'm not sure what the file format is going to be.
> > Oh: also, I seem to recall a checkin somewhere saying that the dir names in
> > the svn:externals property are *single* components. IMO, that is wrong. They
> > should be relative paths. This allows you to do something like:
>
> The problem I'm thinking of is name conflicts within the checked out
> subdir. If we tamper with a project's ability to govern its own
> namespace, we have to deal with possible conflicts.
We aren't tampering. The person designing the module needs to compensate for
what is happening in the namespaces that he is including.
> For example, what if this is your externals description:
>
> foo...
> foo/bar...
> foo/baz...
> foo/qux...
>
> But at some point the `foo' project decides to add its *own*
> subdirectory named `bar'. Ick. Now what can we do, besides punt?
"Obstructed update". I bet it will just happen automagically. You'll end up
trying to do a "checkout" over the top of an existing directory, or to do an
update, and the URLs aren't going to match.
The point is: this is the problem of the module author. It does not impact
what the 'foo' project can do with its namespace.
> I'm sure we could find some defensible behavior, but if we're not
> losing any major functionality (which AFAICT we're not), why not just
> avoid the problem in the first place?.
Or, let's look at Apache:
httpd-2.0
httpd-2.0/srclib/apr
httpd-2.0/srclib/apr-util
Again: people are hooking "foreign" code underneath a root directory. I
would suggest that many modules are not "sibling" oriented, but nested.
Cheers,
-g
--
Greg Stein,
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
This is an archived mail posted to the Subversion Dev
mailing list. | https://svn.haxx.se/dev/archive-2002-06/0375.shtml | CC-MAIN-2017-26 | refinedweb | 541 | 73.58 |
File attached.
I need some help with a simple program to capitalize the first letter of
each word in a sentence.
I’m not sure if I should be using class Array, but I dont think that’s
the problem.
Here’s the code:
class Array
def title_format (title)
puts title.split(" “).each{|element| print
element.capitalize!}.join(” ")
return title_format
end
end
puts “Please type the sentence you want to have put in “Title Format””
title = gets.chomp
title.title_format
Help would be greatly appreciated. This is the error i get:
sentence_caps.rb:18: undefined method `title_format’ for “please help me
with my program”:String (NoMethodError)
Thanks | https://www.ruby-forum.com/t/simple-program-need-help/212333 | CC-MAIN-2021-49 | refinedweb | 107 | 59.4 |
Created on 2018-12-23 12:58 by hanno, last changed 2018-12-29 02:54 by terry.reedy. This issue is now closed.
2to3 (in python 3.6.6) will rewrite the reload function to use the imp module. However according to [1] "Deprecated since version 3.4: The imp package is pending deprecation in favor of importlib."
Also running the code with warnings enabled will show a deprecation warning.
Example, take this minimal script:
#!/usr/bin/python
import sys
reload(sys)
Running to 2to3 ends up with:
#!/usr/bin/python
import sys
import imp
imp.reload(sys)
$ PYTHONWARNINGS=d python3 foo.py
test.py:3: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
[1]
Seems this came up when the original version of fixer was added as noted in . Looking further this seems to be similar to where it was fixed in 3.7 and master to use importlib.reload . Since 3.6 is having it's last bug fix release I am not sure of the backport. Adding benjamin.peterson to the issue .
It was decided in #21446 to only backport the change, labelled an enhancement, to 3.7 and it is now too late to challenge that decision as 3.6 only gets security fixes. | https://bugs.python.org/issue35570 | CC-MAIN-2021-39 | refinedweb | 221 | 69.68 |
Hi all,Well friendz i have 2004 xli and i want to shift to vti. Plz temme wats the difference in xli and vti's fuel consumption.
well 1 thing is that xli is 1300 cc and vti 1600 cc so vti is more consumption . but it also counts on how u drive , full race , or slow and careful ...
but ive heard that vti's consumtion is less than exi.
i think that xli gives more bcz there is a difference of staright forward 300cc but w.r.t saloon i think vti gives more due to vtec!!!
well if you are upgrading from Corolla to Honda because of speed then i would suggest that dont waste money and
import a corolla VVTI 1.8 as here in UK 1.8 easily smokes Vti 1.6 by 3 cars distance atleast
But if you are going fuel wise even then i will suggest latest Vti as its consumption is MINT and very economical carBut doesent go fast at all not at all
no! i want to change because of vti's shape and features. xli has no faetures except power steering and ac.. no pwer windows no central locking .. low quality deck with only 2 speakers ...
yaar talk comes to interior but i think Xli has 1.3 as i said earlier so it gave you more as compare to Vti simple....thanxxx
Go for SE Saloon which has got air bag ,mp3,wooden finish,far attractive interior,very good consumption & u can touch 220km/hr whenever u want.
Hi remember VTi is 13 lakh and 1.6lit, while XLi is 8.79 lakh and if you ready to spend another 50,000 u can get all the features you hvae mentioned like power windows, mirrors, alloy rims, CD changer etc. After 5 or 10 years when you would like to sell your ride VTi will lose almost 50% of its value, while XLi may sell at the same price or even more if you keep the machine well maintained. So choice is yours. If you care about your hard earned money keep Xli, if you have easy money go for VTi | https://www.pakwheels.com/forums/t/xli-and-vti-fuel-consumption/33635 | CC-MAIN-2017-30 | refinedweb | 362 | 82.34 |
strcasecmp, strncasecmp − compare two strings ignoring case
#include <strings.h>
int strcasecmp(const char *s1, const char *s2);
int strncasecmp(const char *s1, const char *s2, size_t n);.
The strcasecmp() and strncasecmp() functions return an integer less than, equal to, or greater than zero if s1 (or the first n bytes thereof) is found, respectively, to be less than, to match, or be greater than s2.
For an explanation of the terms used in this section, see attributes(7).
4.4BSD, POSIX.1-2001.
bcmp(3), memcmp(3), strcmp(3), strcoll(3), string(3), strncmp(3), wcscasecmp(3), wcsncasecmp(3)
This page is part of release 3.53 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at−pages/. | http://man.linuxtool.net/centos7/u2/man/3_strncasecmp.html | CC-MAIN-2019-30 | refinedweb | 129 | 63.8 |
Opened 11 years ago
Closed 10 years ago
#7334 closed defect (fixed)
tracmercurial problem
Description (last modified by )
Hello, I've installed trac 0.11rc2 tracmercurial 0.11 (TracMercurial-0.11.0.3-py2.4.egg file is in /usr/lib/python2.4/site-packages) Mercurial version is: 1.0 (i didnt have any problems with 0.10.4 with tracmercurial 0.10) but i cannot browse mercurial repos: I see this warning:
Warning: Can't synchronize with the repository (Unsupported version control system "hg". Check that the Python support libraries for "hg" are correctly installed.) Here's entry from my trac.log: 2008-06-14 00:06:21,245 Trac[loader] ERROR: Skipping "hg = tracext.hg.backend": (can't import "No module named tracext.hg.backend")
And here's specific lines from my trac.ini
[components] tracext.hg.* = enabled [trac] repository_dir = /home/arch/hgrepos/l10n repository_type = hg
Attachments (0)
Change History (9)
follow-up: 2 comment:1 by , 11 years ago
comment:2 by , 11 years ago
comment:3 by , 11 years ago
sorry for acting like a newbie :(
i've solved my problem. Here's what i have done: i did a svn co to tracmercurial-0.11
python setup.py bdist_egg then: easy_install --always-unzip TracMercurial-0.11.0.3-py2.4.egg echo "__import__('pkg_resources').declare_namespace(__name__)" > /usr/lib/python2.4/site-packages/TracMercurial-0.11.0.3-py2.4.egg/tracext/__init__.py }} The problem is, a __init__.py is missing in tracext dir.
comment:4 by , 11 years ago
comment:5 by , 11 years ago
Actually Noah fixed this some days ago.
comment:6 by , 11 years ago
comment:7 by , 11 years ago
Fixed in [7217:7218].
comment:8 by , 10 years ago
Fixed? Believe it or not I just ran into the same problem, I'm installing from the multirepos branch at 8584, and there was no init.py in my tracext.
Thought you might want to know.
comment:9 by , 10 years ago
Just verified again all the 3 versions of the plugin using
tracext, the
tracext/__init__.{py,pyc} files are in the .egg.
Replying to dolus@eventualis.org:
Notice the version field in the ticket description: you want to fill in when you file a ticket. | https://trac.edgewall.org/ticket/7334 | CC-MAIN-2019-43 | refinedweb | 377 | 61.73 |
Contents
1 World-Wide Web 7
1.1 WorldWideWeb - Summary : : : : : : : : : : : : : : : : : : 7
1.2 WWW people : : : : : : : : : : : : : : : : : : : : : : : : : : 9
1.2.1 Eelco van Asperen : : : : : : : : : : : : : : : : : : : 10
1.2.2 Carl Barker : : : : : : : : : : : : : : : : : : : : : : : 10
1.2.3 Tim Berners-Lee : : : : : : : : : : : : : : : : : : : 10
1.2.4 Robert Cailliau : : : : : : : : : : : : : : : : : : : : 10
1.2.5 Peter Dobberstein : : : : : : : : : : : : : : : : : : : 10
1.2.6 "Erwise" team : : : : : : : : : : : : : : : : : : : : 11
1.2.7 David Foster : : : : : : : : : : : : : : : : : : : : : : 11
1.2.8 Karin Gieselmann : : : : : : : : : : : : : : : : : : 11
1.2.9 Jean-Francois Groff : : : : : : : : : : : : : : : : : : 11
1.2.10 Willem von Leeuwen : : : : : : : : : : : : : : : : : : 11
1.2.11 Nicola Pellow : : : : : : : : : : : : : : : : : : : : : 11
1.2.12 Bernd Pollermann : : : : : : : : : : : : : : : : : : 12
1.2.13 Pei Wei : : : : : : : : : : : : : : : : : : : : : : : : 12
1.3 Policy : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : 12
1.3.1 Aim : : : : : : : : : : : : : : : : : : : : : : : : : : : 12
1.3.2 Collaboration : : : : : : : : : : : : : : : : : : : : : : 12
1.3.3 Code distribution : : : : : : : : : : : : : : : : : : : : 13
1.3.4 WorldWideWeb distributed code : : : : : : : : : : : 13
1.3.5 Copyright CERN 1990-1992 : : : : : : : : : : : : : : 15
1.4 History to date : : : : : : : : : : : : : : : : : : : : : : : : : 16
2 How can I help? 19
2.1 Information Provider : : : : : : : : : : : : : : : : : : : : : : 20
2.1.1 You have a few files : : : : : : : : : : : : : : : : : : 20
2.1.2 You have a NeXT : : : : : : : : : : : : : : : : : : : 20
2.1.3 Using a shell script : : : : : : : : : : : : : : : : : : : 20
2.1.4 You have many files : : : : : : : : : : : : : : : : : : 20
2 CONTENTS
2.1.5 You have an existing information base : : : : : : : : 20
2.2 Etiquette : : : : : : : : : : : : : : : : : : : : : : : : : : : : 21
2.2.1 Sign it! : : : : : : : : : : : : : : : : : : : : : : : : : 21
2.2.2 Give the status of the information : : : : : : : : : : 22
2.2.3 Refer back : : : : : : : : : : : : : : : : : : : : : : : : 22
2.2.4 A root page for outsiders : : : : : : : : : : : : : : : 22
2.3 Things to be done : : : : : : : : : : : : : : : : : : : : : : : 22
2.3.1 Client side : : : : : : : : : : : : : : : : : : : : : : : : 22
2.3.2 Server side : : : : : : : : : : : : : : : : : : : : : : : 23
2.3.3 Other : : : : : : : : : : : : : : : : : : : : : : : : : : 23
3 Design Issues 25
3.1 Intended Uses : : : : : : : : : : : : : : : : : : : : : : : : : : 26
3.2 Availability on various platforms : : : : : : : : : : : : : : : 26
3.3 Navigational Techniques and Tools : : : : : : : : : : : : : : 27
3.3.1 Defined structure : : : : : : : : : : : : : : : : : : : : 27
3.3.2 Graphic Overview : : : : : : : : : : : : : : : : : : 27
3.3.3 History mechanism : : : : : : : : : : : : : : : : : : : 27
3.3.4 Index : : : : : : : : : : : : : : : : : : : : : : : : : 28
3.3.5 Node Names : : : : : : : : : : : : : : : : : : : : : : 29
3.3.6 Menu of links : : : : : : : : : : : : : : : : : : : : : : 29
3.3.7 Design Issues : : : : : : : : : : : : : : : : : : : : : : 29
3.3.8 Web of Indexes : : : : : : : : : : : : : : : : : : : : : 29
3.4 Tracing Links : : : : : : : : : : : : : : : : : : : : : : : : : : 31
3.5 Versioning : : : : : : : : : : : : : : : : : : : : : : : : : : : : 31
3.6 Multiuser considerations : : : : : : : : : : : : : : : : : : : : 32
3.6.1 Annotation : : : : : : : : : : : : : : : : : : : : : : 32
3.6.2 Protection : : : : : : : : : : : : : : : : : : : : : : : 32
3.6.3 Private overlaid web : : : : : : : : : : : : : : : : : 33
3.6.4 Locking and modifying : : : : : : : : : : : : : : : : : 33
3.6.5 Annotation : : : : : : : : : : : : : : : : : : : : : : : 33
3.7 Notification of new material : : : : : : : : : : : : : : : : : : 34
3.8 Topology : : : : : : : : : : : : : : : : : : : : : : : : : : : : 34
3.8.1 Are links two- or multi-ended? : : : : : : : : : : : : 34
3.8.2 Should the links be monodirectional or bidirectional? 34
3.8.3 Should anchors have more than one link? : : : : : 35
3.8.4 Should links be typed? : : : : : : : : : : : : : : : : 35
3.8.5 Should links contain ancillary information? : : : : : 36
3.8.6 Should a link contain Preview information? : : : : : 36
3.9 Link Types : : : : : : : : : : : : : : : : : : : : : : : : : : : 36
3.9.1 Magic link types : : : : : : : : : : : : : : : : : : : : 36
3.10 Document Naming : : : : : : : : : : : : : : : : : : : : : : : 37
CONTENTS 3
3.10.1 Name or Address, or Identifier? : : : : : : : : : : : : 38
3.10.2 Hints : : : : : : : : : : : : : : : : : : : : : : : : : : 38
3.10.3 X500 : : : : : : : : : : : : : : : : : : : : : : : : : : : 39
3.11 Document formats : : : : : : : : : : : : : : : : : : : : : : : 39
3.11.1 Format negotiation : : : : : : : : : : : : : : : : : : 39
3.11.2 Examples : : : : : : : : : : : : : : : : : : : : : : : : 40
3.12 Design Issues : : : : : : : : : : : : : : : : : : : : : : : : : : 41
3.13 Document caching : : : : : : : : : : : : : : : : : : : : : : : 41
3.13.1 Expiry date : : : : : : : : : : : : : : : : : : : : : : 42
3.14 Scott Preece on retrieval : : : : : : : : : : : : : : : : : : : : 42
3.15 Design Issues : : : : : : : : : : : : : : : : : : : : : : : : : : 43
4 Relevant protocols 45
4.1 File Transfer : : : : : : : : : : : : : : : : : : : : : : : : : : 45
4.2 Network News : : : : : : : : : : : : : : : : : : : : : : : : : 45
4.3 Search and Retrieve : : : : : : : : : : : : : : : : : : : : : : 46
4.4 : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : 46
4.5 HTTP as implemented in WWW : : : : : : : : : : : : : : : 46
4.5.1 Connection : : : : : : : : : : : : : : : : : : : : : : : 46
4.5.2 Request : : : : : : : : : : : : : : : : : : : : : : : : : 46
4.5.3 Response : : : : : : : : : : : : : : : : : : : : : : : : 47
4.5.4 Disconnection : : : : : : : : : : : : : : : : : : : : : : 47
4.6 HyperText Transfer Protocol : : : : : : : : : : : : : : : : : 48
4.6.1 Underlying protocol : : : : : : : : : : : : : : : : : : 48
4.6.2 Idempotent ? : : : : : : : : : : : : : : : : : : : : : : 48
4.6.3 Request: Information transferred from client : : : : 49
4.6.4 Response : : : : : : : : : : : : : : : : : : : : : : : : 51
4.6.5 Status codes : : : : : : : : : : : : : : : : : : : : : : 51
4.6.6 Penalties : : : : : : : : : : : : : : : : : : : : : : : : 52
4.7 Why a new protocol? : : : : : : : : : : : : : : : : : : : : : : 53
5 W3 Naming Schemes 55
5.1 Examples : : : : : : : : : : : : : : : : : : : : : : : : : : : : 55
5.2 Naming sub-schemes : : : : : : : : : : : : : : : : : : : : : : 56
5.3 Address for an index Search : : : : : : : : : : : : : : : : : : 57
5.3.1 Example: : : : : : : : : : : : : : : : : : : : : : : : : 57
5.4 W3 addresses of files : : : : : : : : : : : : : : : : : : : : : : 57
5.4.1 Examples : : : : : : : : : : : : : : : : : : : : : : : : 58
5.4.2 Improvements : Directory access : : : : : : : : : : : 58
5.5 Hypertext address for net News : : : : : : : : : : : : : : : : 58
5.5.1 Examples : : : : : : : : : : : : : : : : : : : : : : : : 59
5.6 Relative naming : : : : : : : : : : : : : : : : : : : : : : : : 59
4 CONTENTS
5.7 HTTP Addressing : : : : : : : : : : : : : : : : : : : : : : : 60
5.8 Telnet addressing : : : : : : : : : : : : : : : : : : : : : : : : 61
5.9 W3 address syntax: BNF : : : : : : : : : : : : : : : : : : : 62
5.10 Escaping illegal characters : : : : : : : : : : : : : : : : : : : 64
5.11 Gopher addressing : : : : : : : : : : : : : : : : : : : : : : : 64
5.12 W3 addresses for WAIS servers : : : : : : : : : : : : : : : : 66
6 HTML 67
6.1 Default text : : : : : : : : : : : : : : : : : : : : : : : : : : : 67
6.2 HTML Tags : : : : : : : : : : : : : : : : : : : : : : : : : : : 68
6.2.1 Title : : : : : : : : : : : : : : : : : : : : : : : : : : 68
6.2.2 Next ID : : : : : : : : : : : : : : : : : : : : : : : : : 68
6.2.3 Base Address : : : : : : : : : : : : : : : : : : : : : 69
6.2.4 Anchors : : : : : : : : : : : : : : : : : : : : : : : : 69
6.2.5 IsIndex : : : : : : : : : : : : : : : : : : : : : : : : : 70
6.2.6 Plaintext : : : : : : : : : : : : : : : : : : : : : : : 70
6.2.7 Example sections : : : : : : : : : : : : : : : : : : : 70
6.2.8 Paragraph : : : : : : : : : : : : : : : : : : : : : : : : 71
6.2.9 Headings : : : : : : : : : : : : : : : : : : : : : : : : 71
6.2.10 Address : : : : : : : : : : : : : : : : : : : : : : : : 72
6.2.11 Highlighting : : : : : : : : : : : : : : : : : : : : : : : 72
6.2.12 Glossaries : : : : : : : : : : : : : : : : : : : : : : : : 72
6.2.13 Lists : : : : : : : : : : : : : : : : : : : : : : : : : : : 72
6.3 SGML : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : 73
6.3.1 High level markup : : : : : : : : : : : : : : : : : : : 73
6.3.2 Syntax : : : : : : : : : : : : : : : : : : : : : : : : : : 74
6.3.3 Tools : : : : : : : : : : : : : : : : : : : : : : : : : : 74
6.3.4 AAP : : : : : : : : : : : : : : : : : : : : : : : : : : : 74
7 Coding Style Guide 75
7.1 Language features : : : : : : : : : : : : : : : : : : : : : : : 75
7.2 Module Header : : : : : : : : : : : : : : : : : : : : : : : : : 76
7.3 Function Headings : : : : : : : : : : : : : : : : : : : : : : : 77
7.3.1 Format : : : : : : : : : : : : : : : : : : : : : : : : : 77
7.3.2 Entry and exit condidtions : : : : : : : : : : : : : : 78
7.3.3 Function Heading: dummy example : : : : : : : : : 78
7.4 Function body layout : : : : : : : : : : : : : : : : : : : : : : 79
7.4.1 Indentation : : : : : : : : : : : : : : : : : : : : : : : 79
7.5 Identifiers : : : : : : : : : : : : : : : : : : : : : : : : : : : : 79
7.6 Directory structure : : : : : : : : : : : : : : : : : : : : : : : 80
7.7 Include Files : : : : : : : : : : : : : : : : : : : : : : : : : : 80
7.7.1 Module include files : : : : : : : : : : : : : : : : : : 80
CONTENTS 5
7.7.2 Common include files : : : : : : : : : : : : : : : : : 81
Chapter 1
World-Wide Web
Documentation on the World Wide Web is normally picked up by browsing with a W3 browser. If you need a printed copy, it is also available in "LaTeX" and "PostScript" formats by anonymous FTP from node info.cern.ch.
This document introduces the "World Wide Web Book", a paper document derived from the hypertext about the project. The book contains
- General information about the project, people and history;
- A list of things to be done, including how YOU can put data onto the web;
- A technical discussion of the design issues in projects such as WWW;
- Actual details of the implementation of the WWW project;
- Such low-level details as software architectures and coding standards.
The text of the book has been automatically generated from the hypertext,
so it may seem strange in places due to links in the hypertext which are
not there in the printed copy.
The authors of the material are in general members of the W3 team at CERN, except where otherwise noted.
1.1 WorldWideWeb - Summary
The WWW project merges the techniques of information retrieval and hypertext to make an easy but powerful global information system.
The project is based on the philosophy that much academic information should be freely available to anyone. It aims to allow information sharing
within internationally dispersed teams. Systems of similar philosophy include the University of Graz's "Hyper-G" and Thinking Machines' "W.A.I.S.".
The WWW model gets over the frustrating incompatibilities of data format between suppliers and readers by allowing negotiation of format between a smart browser and a smart server. This should provide a basis for incorporating new data formats as they appear.
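The negotiation idea can be sketched in a few lines of shell. The format names and preference lists below are invented for illustration only; the real exchange is defined in the HTTP chapter of this book.

```shell
# Toy sketch of format negotiation (format names invented for the
# example): the browser announces what it can render, in order of
# preference, and the server supplies the first of those it holds.
client_accepts="html plaintext"           # browser's preference list
server_has="plaintext html postscript"    # formats the server can supply
chosen=""
for want in $client_accepts; do
  for have in $server_has; do
    if [ "$want" = "$have" ]; then
      chosen=$want
      break 2                             # stop at the first (best) match
    fi
  done
done
echo "agreed format: $chosen"             # prints: agreed format: html
```

Because the browser's list is ordered by preference, both sides agree on the richest format the browser can actually display, and neither needs to know the other's full capabilities in advance.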
You can try the simple line mode browser by telnetting to info.cern.ch with user name "www" (no password). You can also find out more about WWW in this way.
It is much more efficient to install the browser on your own machine. The line mode browser is currently available in source form by anonymous FTP from node info.cern.ch [currently 128.141.201.74] as
/pub/WWWLineMode_v.vv.tar.Z.
(v.vv is the version number - take the latest.) Also available is a hypertext editor for the NeXT using the NeXTStep graphical user interface in file
/pub/WWWNeXTStepEditor_v.vv.tar.Z
and a skeleton server daemon, available as
/pub/WWWDaemon_v.vv.tar.Z
Documentation is readable using www. A plain text version of the installation
instructions is included in the tar file.
Tim BL
1.2 WWW people
This is a list of some of those who have contributed to the WWW project , and whose work is linked into this web. Unless otherwise stated they are at CERN, Phone +41(22)767 plus the extension given below. Address: 1211 Geneva 23, Switzerland.
1.2.1 Eelco van Asperen
Ported the line-mode browser to the PC under PC-NFS; developed a curses version. Email: evas@cs.few.eur.nl.
1.2.2 Carl Barker
Carl is at CERN for a six-month period during his degree course at Brunel University, UK. Carl will be working on the server side, possibly on client authentication. Tel: 8265. Email: barker@cernnext.cern.ch
1.2.3 Tim Berners-Lee
Currently in CN division. Before coming to CERN, Tim worked on, among other things, document production and text processing. He developed his first hypertext system, "Enquire", in 1980 for his own use. Phone: 3755, Email: timbl@info.cern.ch
1.2.4 Robert Cailliau
Currently in ECP division, Programming Techniques group. Robert has been interested in document production since 1975. He ran the Office Computing Systems group from 87 to 89. He is a long-time user of Hypercard, which he used to such diverse ends as writing trip reports, games, bookkeeping software, and budget preparation forms. Robert is contributing browser software for the Macintosh platform, and will be analysing the needs of physics experiments for online data access. Phone: 5005(office), 4646 (Lab). Email: cailliau@cernvm.cern.ch
1.2.5 Peter Dobberstein
While at the DESY lab in Hamburg (DE), Peter did the port of the line mode browser onto MVS and, indirectly, VM/CMS. These were the most difficult of the ports to date.
1.2.6 "Erwise" team
Kim Nyberg, Teemu Rantanen, Kati Suominen and Kari Sydänmaanlakka.
1.2.7 David Foster
With wide experience in networking, and a current conviction that information systems and PC/Windows are the way of the future, Dave is having a go at an MS-Windows browser/editor. Dave also has a strong interest in server technology and intelligent information retrieval algorithms.
1.2.8 Karin Gieselmann
With experience as librarian of the "FIND" database on cernvm, interfacing with authors and readers, Karin has volunteered to be a "hyperlibrarian" and look after the content of hypertext databases. Email: Karin@cernvm.cern.ch
1.2.9 Jean-Francois Groff
Provided some useful input in the "design issues". Currently in ECP/DS as "cooperant", J-F joined the project in September 1991. He wrote the gateway to the VMS Help system, and is looking at new browsers (X-Windows, emacs) and integration of new data sources. Jean-Francois is also working on the underlying browser architecture. Phone: 3755, Email: jfg@cernvax.cern.ch
1.2.10 Willem von Leeuwen
At NIKHEF, Willem put up many servers and has provided much useful feedback about the W3 browser code.
1.2.11 Nicola Pellow
Nicola joined the project in November 1990. She is a student at Leicester Polytechnic, UK, and left CERN at the end of August 1991. She wrote the original line mode browser .
1.2.12 Bernd Pollermann
Bernd is responsible for the "XFIND" indexes on the CERNVM node, for their operation and, largely, their contents. He is also the editor of the Computer Newsletter (CNL), and has experience in managing large databases of information. Bernd is in the AS group of CN division. He has contributed code for the FIND server which allows hypertext access to this large store of information. Phone: 2407 Office: 513-1-16
1.2.13 Pei Wei
Pei is the author of "Viola", a hypertext browser, and the ViolaWWW variant which is a WWW browser. He is at the University of California.
1.3 Policy
This outlines the policy of the W3 project at CERN.
1.3.1 Aim
The aim is that a very wide range of information of all types should be available as widely as possible.
1.3.2 Collaboration
We encourage collaboration by academic or commercial parties. There are always many things to be done, ports to be made to different environments, new browsers to be written, and additional data to be incorporated into the "web". We have already been fortunate enough to have several contributions in these terms, and also with hardware support from manufacturers. If you are interested in extending the web or the software, please mail or phone us.
1.3.3 Code distribution
Code written at CERN is covered by the CERN copyright. In practice the interpretation of this in the case of the W3 project is that the programs are freely available to academic bodies. To commercial organizations who are not reselling it, but are using it to participate in global information exchange, the charge is generally waived in order to cut administrative costs. Code is of course shared freely with all collaborators. Commercial organizations wishing to sell software based on W3 code should contact CERN.
Where CERN code is based on public domain code, that code is also public domain.
Code not originating at CERN is of course covered by terms set by the copyright holder involved.
Tim BL
1.3.4 WorldWideWeb distributed code
See the CERN copyright . This is the README file which you get when you unwrap one of our tar files. These files contain information about hypertext, hypertext systems, and the WorldWideWeb project. If you have taken this with a .tar file, you will have only a subset of the files.
Archive Directory structure
Under /pub/www, besides this README file, you'll find bin, src and doc directories. The main archives are as follows:

src/WWWLineMode_v.vv.tar.Z
    The line mode browser - all source, and binaries for selected systems.
WWWLineModeDefaults.tar.Z
    A subset of WWWLineMode_v.vv.tar.Z. Basic documentation, and our current home page.
src/WWWNextStepEditor.tar.Z
    The hypertext browser/editor for the NeXT - source and binary.
src/WWWDaemon_v.vv.tar.Z
    The HTTP daemon, and WWW-WAIS gateway programs.
doc/WWWBook.tar.Z
    A snapshot of our internal documentation - we prefer you to access this on line (see warnings below).
bin/xxx/www
    Executable binaries for system xxx.
Generated Directory structure
The tar files are all designed to be unwrapped in the same (this) directory. They create different parts of a common directory tree under that directory. There may be some duplication. They also generate a few files in this directory: README.*, Copyright.*, and some installation instructions (.txt).
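The unwrapping behaviour can be simulated locally with dummy archives (the file names and contents below are invented stand-ins for the real distribution files):

```shell
# Build two dummy archives laid out the way the distribution files are,
# each holding a different branch of a shared WWW directory tree.
mkdir -p WWW/LineMode/Implementation WWW/NextStep/Implementation
echo 'dummy source' > WWW/LineMode/Implementation/browser.c
echo 'dummy source' > WWW/NextStep/Implementation/editor.m
tar cf archive1.tar WWW/LineMode     # stand-in for WWWLineMode_v.vv.tar.Z
tar cf archive2.tar WWW/NextStep     # stand-in for WWWNextStepEditor.tar.Z
rm -r WWW                            # back to a clean directory

# Unwrap both in the same directory: each creates its own part of
# one common WWW tree, just as the real archives do.
tar xf archive1.tar
tar xf archive2.tar
ls WWW                               # LineMode and NextStep are both present
```

Because each archive carries its paths relative to the same top directory, extracting them in one place merges their contents into the common tree rather than overwriting one another.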
NeXTStep Browser/Editor
The browser for the NeXT consists of the files contained in the application directory WWW/Next/Implementation/WorldWideWeb.app and is compiled. When you install the app, you may want to configure the default page, WorldWideWeb.app/default.html. This must point to some useful information! You should keep it up to date with pointers to info on your site and elsewhere. If you use the CERN home page note there is a link at the bottom to the master copy on our server.
Line Mode browser
Binaries of this for some systems are in subdirectories of /pub/www/bin. If the binary exists for your system, take that and also the /pub/www/WWWLineModeDefaults.tar.Z. Unwrap the documentation, and put (link) its directory into /usr/local/lib/WWW on your machine. Put the www executable into your path somewhere, and away you go.
If no binary exists, proceed as follows. Take the source tar file WWWLineMode_v.vv.tar.Z, uncompress and untar it. You will then find the line mode browser in WWW/LineMode/Implementation/... (See Installation notes.)
Subdirectories to that directory contain Makefiles for systems to which we have already ported. If your system is not among them, make a new subdirectory with the system name, and copy the Makefile from an existing one. Change the directory names as needed. PLEASE INFORM US OF THE CHANGES WHEN YOU HAVE DONE THE PORT. This is a condition of your use of this code, and will save others repeating your work, and save you repeating it in future releases.
When you install the browsers, you may want to configure the default page. This is /usr/local/lib/WWW/default.html for the line mode browser. This must point to some useful information! You should keep it up to date with pointers to info on your site and elsewhere. If you use the CERN home page note there is a link at the bottom to the master copy on our server.
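As an illustration, a minimal default page might be written as below. The markup follows the early HTML dialect described later in this book; the page contents are invented, the link target shown is the CERN project page, and a local directory stands in for /usr/local/lib/WWW:

```shell
mkdir -p lib/WWW                     # local stand-in for /usr/local/lib/WWW
cat > lib/WWW/default.html <<'EOF'
<TITLE>Example Home Page</TITLE>
<H1>Welcome</H1>
Useful starting points on the web:
<P>
<A HREF=http://info.cern.ch/hypertext/WWW/TheProject.html>The WWW project</A>
EOF
grep -c 'HREF' lib/WWW/default.html  # one link on the page
```

The page needs no more than a title, a heading and at least one anchor; from that single link a reader can reach the rest of the web.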
Some basic documentation on the browser is delivered with the home page in the directory WWW/LineMode/Defaults. A separate tar file of that
directory (WWWLineModeDefaults.tar.Z) is available if you just want to update that.
The rest of the documentation is in hypertext, and so will be readable most easily with a browser. We suggest that after installing the browser, you browse through the basic documentation so that you are aware of the options and customisation possibilities for example.
Documentation
The archive /pub/www/doc/WWWBook.tar.Z is an extract of the text from the WorldWideWeb (WWW) project documentation.
This is a snapshot of a changing hypertext system. The text is provided as example hypertext only, not for general distribution. The accuracy of any information is not guaranteed, and no responsibility will be accepted by the authors for any loss or damage due to inaccuracy or omission. A copy of the documentation is inevitably out of date, and may be inconsistent. There are links to information which is not provided in that tar file. If any of these facts cause a problem, you should access the original master data over the network using www, or mail us.
Servers
The Daemon tar file contains (in this release) the code for the basic HTTP daemon for serving files, and also for the WWW-WAIS gateway. To compile the WAIS gateway, you will need [a link to] a WAIS distribution at the same level as the WWW directory.
General
Your comments will of course be most appreciated, on code, or information
on the web which is out of date or misleading. If you write your own
hypertext and make it available by anonymous ftp or using a server, tell
us and we'll put some pointers to it in ours. Thus spreads the web...

Tim Berners-Lee
WorldWideWeb project
CERN, 1211 Geneva 23, Switzerland
Tel: +41 22 767 3755; Fax: +41 22 767 7155; email: timbl@info.cern.ch
1.3.5 Copyright CERN 1990-1992
The information (of all forms) in these directories is the intellectual property of the European Particle Physics Laboratory (known as CERN). It
is freely available for non-commercial use in collaborating non-military academic institutes. Commercial organisations wishing to use this code should apply to CERN for conditions. Any modifications, improvements or extensions made to this code, or ports to other systems, must be made available under the same terms.
No guarantee whatsoever is provided by CERN. No liability whatsoever is accepted for any loss or damage of any kind resulting from any defect or inaccuracy in this information or code.
Tim Berners-Lee
CERN
1211 Geneva 23, Switzerland Tel +41(22)767 3755, Fax +41(22)767
7155, Email: tbl@cernvax.cern.ch
Tim BL
1.4 History to date
A few steps to date in the WWW project history are as follows:
March 1989 First project proposal written and circulated
for comment (TBL) . Paper "HyperText and
CERN" (in ASCII or WriteNow format) produced
as background.
October 1990 Project proposal reformulated with encouragement from CN and ECP divisional management.
RC is co-author.
November 1990 Initial WorldWideWeb prototype developped on the NeXT (TBL) .
November 1990 Nicola Pellow joins and starts work on the linemode browser . Bernd Pollermann helps get
interface to CERNVM "FIND" index running.
TBL gives a colloquium on hypertext in general.
Christmas 1990 Line mode and NeXTStep browsers demonstrable. Acces is possible to hypertext files, CERNVM
"FIND", and internet news articles.
Febraury 1991 workplan for the purposes of ECP division. 26 February 1991 Presentation of the project to the ECP group. March 1991 Line mode browser (www) released to limited audience on priam vax, rs6000, sun4.
May 1991 Workplan produced for CN/AS group
17 May 1991 Presentation to C5 committee. General release
of www on central CERN machines.
12 June 1991 CERN Computer Seminar on WWW
August 1991 Files available on the net, posted on alt.hypertext (6, 16, 19th Aug), comp.sys.next (20th), comp.text.sgml and comp.mail.multi-media (22nd).
October 1991 VMS/HELP and WAIS gateways installed. Mailing lists www-interest and www-talk@info.cern.ch started. One year status report.
Anonymous telnet service started.
December 1991 Presented poster and demonstration at HT91 . W3 browser installed on VM/CMS. CERN
computer newsletter announces W3 to the HEP
world.
15 January 1992 Line mode browser release 1.1 available by anonymous FTP. See news. Presentation to AIHEP'92 at La Londe.
12 February 1992 Line mode v 1.2 announced on alt.hypertext, comp.infosystems, comp.mail.multi-media, cern.sting, comp.archives.admin, and mailing lists.
Chapter 2
How can I help?
There are lots of ways you can help if you are interested in seeing the web grow and be even more useful...
Put up some data There are many ways of doing this. The web
needs both raw data - fresh hypertext or old
plain text files, or smart servers giving views of
existing databases. See more details, etiquette.
Suggest someone else does Maybe you know a system which it would be neat to have on the web. How about suggesting
to the person involved that they put up
a W3 server?
Manage a subject area If you know something of what's going on in a particular field, organization or country, would
you like to keep up-to-date an overview of online
data?
Write some software We have a big list of things to be done. Help yourself - all contributions gratefully received! See the list.
Send us suggestions We love to get mail... www-bug@info.cern.ch
Tell your friends Install/get installed the client software on your site. Quote things by their UDI to allow w3 users to pick them straight up.
Tim BL
2.1 Information Provider
There are many ways of making your new or existing data available on the "web". The best method depends on what sort of data you have. (If you have any questions, mail the www team at www-bug@info.cern.ch.) See also: Web etiquette, How can I help?
2.1.1 You have a few files
If you have some plain text files then you can easily write a small hypertext file which points to them. To make them accessible you can use either anonymous FTP , or the HTTP daemon .
2.1.2 You have a NeXT
You can use our prototype hypertext editor to create a web of hypertext, linking it to existing files. This is not YET available for X11 workstations. This is a fast way of making online documentation, as well as performing the hyper-librarian job of making sure all your information can be found.
2.1.3 Using a shell script
An HTTP daemon is such a simple thing that a simple shell script will often suffice. This is great for bits of information available locally through other programs, which you would like to publish. More details ...
2.1.4 You have many files
In this case, for speed of access, the HTTP daemon will probably be best. You can write a tree of hypertext in HTML linking the text files, or you can even generate the tree automatically from your directory tree. If you want to generate a full-text index, then you could use the public domain WAIS software - your data will then be accessible (as plain text, not hypertext) through the WAIS gateway .
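As an illustration of generating the tree automatically from a directory, here is a minimal Python sketch (Python and the exact tag layout are assumptions of this example, not part of the original toolset; the markup is the simple early HTML dialect the text mentions):

```python
import os

def directory_to_html(path, entries=None):
    """Build a simple HTML index page for a directory.

    `entries` may be supplied directly (e.g. for testing); otherwise
    the directory is listed with os.listdir. Tag usage follows the
    simple early HTML of the text: TITLE, UL/LI, and A with HREF.
    """
    if entries is None:
        entries = sorted(os.listdir(path))
    lines = ["<TITLE>Index of %s</TITLE>" % path, "<UL>"]
    for name in entries:
        # Each plain file becomes one hypertext link in the list.
        lines.append('<LI><A HREF="%s/%s">%s</A>' % (path, name, name))
    lines.append("</UL>")
    return "\n".join(lines)
```

A real generator would recurse into subdirectories, producing one such page per node of the directory tree.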
2.1.5 You have an existing information base
If you have a maintained base of information, don't rush into changing the way you manage it. A "gateway" W3 server can run on top of your existing system, making the information in it available to the world. This is how it works:
- Menus map onto sets of hypertext links
- Different search options map onto different "index" document addresses (even if they use the same index underneath in your system).
- Procedures used by those who contribute and manage information stay unaltered.
If your database is WAIS, VMS/HELP, XFIND, or Hyper-G, a gateway
exists already. These gateway servers did not take long to write. You can
pick up a skeleton server in C from our distribution . You can also write
one from scratch, for example in perl. An advantage of a gateway is that
you can maintain your existing procedures for creating text and managing
the database. See: Tips for server writers.
Tim BL
2.2 Etiquette
There are a few conventions which will make for a more useable, less confusing, web.
2.2.1 Sign it!
An important aspect of information which helps keep it up to date is that one can trace its author. Doing this with hypertext is easy - all you have to do is put a link to a page about the author (or simply to the author's phone book entry).
Make a page for yourself with your mail address and phone number. At the bottom of files for which you are responsible, put a small note - say just your initials - and link it to that page. The address style (right justified) is useful for this.
Your author page is also a convenient place to put any disclaimers, copyright notices, etc. which law or convention require. It saves cluttering up the messages themselves with a long signature.
If you are using the NeXT hypertext editor, then you can put this link from your default blank page so that it turns up on the bottom of each new document.
2.2.2 Give the status of the information
Some
complete? What is its scope? For a phone book for example, what set of people are in it?
2.2.3 Refer back
You may create some data as part of an information tree, but others may make links to it from other places. Don't make assumptions about what people have just read. Make links from your data back to more general data, so that if people have jumped in there, and at first they don't understand what it's all about, they can pick up some background information to get the context.
2.2.4 A root page for outsiders
I suggest you put a "map" line into your daemon rule file to map the document name "/" onto such a document. As well as a summary of what is available at your host, pointers to related hosts are a good idea.
Tim BL
2.3 Things to be done
There are many of these... if you have a moment, take your pick! There are also special lists of things to do for the line mode browser, the NeXT browser, and the daemon.
2.3.1 Client side
More clients Clients exist for many platforms, but not all.
Editors only exist on the NeXT, but will be
really useful for sourcing info and group work.
(Group editor?)
Search engines Now the web of data and indexes exists, some really smart intelligent algorithms ("knowbots?")
could run on it. Recursive index and link tracing,
Just think...
Text from hypertext We need a quick way to print a book from the web. (html to tex?)
2.3.2 Server side
Server upgrade Easier to install, port. Export directories as
hypertext. Run shell scripts embedded in the
directory for virtual documents and searches.
More Servers See the list of things we have thought of or been pointed to.
WAIS integration WAIS protocol extensions to allow hypertext; HTML data type, docids to be conforming UDIs.
Integrate WAIS in client too.
Integrate client and server A client which generates HTML becomes a general purpose gateway. Especially useful
for sites where general access to news, external
internet, etc, is limited.
2.3.3 Other
Mail server Update listserv to supply www documents from
UDI, including at the bottom a list of references
with their UDIs.
Gateways JANET and DECnet for example. Real need.
HTTP enhancements Format conversion, authorization, better logging information for statistics.
Tim BL
Chapter 3
Design Issues
This lists decisions to be made in the design or selection of a hypermedia
information system. It assumes familiarity with the concept of hypertext.
A summary of the uses of hypertext systems is followed by a list of features
which may or may not be available. Some of the points appear in the
Comms ACM July 88 articles on various hypertext systems. Some points
were discussed also at ECHT90 . Tentative answers to some design decisions
from the CERN perspective are included.
Here are the criteria and features to be considered:
- Intended uses of the system.
- Availability on which platforms?
- Navigational techniques and tools: browsing, indexing, maps etc
- Keeping track of previous versions of nodes and their relationships
- Multiuser access: protection, editing and locking, annotation.
- Notifying readers of new material available
- The topology of the web of links
- The types of links which can express different relationships between nodes
These are the three important issues which require agreement between systems which can work together:
- Naming and Addressing of documents
- Protocols
- The format in which node content is stored and transferred
- Implementation and optimisation - Caching, smart browsers, knowbots etc., format conversion, gateways.
3.1 Intended Uses
Here are some of the many areas in which hypertext is used. Each area has its specific requirements in the way of features required.
- General reference data - encyclopaedia, etc.
- Completely centralized publishing - online help, documentation, tutorial etc
- More or less centralized dissemination of news which has a limited life
- Collaborative authoring
- Collaborative design of something other than the hypertext itself
- Personal notebook
The CERN requirement has a mixture of many of these uses, except that there is not a requirement for distribution of fixed hypertext on hard media such as optical disk. Evidently, the system will have to be networked, though databases may start life at least as personal notebooks. [The (paper) document "HyperText and CERN" describes the problem to be solved at CERN, and the requirements of a system which solves them.]
3.2 Availability on various platforms
The system is to be available (at CERN) on many sorts of machine, but priorities must be decided. A list comprises:
- A unix or VMS workstation with X-windows
- An 80 character terminal attached to a unix or VMS machine, or an MSDOS PC
- An 80 character terminal attached to an IBM mainframe running VM/CMS
- A Macintosh
- A unix workstation with NextStep
- An MS-DOS PC with graphics
3.3 Navigational Techniques and Tools
TBL There are a number of ways of accessing the data one is looking for. Navigational access (i.e., following links) is the essence of hypertext, but this can be enhanced with a number of facilities to make life more efficient and less confusing.
3.3.1 Defined structure
It is sometimes nice for a reader to be able to reference a document structure
built specifically to enhance his understanding, by the document author.
This is especially important when the structure is part of the information
the author wishes to convey.
See a separate discussion of this point .
3.3.2 Graphic Overview
A Graphic overview is useful and could be built automatically. Should it be made by the author, server, browser or an independent daemon?
3.3.3 History mechanism
This allows users to retrace their steps. Typical functions provided can be interpreted in a hypertext web as follows:
Home Go to initial node
Back Go to the node visited before this one in chronological order. Modify the history to remove the
current node.
Next When the current node is one of several nodes linked to the back node, go to the next of those
nodes. Leave the Back node unchanged. Modify
the history to remove the current node and
replace it with the "next" (new current) node.
Previous When the current node is one of several nodes linked to the back node, go to the preceding
one of those nodes.
In many hypertext systems, a tree structure is forcibly imposed on the data, and these functions are interpreted only with respect to the links in the tree. However, the reader as he browses defines a tree, and it may be more relevant to him to use that tree as a basis for these functions. I would therefore suggest that an explicit tree structure not be enforced.
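The Home/Back/Next semantics above can be sketched in a few lines of Python (an illustrative model only: the `links` table standing in for the hypertext web, and the class and method names, are assumptions of this sketch):

```python
class History:
    """Browsing history following the semantics described above.

    `links` maps a node to the ordered list of nodes linked from it;
    the trail is the tree the reader defines as he browses.
    """
    def __init__(self, links, home):
        self.links = links
        self.trail = [home]          # chronological list of visited nodes

    def current(self):
        return self.trail[-1]

    def follow(self, node):
        self.trail.append(node)

    def home(self):
        # Go to the initial node.
        self.trail = [self.trail[0]]
        return self.current()

    def back(self):
        # Go to the node visited before this one; the current node
        # is removed from the history.
        if len(self.trail) > 1:
            self.trail.pop()
        return self.current()

    def next(self):
        # The current node is one of several linked from the back node:
        # move to the next of those nodes, replacing the current node
        # in the history and leaving the back node unchanged.
        back_node = self.trail[-2]
        siblings = self.links[back_node]
        i = siblings.index(self.current())
        self.trail[-1] = siblings[(i + 1) % len(siblings)]
        return self.current()
```

Note that `next` consults the links of the back node rather than any enforced tree, matching the suggestion that the reader's own browsing trail, not a fixed structure, drives these functions.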
3.3.4 Index
An Index helps new readers of a large database quickly find an obscure node. Keyword schemes I include in the general topic of indexes. The index must, like a graphic overview, be built either by the author, or automatically by one of the server, browser, or a daemon. The index entries may be taken from the titles, a keyword list, or the node content or a combination of these. Note that keywords, if they are specifically created rather than random words, map onto hypertext concept nodes, or nodes of special type keyword. It is interesting to establish an identity relationship between keywords in two different databases { this may lead a searcher from one database into another.
Index schemes are important but indexes or keywords should look like normal hypertext nodes. The particular special operation one can do with a good keyword index system which one can't do with a normal hypertext
See also: HyperText and Information Retrieval
3.3.5 Node Names
These allow faster access if one knows the name. They allow people to give references to hypertext nodes in other documents, over the telephone, etc. This is very useful. However, in Notecards, where the naming of nodes was enforced, it was found that thinking up names for nodes was a bore for users. KMS thought that being able to jump to a named node was important. The node name allows a command line interface to be used to add new nodes.
I think that naming a node should be optional: perhaps by default the system could provide a number which can be used instead of a name. The system should certainly support the naming of nodes, and access by name.
3.3.6 Menu of links
Regular linkwise navigation may be done with hotspots (highlighted anchors) or may be done with a menu. It may be useful to have a menu of all the links from a given node as an alternative way of navigating. Enquire, for example, offers a menu of references as the only way
3.3.7 Web of Indexes
In WWW , an index is a document like any other. An index may be built to cover a certain domain of information. For example, at CERN there is a CERN computer center document index . There is a separate functional telephone book index . Indexes may be built by the original information provider, or by a third party as a value-added service.
Indexes may point to other indexes. An index search on one index may turn up another index in the result hit list. In this case, the following algorithm seems appropriate.
Index context
Most index searches nowadays, though some look like intelligent semantically aware searches, are basically associative keyword searches. That is, a document matches a search if there is a large correlation (with or without boolean operations) between the set of words it or its abstract contains and the set of words specified in the search. Let us consider extending these searches to linked indexes.
Context narrowing
Suppose we search a general physics index with the keywords "CERN NEWSLETTER". That index may contain an entry with keyword "CERN" pointing to the CERN index. Therefore, a search on the first index will turn up the CERN index. We should then search the CERN index, but looking only for the keyword "NEWSLETTER". The keyword "CERN" is discarded, as it is assumed by the new context. In this simple model, we can assume that the context words could be used directly as the keywords for the index itself.
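This narrowing step can be sketched as a small recursive search (Python; the data layout and function name are illustrative assumptions, and matching is a simple "any keyword" test rather than a real correlation measure):

```python
def search(indexes, name, keywords):
    """Associative keyword search with context narrowing.

    `indexes` maps an index name to a dict of entry -> keyword set.
    An entry whose name is itself an index is treated as a pointer
    to it: the keywords it matched form its context and are discarded
    before the sub-index is searched in turn.
    """
    hits = []
    for entry, words in indexes[name].items():
        matched = [k for k in keywords if k in words]
        if not matched:
            continue
        if entry in indexes:
            # The hit is itself an index: search it with the
            # context keywords discarded, as described above.
            narrowed = [k for k in keywords if k not in words]
            hits.extend(search(indexes, entry, narrowed))
        else:
            hits.append(entry)
    return hits
```

With a general physics index containing a "CERN index" entry keyed on CERN, a search for CERN NEWSLETTER follows the pointer and searches the CERN index for NEWSLETTER alone.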
Context Broadening
We have discussed here only a narrowing of context, not a broadening.
One can imagine also a reference to a broader context index. In this case,
perhaps one should add to the search some keywords which come from the
original context but were not expressed. This would be dangerous, and
people would not like it as they often feel that they are expressing their
request in absolute terms even when they are not. Also, they may have
been trying to escape from too restricting a context.
One should also consider a search which traces hypertext links as well as using indexes.
See also: Navigational techniques, Hypertext and IR.
Tim BL
3.4 Tracing Links
A form of search in a hypertext base involves tracing the links between given nodes. For example, to find a module suitable for connecting a decstation to SCSI, one might try finding paths between a document on decstations and a document on SCSI. This is similar to relevance feedback in index searching.
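The decstation-to-SCSI example can be sketched as a breadth-first search over the link structure (Python; the node names and `links` table are illustrative data, not a real web, and link types are ignored here):

```python
from collections import deque

def trace(links, start, goal):
    """Find a shortest chain of links between two documents.

    `links` maps a node to the list of nodes it references.
    Returns the path as a list of nodes, or None if no path exists.
    """
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in links.get(path[-1], []):
            if nxt in seen:
                continue
            if nxt == goal:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None
```

The intermediate documents on the returned path are the candidates of interest - here, modules linked from both endpoints.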
Tracing is made more powerful by using typed links . In that case, one
could perform semantic searches for all document written by people who
were part of the same organisation as the author of this one, for example.
This can use node typing as well.
When using link tracing, documents take over from keywords. See Scott Preece's vision .
Tim BL
3.5 Versioning
Definition: The storage and management of previous copies of a piece of
information, for security, diagnostics, and interest.
Do you want version control?
Can you reference a version only?
If you refer to a particular place in a node, how does one follow it in a new version, if that place ceases to exist?
(Peter Aiken is the expert in this area - Tim Oren, Apple)
Yes, at CERN we will want versioning. Very often one wants to correct a news item, even one of limited life, without reissuing it. This is a problem with VAX/NOTES for example. I would suggest that the text for the
3.6 Multiuser considerations
3.6.1 Annotation
Annotation is the linking of a new commentary node to someone else's existing node. It is the essence of a collaborative hypertext. An annotation does not modify the text necessarily: one can separate protection against writing and annotation.
3.6.2 Protection
Protection against unauthorized reading and writing is provided by servers. We use the word domain.
3.6.3 Private overlaid web
3.6.4 Locking and modifying
- One can write-protect the file temporarily. This unfortunately leaves no clue as to who has locked it, when and why. It is also indistinguishable from a genuine protection of a document which should not be modified.
- One can create a lock file containing information about who/when/why, whose name is derived from the name of the file in question.
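The lock-file alternative can be sketched as follows (Python; function names and the lock-file naming and record format are assumptions of this sketch - a production version would create the file atomically, e.g. with O_CREAT|O_EXCL):

```python
import os
import time

def lock_path(filename):
    # The lock file's name is derived from the locked file's name.
    return filename + ".lock"

def acquire_lock(filename, who, why):
    """Create a lock file recording who locked the file, when and why.

    Raises RuntimeError if a lock already exists, quoting its contents
    so the second writer can see who holds it.
    """
    path = lock_path(filename)
    if os.path.exists(path):
        with open(path) as f:
            raise RuntimeError("already locked: " + f.read())
    with open(path, "w") as f:
        f.write("%s %s %s\n" % (who, time.ctime(), why))

def release_lock(filename):
    os.remove(lock_path(filename))
```

Unlike plain write-protection, the lock file answers the who/when/why questions directly, and its presence is clearly distinguishable from a genuinely read-only document.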
3.7 Notification of new material
Does one need to bring it to a reader's attention when new unread material is added?
ffl Asynchronously (e.g. by mail) when the update is made?
ffl Synchronously when he browses or starts the application?
ffl Under the control of the modifying author? (i.e. can I say whether my change is a notifiable change? - Yes)
How do you express interest - in a domain, in a node, in things near a node, in anything you have read already, etc? A separate web which is stored locally, and logically overlays the public web?
There.
3.8 Topology
Here are a few questions about the underlying connectivity of a hypertext web.
3.8.1 Are links two- or multi-ended?
The term "link" normally indeicates with two ends. Variations of this are liks with multiple sources and/or multiple destinations, and constructs which relate more than two anchors. The latter map onto logic description systems, predicate calculus, etc. See the "Aquanet" system from Xerox PARC - paper at HT91). This is a natural step from hypertext whose the links are typed with semantic content. For example, the relation "Document A is a basis for document B given argument C". From now on however, let us restrict ourselves to links in the conventional sense, that is, with two ends.
3.8.2 Should the links be monodirectional or bidirectional?
If they are bidirectional, a link always exists in the reverse direction. A disadvantage of this being enforced is that it might constrain the author of
This
is important when a critical parameter of the system is how long it takes
someone to create a link.
KMS and HyperCard have one-way links; Enquire has two-way links. There is a question of how one can make a two-way link to a protected database. The automatic addition of the reverse link is very useful for enhancing the information content of the database. See also: Private overlaid web, Generic Links.
It may be useful to have bidirectional links from the point of view of managing data. For example: if a document is destroyed or moved, one is aware of what dangling links will be created, and can possibly fix them.
A compromise is that links be one-way in the data model, but that a reverse link is created when any link is made, so long as this can be done without infringing protection. An alternative is for the reverse links to be gathered by a background process operating on a basically monodirectionally linked web.
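The first compromise can be sketched in a few lines (Python; the `web` mapping and the `protected` set are illustrative stand-ins for real node storage and protection domains):

```python
def add_link(web, src, dst, protected=frozenset()):
    """Add a one-way link src -> dst and, following the compromise
    described above, the reverse link dst -> src whenever dst is not
    protected against modification.

    `web` maps each node to the set of nodes it links to.
    """
    web.setdefault(src, set()).add(dst)
    if dst not in protected:
        web.setdefault(dst, set()).add(src)
```

A background gatherer would do the same reverse-link bookkeeping later, scanning the forward links instead of acting at link-creation time.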
3.8.3 Should anchors have more than one link?
There is a design issue in whether one anchor may lead to many links, and/or one link have many anchors. It seems reasonable for many anchors to lead to the same reference. If one source anchor leads to more than one destination anchor, then there will be ambiguity if the anchor is clicked on with a mouse. This could be resolved by providing a menu to the user, but I feel this would complicate it too much. I therefore suggest a many-to-one mapping. JFG disagrees and would like to see a small menu presented to the user if the link was ambiguous. Microcosm does this.
3.8.4 Should links be typed?
A typed link carries some semantic information, which allows the system to manage data more efficiently on behalf of the user. A default type ("untyped") normally exists in some form when types are implemented. See also a list of some types . (Should a link be allowed to have many types? (- JFG ) I don't think so: that should be represented by more than one link.(- TBL ))
Link typing helps with the generation of graphical overviews , and with automatic tracing .
3.8.5 Should links contain ancillary information?
Does the system allow dating, versioning, authorship, comment text on a link? If so, how is it displayed and accessed? This sort of information complicates the issue, in that readable information is no longer carried within node contents only. Pretty soon, following this path leads to a link becoming a node in itself, annotatable and all. This perverts the data model significantly, and I cannot see that that is a good idea. Information about the link can always be put in the source node, or in an intermediate node, for example an annotation. However, this makes tracing more difficult. It is certainly nice to be able to put a comment on a link. Perhaps one should make a link annotatable. I think not.
3.8.6 Should a link contain Preview information?
This is information stored at the source to allow the reader to check whether he wants to follow a link before he goes. I feel that the system may cache some data (such as the target node title), or the writer of the node may include some descriptive material in the highlighted spot, but it is not necessary to include preview information just because access may be slow. Caching should be done instead of corrupting the user interface. If you have a fast graphic overview , this could
3.9 Link Types
See discussion of whether links should be typed .
Descriptive (normal) link types are mainly for the benefit of users and
tracing, and graphics representation algorithms. Some link types for example
express relationships between the things described by two nodes.
A Is part of B / B includes A
A Made B / B is made by A
A Uses B / B is used by A
A refers to B / B is referred to by A
3.9.1 Magic link types
These have a significance known to the system, and may be treated in special ways. Many of these relate whole nodes, rather than particular
anchors within them. (See also multiended links and predicate logic) They might include:
Annotation
The information in the destination node is additional to that in the source
node, and may be viewed at the same time. It may be filtered out (as a
function of author?).
Annotation is used by one person to write the equivalent of "margin notes" or other criticism on another's document, for example. Tracing may ignore annotations when generating trees or sequences.
Embedded information
If this link is followed, the node at the end of it is embedded into the
display of the source node. This is supported by Guide, but not many
other systems. It is used, in effect, by those systems (VAX/notes under
Decwindows, Microsoft Word) which allow "Outlining" - expanding a tree
bit by bit.
The browser has a more difficult job to do if this is supported.
person described by node A is author of node B
This information can be used for protection, and informing authors of interest, for sending mail to authors, etc.
person described by node A is interested in node B
This information can be used for informing readers of changes.
Node A is in fact a previous version of node B
Node A is in fact a set of differences between B and its previous
version. This information will probably not be stored as nodes, but
3.10 Document Naming
This is probably the most crucial aspect of design and standardization in an open hypertext system. It concerns the syntax of a name by which a document or part of a document (an anchor) is referenced from anywhere else in the world.
3.10.1 Name or Address, or Identifier?
Conventionally, a "name" has tended to mean a logical way of referring to an object in some abstract name space, while the term "address" has been used for something which specifies the physical location. The term "unique identifier" generally referred to a name which was guaranteed to be unique but had little significance as regards the logical name or physical address. A name server was used to convert names or unique identifiers into addresses..
3.10.2 Hints
Some document reference formats contain "hints" to the reader about the document, such as server availability, copyright status, last known physical address and data formats. It is very important not to confuse these with the document's name, as they have a shorter lifetime than the document.
3.10.3 X500
The X500 directory service protocol defines an abstract name space which is hierarchical. It allows objects such as organizations, people, and documents to be arranged in a tree. Whereas the hierarchical structure might make it difficult to decide in which of two locations to put an object (it's not hypertext), this does allow a unique name to be given for anything in the tree. X500 functionally seems to meet the needs of the logical name space in a wide-area hypertext system. Implementations are somewhat rare at the moment of writing, so it cannot be assumed as a general infrastructure. If this direction is chosen for naming, it still leaves open the question of the format of the address into which a document name will be translated. This must also be left as open-ended as the set of protocols. Tim BL
3.11 Document formats
The question of the format of the contents of a node is independent of the format of all the management information (except for the format of the anchor position within the node content). Therefore, the hypertext system can be largely defined without specifying the node format. However, agreement must be reached between client and server about how they exchange content information. Many hypertext systems qualify as hypermedia systems because they handle media other than plain text. Examples are graphics, video and sound clips, object-oriented graphics definitions, marked-up text, etc.
3.11.1 Format negotiation
Most hypermedia systems on the market today have the same application program responsible for the hypertext navigation and for the browsing. It would be safer to separate these features as much as possible: otherwise, in defining a universal hypertext system, one is burdened with defining a universal multimedia browser. This would certainly not stand the test of time. Node content must be left free to evolve. This implies that format conversion facilities must be available to allow simple browsers to access data which is stored in a sophisticated format. Such conversion facilities tend to exist in many applications, though not, in general, in hypertext applications.
The format of the content of a node should be as flexible as possible. Having more than one format is not useful from the user's point of view -
only from the point of view of an evolving system. I suggest the following rules:
1. Basic formats
There is a set of formats which every client must be able to handle. These include 80-column text and basic hypertext (HTML).
2. Conversion
A server providing a format which is not in the basic set of formats required for a client must have the possibility of generating some sort of conversion of the text (even if necessary an apology for non-conversion in the case of graphics to text) for a client which cannot handle it. This ensures universal readability world over.
3. Negotiation
For every format, there must be a set of other possible formats which the server can convert it into, and the most desirable format is selected by negotiation between the two parties. The negotiation must take into account:
- the expected translation time, including current load factors
- the expected data degradation
- the expected transmission time (?!!)
The times one could assume will be roughly proportional to the length of the document, or at least linear in it.
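A negotiation over these criteria might be sketched as a simple cost comparison (Python; the per-byte rates, the flat degradation penalty, and the data layout are all assumptions of this sketch, not part of the design):

```python
def negotiate(offers, length, accepted):
    """Pick the best format a server can offer to a given client.

    `offers` maps a target format to (seconds_per_byte, degradation),
    with conversion-plus-transmission time taken as linear in the
    document length, as suggested above.  `accepted` is the set of
    formats the client can handle.  Returns the cheapest acceptable
    format, or None if there is no overlap.
    """
    best, best_cost = None, None
    for fmt, (secs_per_byte, degradation) in offers.items():
        if fmt not in accepted:
            continue
        cost = secs_per_byte * length + degradation
        if best_cost is None or cost < best_cost:
            best, best_cost = fmt, cost
    return best
```

In practice the two parties would also weigh current server load, and the degradation term would depend on the source format rather than being a single number.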
3.11.2 Examples
Examples of rich text formats which exist already at CERN are as follows, with, in brackets after each, other formats into which it might be convertible:
- SGML (TeX, Postscript, plain text)
- Bookmaster (Postscript, IBM 3812, plain text)
- TeX (DVI, plain text)
- DVI
- Microsoft RTF (Postscript,
- Postscript, Editable Postscript (IBM 3812 bitmap)
- plain text
When a server (or browser) is obliged to perform a conversion from one format to another, one imagines that the result would be cached so that, if the same conversion were needed later, it would be available more rapidly. Format conversion, like notification of new material, is something which can be triggered either by the writer or by the browser. In many cases, a conversion from, say, SGML into Postscript or plain text would be made immediately on entry of the new material, and kept until the source has been updated. (See caching, design issues.)
3.12 Document caching
Three operations in the retrieval of a document may take significant time:
- Format conversion by the server, including version regeneration
- Data transmission across the network
- Format conversion by the browser
At each stage, the server (in the first case) or browser (in the other cases) may decide to keep a temporary copy of the result. This copy should ideally be common to many browsers.
Automatic cache management could take into account criteria such as:
- expiry date
- file size
- time taken to get the file
- frequency of access
- time since access
3.12.1 Expiry date
As a guide to help a cache program optimise the data it caches, it is useful if a document is transmitted with an estimate by the server of the length of time the data may be kept for. This allows fast changing documents to be flushed from the system, preventing readers from being misled. (I would not propose any notification of document changes to be distributed to cache managers automatically.) For example, an RFC may be cached for years, while the state of the alarm system may be marked as valid for only one minute.
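An expiry-driven cache of this kind might be sketched as follows (Python; the class shape and the injectable clock are conveniences of this sketch, not part of the design):

```python
import time

class Cache:
    """Document cache honouring the server-supplied expiry estimate.

    A clock function is passed in so behaviour is easy to test;
    by default the real time is used.
    """
    def __init__(self, clock=time.time):
        self.clock = clock
        self.store = {}            # address -> (document, expiry time)

    def put(self, address, document, max_age):
        # max_age is the server's estimate of how long the copy
        # may be kept, in seconds.
        self.store[address] = (document, self.clock() + max_age)

    def get(self, address):
        entry = self.store.get(address)
        if entry is None:
            return None
        document, expires = entry
        if self.clock() >= expires:
            # Stale: flush the copy; the caller re-fetches upstream.
            del self.store[address]
            return None
        return document
```

An RFC would be stored with a max_age of years; the alarm-system page with one minute, so the next read after expiry goes back to the server.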
Window-oriented browsers effectively cache documents when they keep
several at a time in memory, in different windows. In this case, for very
volatile data, it may be useful to have the browser automatically refresh
the window when its data expires.
( design issues )
3.13 Scott Preece on retrieval
3 Oct 91 (See tracing , Navigation )
My
Chapter 4
Relevant protocols
The WorldWideWeb system can pick up information from many information sources, using existing protocols. Among these are file and news transfer protocols.
4.1 File Transfer

See also the prospero project and the shift project, for more powerful file access systems.
4.2 Network News

(See news address syntax )
4.3 Search and Retrieve
The WWW project defines its own protocol for information transfer, which allows for negotiation on representation. This we call HTTP, for HyperText Transfer Protocol. (See also HyperText Transfer Format, and the HTTP address syntax.)
4.4
Whilst the HTTP protocol provides an index search function, another common protocol for index search is Z39.50, and the version of it
4.5 HTTP as implemented in WWW
4.5.1

The interpretation of the protocol below in the case of a sequenced packet service (such as DECnet(TM) or ISO TP4) is that the request should be one TPDU, but the response may be many.
4.5.2 Request
The client sends a document request consisting of a line of ASCII characters terminated by a CR LF (carriage return, line feed) pair. A well-behaved server will not require the carriage return character.
The search functionality of the protocol lies in the ability of the addressing syntax to describe a search on a named index.
A search should only be requested by a client when the index document itself has been described as an index using the ISINDEX tag .
4.5.3 Response.
4.5.4
4.6 HyperText Transfer Protocol
See also: Why a new protocol? , Other protocols used
This is a list of the choices made and features needed in a hypertext transfer protocol. See also the HTTP protocol as currently implemented .
4.6.1 Underlying protocol
There are various distinct possible bases for the protocol - we can choose
- Something based on, and looking like, an Internet protocol. This has the advantage of being well understood, of existing implementations being all over the place. It also leaves open the possibility of a universal FTP/HTTP or NNTP/HTTP server. This is the case for the current HTTP.
- Something based on an RPC standard. This has the advantage of making it easy to generate the code, that the parsing of the messages is done automatically, and that the transfer of binary data is efficient. It has the disadvantage that one needs the RPC code to be available on all platforms. One would have to choose one (or more) styles of RPC. Another disadvantage may be that existing RPC systems are not efficient at transferring large quantities of text over a stream protocol unless (like DD-OC-RPC) one has a let-out and can access the socket directly.
- Something based on the OSI stack, as is Z39.50. This would have to be run over TCP in the internet world.
Current HTTP uses the first alternative, to make it simple to program, so that it will catch on: conversion to run over an OSI stack will be simple as the structure of the messages is well defined.
4.6.2 Idempotent?

Another choice is whether to make the protocol idempotent or not. That is, does the server need to keep any state information about the client? (For example, the NFS protocol is idempotent, but the FTP and NNTP protocols are not.) In the case of FTP the state information consists of authorisation, which is not trivial to establish every time but could be, and current directory and transfer mode which are basically trivial. The proposed protocol IS idempotent.

This causes, in principle, a problem when trying to map a non-idempotent system (such as library search systems which stored "result sets" on behalf of the client) into the web. The problem is that to use them in an idempotent way requires the re-evaluation of the intermediate result sets at each query. This can be solved by the gateway intelligently caching result sets for a reasonable time.
4.6.3 Request: Information transferred from client
Parameters below, however represented on the network, are given in upper
case, with parameter names in lower case. This set assumes a model of
format negotiation.
GET document name Please transfer a named document back. Transfer
the results back in a standard format or one
which I have said I can accept. The reply includes
the format. In practice, one may want
to transfer the document over the same link (a
la NNTP) or a different one (a la FTP). There
are advantages in each technique. The use of
the same link is standard, with moving to a
different link by negotiation (see PORT ).
SEARCH keywords Please search the given index document for all items with the given word combination, and
transfer the results back as marked up hypertext.
This could elaborate to an SQL query.
There are many advantages in making the search
criterion just a subset of the document name
space.
SINCE datetime For a search, refer to documents only dated on or after this date. Used typically for building
a journal, or for incremental update of indexes
and maps of the web.
BEFORE datetime For a search, refer to documents before this date only.
ACCEPT format penalty I can accept the given formats . The penalty is a set of numbers giving an estimate of the
data degradation and elapsed time penalty which
would be suffered at the CLIENT end by data
being received in this way. Gateways may add
or modify these fields.
PORT See the RFC959 PORT command. We could change the default so that if the port command
is NOT specified, then data must be sent back
down the same link. In an idempotent world,
this information would be included in the GET
command.
HEAD doc Like GET, but get only header information. One would have to decide whether the header
should be in SGML or in protocol format (e.g.
RPC parameters or internet mail header format).
The function of this would be to allow
overviews and simple indexes to be built without
having to retrieve the whole document. See
the RFC977 HEAD command. The process of
generation of the header of a document from
the source (if that is how it is derived) is subject
to the same possibilties (caching, etc) as a
format convertion from the source.
USER id The user name for logging purposes, preferably a mail address. Not for authentication unless
no other authentication is given.
AUTHORITY authentication A string to be passed across transparently. The protocol is open to the authentication
system used.
HOST The calling host name - useful when the calling host is not properly registered with a name
server.
Client Software For interest only, the application name and version number of the client software. These values
should be preserved by gateways.
4.6.4 Response
Status A status is required in machine-readable format.
See the 3-figure status codes of FTP for
example. Bad status codes should be accompanied
by an explanatory document, possibly containing
links to further information. A possibility
would be to make an error response a special
SGML document type. Some special status
codes are mentioned below .
Format The format selected by the server
Document The document in that format
4.6.5 Status codes
Success Accompanied by format and document.
Forward Accompanied by new address. The server indicates
a new address to be used by the client for
finding the document. The document may have
moved, or the server may be a name server.
Not Authorized The authorisation is not sufficient. Accompanied by the address prefix for which authorisation
is required. The browser should obtain
authorisation, and use it every time a request
is made for a document name matching that
prefix.
Bad document name The document name did not refer to a valid document.
Server failure Not the client's fault. Accompanied by a natural language explanation.
Not available now Temporary problem - trying at a later time might help. This does not imply anything about
the document name and authorisation being
valid. Accompanied by a natural language explanation.
Search fail Accompanied by an HTML hit-list without any hits, but possibly containing a natural language explanation.
Tim BL
4.6.6 Penalties
There are two questions to consider when deciding on different possible transfer formats between servers and clients: Information degradation and elapsed time.
Degradation
When information is converted from one format to another, it may be degraded. For example, when a postscript file is rendered into bitmap, it
loses its potentially infinite resolution; when a TeX file is rendered into pure ASCII, it loses its structure and formatting.
This degradation is difficult to guess from simply the file type, and
for a given file it is quite subjective. Any attempt to estimate a penalty
will therefore be very approximate, and only useful for distinguishing widely
differing cases. A suitable unit would be the proportion, between 0 and 1, of
the information which is not lost. Let's call it the degradation coefficient. One would hope that these coefficients are multiplicative, that is that the process of converting a document into one format with degradation coefficient c1 and then further converting the result of that with coefficient c2 would in all be a process with coefficient c1*c2. This is not, in fact, necessarily the case in practice but is a reasonable guess when we know no better.
Elapsed time
The elapsed time is another penalty of conversion. As an approximation one might assume this to be linear in the size of the file. It is not easy to say whether the constant part or the size-proportional part is going to be the most important. The server, of course, knows the size of the file. It can in fact as a result of experience make improving guesses as to the conversion time. The conversion time will be a function also of local load. For particular files, it may be affected by the caching of final or intermediate steps in a conversion process. Given a model in which the server makes the decision on the basis of information supplied by the client, this information could include, for each type, both the constant part (seconds) and the size-related part (seconds per byte).
4.7 Why a new protocol?
Existing protocols cover a number of different tasks.
- Mail protocols allow the transfer of transient messages from a single author to a small number of recipients, at the request of the author.
- File transfer protocols allow the transfer of data at the request of either the sender or receiver, but allow little processing of the data at the responding side.
- News protocols allow the broadcast of transient data to a wide audience.
- Search and Retrieve protocols allow index searches to be made, and allow document access. Few exist: Z39.50 is one and could be extended for our needs.
The protocol we need for information access ( HTTP ) must provide
- A subset of the file transfer functionality
- The ability to request an index search
- Automatic format negotiation.
- The ability to refer the client to another server
Tim BL
Chapter 5
W3 Naming Schemes
(See also: a discussion of design issues involved , BNF syntax , W3 background ) .
5.1 Examples
This is a fully qualified file name, referring to a document in the file name space of the given internet node, and an imaginary anchor 123 within it.
#greg
This refers to anchor "greg" in the same document as that in which the name appears.
5.2.
wais Access is provided using the WAIS adaptation of the Z39.50 protocol.
x500 Format to be defined.
5.3 Address for an index search
5.3.1 Example:
indicates the result of performing a search for keywords "sgml" and
5.
5.4.1 Examples
This is a fully qualified file name.
fred.html
This relative name, used within a file, will refer to a file of the same node and directory as that file, but with the name fred.html.
5.4.2*.
5.5.
5.5.1.
5.6 Relative naming
This implies that certain characters ("/", "..") have a significance reserved
for representing a hierarchical space, and must be recognized as such
by both clients and servers.
In the WWW address format , the rules for relative naming are:
- If the " scheme " parts are different, the whole absolute address must be given. Otherwise, the scheme is omitted, and:
- .
- If the access and host parts are the same, then the path may be given with the unix convention, including the use of ".." to indicate deletion of a path element. Within the path:
- If a leading slash is present, the path is absolute. Otherwise:
- The last part of the path of the base address (e.g. the filename of the current document) is removed, and the given relative address appended in its place.
- Within the result, all occurrences "xxx/.." or "/." are recursively removed, where xxx is one path element (directory).
The use of the slash "/" and double dot ".." in this case must be respected by all servers. If necessary, this may mean converting their local representations in order that these characters should not appear
5.7
host), a service name is NOT an appropriate
way to specify a port number for a hypertext
address. If the port number is omitted the
preceding colon must also be omitted. In this
case, port number 2784 is assumed [This may
change!].
See also: WWW addressing in general , HTTP protocol .
Tim BL
5.8 Telnet addressing
address. If the port number is omitted the preceding
colon must also be omitted. In this case,
port number 23 is assumed.
Tim BL
5.9 difficult to read with the line mode browser.)
An absolute address specified in a link is an anchoraddress . The address which is passed to a server is a docaddress .
anchoraddress     docaddress [ # anchor ]
docaddress        httpaddress | fileaddress | newsaddress | telnetaddress | gopheraddress | waisaddress
httpaddress       http:// hostport [ / path ] [ ? search ]
fileaddress       file:// host / path
newsaddress       news: groupart
waisaddress       waisindex | waisdoc
waisindex         wais:// hostport / database [ ? search ]
waisdoc           wais:// hostport / database / wtype / digits / path
groupart          * | group | article
group             ialpha [ . group ]
article           xalphas @ host
database          xalphas
wtype             xalphas
telnetaddress     telnet:// [ user @ ] hostport
gopheraddress     gopher:// hostport [ / gtype [ / selector ] ] [ ? search ]
digit             0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
digits            digit [ digits ]
alphanum          alpha | digit
alphanums         alphanum [ alphanums ]
void
See also: General description of this syntax, Escaping conventions. Tim BL
5.10
5.11
address. If the port number is omitted the preceding
colon must also be omitted. In this case,
port number
5.12 W3 addresses for WAIS servers
Servers using the WAIS ( "Wide Area Information Systems" ) protocols
from Thinking Machines may be accessed as part of the web using addresses
of the form (see BNF description )
wais:// hostport / database ...
Access (currently) goes through a gateway which stores the "source" files which contain the descriptions of WAIS servers. This address corresponds to the address of an index. To this may optionally be appended either a search string or a document identifier.
Note that changes have been proposed to WAIS document id format, so this representation of them may have to change with that format. Currently the WAIS document address necessary for retrieval by a client requires the following information, which is originally provided by the server in the hit list.
Document format This is normally "TEXT" but other formats
such as PS, GIF, exist.
Document length This is needed by the client who must loop to retrieve the whole document in slices.
Document identifier This is an entity consisting of numerically tagged fields. The binary representation used by WAIS
is transformed for readability into a sequence
of fields each consisting of a decimal tag, an
equals sign (=) , the field value, and a semicolon.
Within the field value, hex escaping is
used for otherwise illegal characters.
See also: Other W3 address formats , BNF definition .
Chapter 6
HTML
The WWW system uses marked-up text to represent a hypertext document for transmission over the network. The hypertext mark-up language is an SGML format. HTML parsers should ignore tags which they do not understand, and ignore attributes which they do not understand of tags which they do understand.
To find out how to write HTML, or to write a program to generate it, read:
The tags A list of the tags used in HTML with their significance.
Example A file containing a variety of tags used for test purposes, and its source text .
You can use the line-mode browser to get the source text ( -source option ) of an html document you find on the network, so you can use any existing documents as examples.
6.1
6.2.
6.2.1 .
6.2.2 Next ID
This tag takes a single attribute which is the number of the next document-wide
6.2.3. NOT CURRENTLY USED
6.2.4.
All attributes are optional, although one of NAME and HREF is necessary for the anchor to be useful.
6.2.5 .
6.2.6.
6.2.7:
- The text may contain any ISO Latin printable characters, including the tag opener, so long as it does not contain the closing tag in full.
- Line boundaries are significant, and are to be interpreted as a move to the start of a new line.
- .
6.2.8 Paragraph
This tag indicates a new paragraph. The exact representation of this (indentation, leading, etc) is not defined here, and may be a function of other tags, style sheets etc. The format is simply
<P>
(In SGML terms, paragraph elements are transmitted in minimised form).
6.2.9 Headings <H1>, <H2>, <H3>, <H4>, <H5>, <H6>
These tags are kept as defined in the CERN SGML guide. Their definition is completely historical, deriving from the AAP tag set. A difference is that HTML documents allow headings to be terminated by closing tags:
<H2>Second level heading</h2>
6.2.10 Address
This tag is for address information, signatures, etc, normally at the top or bottom of a document. Typically, it is italic and/or right justified or indented. The format is:
<ADDRESS> text ... </ADDRESS>
6.2.11 Highlighting
The highlighted phrase tags may occur in normal text, and may be nested. For each opening tag there must follow a corresponding closing tag. NOT CURRENTLY USED.
<HP1>...</HP1> <HP2>... </HP2> etc.
6.2.12>
6.2.13 Lists
A list is a sequence of paragraphs, each of which is preceded by a special mark or sequence number. The format is:
<UL>
<LI> list element
<LI> another list element ...
</UL>
The opening list tag must be immediately followed by the first list element.
The representation of the list is not defined here, but a bulleted list for
unordered lists, and a sequence of numbered paragraphs for an ordered
list would be quite appropriate. Other possibilities for interactive display
include embedded scrollable browse panels.
Opening list tags are:
UL A list of multi-line paragraphs, typically separated
by some white space.
MENU A list of smaller paragraphs. Typically one line per item, with a style more compact than UL.
DIR A list of short elements, less than one line. Typical style is to arrange in four columns or provide
a browser, etc.
6.3 SGML
The "Standard Generalised Mark-up Language" is an ISO standardised derivative of an earlier IBM "GML". It allows the structure of a document to be defined, and the logical relationship of its parts. This structure can be checked for validity against a "Document Type Definition", or DTD. The SGML standard defines the syntax for the document, and the syntax and semantics of the DTD. See books { Eric van Herwijnen's "Practical SGML" and Charles Goldfarb's "SGML Handbook". Some of the points generally broght up in (frequent) discussions of SGML follow.
6.3.1 High level markup
An SGML document is marked up in a way which says nothing about the representation of the document on paper or a screen. A presentation program must merge the document with style information in order to produce a printed copy. This is invaluable when it comes to interchange of documents between different systems, providing different views of a document, extracting information about it, and for machine processing in general. However, some authors feel that the act of communication includes the entire design of the document, and if this is done correctly the formatting is an essential part of authoring. They resist any attempts to change the representation used for their documents.
6.3.2 Syntax
The SGML syntax is sufficient for its needs, but few would say that it is particularly beautiful. The language shows its origins in systems where text was the principal content and markup was the exception, so a document which contains a lot of SGML is clumsy. There is always, of course, an element of personal taste to syntax.
6.3.3 Tools
For many years, SGML was generated by hand, by people editing the
source. This has led to a hatred of SGML among those who prefer their
own mark-up language which may have been quicker, more powerful, or
more familiar. The advent of WYSIWYG editors and solid SGML applications
should improve that facet of SGML.
See also: HyTime , HTML , Hypertext Document formats .
Tim BL
6.3.4 AAP
AAP stands for the American Association of Publishers, one of the first groups to fix on a common SGML DTD.
Chapter 7
Coding Style Guide
This document describes a coding style for C code (and therefore largely for C++ and Objective-C code). The style is used by the W3 project so that:-
- Code is portable and maintainable.
- Code is easily readable by other project members.
If you have suggestions, do send them. (We do not include points designed to allow automatic processing of code by parsers with an incomplete awareness of C syntax.)
The style guide is divided into sections on Language features , Macros
, Module header , Function header , Code style , Identifiers , Include files ,
Directory structure .
(See also pointers to some public domain styles ).
Tim BL
7.1 Language features
Code to be common shared code must (unfortunately!) be written in C, rather than any objective C or C++, to ensure maximum portability. This section does not apply to code written for specific platforms.
C code must compile under either a conforming ANSI C compiler OR an original Kernighan & Ritchie C compiler. Therefore, the __STDC__ macro must be used to select alternative code where necessary. ( example ) Code should compile without warnings under an ANSI C compiler such as gcc with all warnings enabled.
Parameters and Arguments The PARAMS(()) macro is used to give a
formal parameter list in a declaration so that it
will be suppressed if the compiler is not standard
C - see example . The ARGS1 macro is
for the declaration of the implementation, taking
first the type then the argument name. For
n arguments, macros ARGSn exist taking 2n
arguments each.
#endif Do put the ending condition in a comment. Don't put it as code - it won't pass all compilers.
const This keyword does not exist in K&R C, so use the macro CONST which expands to "const"
under standard C and nothing otherwise. See
HTUtils.h
(part of: style guide )
7.2 Module Header
The module header is the comment at the top of a .h or .c file. Information need not (except for the title) be repeated in both the .c and .h files. Of course History sections are separate. See a dummy example . Note:-
Heading To make it easy to spot the file in a long listing,
put a header and the file name in the top righthand
corner.
Authors Just a list to make the initials intelligible. Use initials in the history or in comments in the file.
History A list of major changes of the file. You do not need to repeat information carried by a code
management system or in an accompanying hypertext
file.
Section headings Sections in the file such as public data, private module-wide data, etc should be made visible.
Two blank lines and a heading are useful for
this.
Tim BL
/* Foo Bar Module foobar.c
** ==============
**
**) **
** CERN copyright -- See Copyright.html
*/
/* Global Data
** -----------
*/
Tim BL
7.3 Function Headings
This style concerns the comments, and so is not essential to compilation. However, it helps readability of code written by a number of people. Some of these conventions may be arbitrary, but are none the less useful for that.
7.3.1 Format
See a sample procedure heading . Note:-
- White space of two lines separating functions.
- The name of the function right-justified to make it easy to find when flicking through a listing
- The separate definitions for standard and old C.
- The macros PUBLIC and PRIVATE (in HTUtils.h ) expand to null and to "static" respectively. They show that one has thought about whether visibility is required outside the module, and they get over the overloading of the keyword "static" in C. Use one or the other. (Use for top level variables too).
7.3.2 Entry and exit conditions

It is most important to document the function as seen by the rest of the world (especially the caller). The most important aspects of the appearance of the function to the caller are the pre- and post-conditions.

The preconditions include the value of the parameters and structures they point to. Both include any requirements on or changes to global data, the screen, disk files, etc.
7.3.3 Function Heading: dummy example
} /* previous_function() */
/* Scan a line scan_line()
** -----------
** On entry,
** l points to the zero-terminated line to be scanned
** On exit,
** *l The line has null terminators inserted after each
** word found.
** return value is the number of words found, or -1 if error.
** lines This global value is incremented.
*/
PRIVATE int scan_line ARGS1(const char *, l)
{
/* Code here */
} /* scan_line() */
Tim BL
7.4 Function body layout
Within the body of functions, this is the way we aim to do it... we're not religious about it, but consistency helps.
7.4.1 Indentation
- Put the opening { at the end of the same line as the if, while, etc which affects the block;
- Align the closing brace with the START of that opening line;
- Indent everything between { and } by an extra 4 spaces.
- Comment the closing braces of conditionals and other blocks with the type of block, including the correct sense of the condition of the block being closed if there was an "else", or the function name. For example,
if (cb[k]==0) { /* if black */
foo = bar;
} else { /* if white */
foo = foobar;
} /* if white */
} /* switch on character */
} /* loop on lines */
} /* scan_lines() */
Tim BL
7.5 Identifiers
When choosing identifier names,
- Macros should be in upper case entirely unless they mimic and replace a genuine function.
- External names should be prefixed with HT to avoid confusion with other projects' code. Within the rest of the identifier, we use initial capitals a la Objective-C (e.g. HTSendBuffer).
- The macro SHORT_NAMES is defined on systems in which external names must be unique to within 8 characters (case insensitive). If your names would clash, at the top of the .h file for a module you should include macros defining distinct short names:
#ifdef SHORT_NAMES
#define HTSendBufferHeader HTSeBuHe
#define HTSendBuffer HTSeBuff
#endif
7.6 Directory structure
This is an outline of the directory structure used to support multiple platforms.
- All code is under a subdirectory "Implementation" at the appropriate point in the tree.
- All object files are in a subdirectory Implementation/xxx where xxx is the machine name. See for example WWW/LineMode/Implementation/*.
- Makefiles in the system-specific directories include a CommonMakefile which is in the parent Implementation directory (..).
7.7 Include Files
7.7.1 Module include files
Every module in the project should have a C #include file defining its
interface, and a .c source file (of the same name apart from the suffix)
containing the implementation.

The .c file should #include its own .h file.

A .h file should be protected so that errors do not result if it is #included twice.

An interface which relies on other interfaces should #include those interface files. An implementation file which uses other modules should #include the .h file if it is not already #included by its own .h file.
7.7.2 Common include files
These are all in the WWW/Implementation directory.
HTUtils.h Definitions of macros like PUBLIC and PRIVATE and YES and NO. For use in all .c files.
tcp.h All machine-dependent code for accessing TCP/IP
channels and files. Also defines some machine-dependent
bits like SHORT_NAMES. Project-wide definition of constants, etc.
(See also: Style in general , directory structure ) Tim BL | http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0cstr--00-0----0-10-0---0---0direct-10---4-------0-1l--11-en-50---20-about---00-0-1-00-0--4----0-0-11-10-0utfZz-8-00&cl=CL1.132&d=HASH0102b83a0da4ae5fa2bcc7cc.1>=2 | CC-MAIN-2018-22 | refinedweb | 14,176 | 66.74 |
Created on 2017-08-27 20:17 by Paul Pinterits, last changed 2020-11-19 11:35 by iritkatriel.
The file paths displayed in exception tracebacks have their symlinks resolved. I would prefer if the "original" path could be displayed instead, because resolved symlinks result in unexpected paths in the traceback and can be quite confusing.
An example:
rawing@localhost ~> cat test_scripts/A.py
import B
B.throw()
rawing@localhost ~> cat test_scripts/B.py
def throw():
raise ValueError
rawing@localhost ~> ln -s test_scripts test_symlink
rawing@localhost ~> python3 test_symlink/A.py
Traceback (most recent call last):
File "test_symlink/A.py", line 2, in <module>
B.throw()
File "/home/rawing/test_scripts/B.py", line 2, in throw
raise ValueError
ValueError
As you can see, even though both scripts reside in the same directory, the file paths displayed in the traceback look very different. At first glance, it looks like B is in a completely different place than A.
Furthermore, this behavior tends to trip up IDEs - PyCharm for example does not understand that test_scripts/B.py and test_symlink/B.py are the same file, so I end up having the same file opened in two different tabs.
Would it be possible to change this behavior and have "/home/rawing/test_symlink/B.py" show up in the traceback instead?
Here is a case where the opposite was requested:
There they want the traceback to be the same regardless of which symlink the script was found by.
I think a change like the one you are proposing here should be discussed on python-ideas. Would you like to bring it up there? | https://bugs.python.org/issue31289 | CC-MAIN-2020-50 | refinedweb | 268 | 67.96 |
I ran into a considerable amount of difficulty writing a video-file using OpenCV (under Python). Almost every video-writing example on the Internet is only concerned with capturing from a webcam, and, even for the relevant examples, I kept getting an empty/insubstantial file.
In order to write a video-file, you need to declare the FOURCC code that you require. I prefer H.264, so I [unsuccessfully] gave it “H264”. I also heard somewhere that since H.264 is actually the standard, I needed to use “X264” to refer to the codec. This didn’t work either. I also tried “XVID” and “DIVX”. I eventually resorted to trying to pass (-1), as this will allegedly prompt you to make a choice (thereby showing you what options are available). Naturally, no prompt was given and yet it still seemed to execute to the end. There doesn’t appear to be a way to show the available codecs. I was out of options.
It turns out that you still have one or more raw-format codecs available. For example, “8BPS” and “IYUV” are available. MJPEG (“MJPG”) also ended-up working, too. This is the best option (so that we can get compression).
It’s important to note that the nicer codecs might’ve not been available simply due to dependencies. At one point, I reinstalled OpenCV (using Brew) with the “–with-ffmpeg” option. This seemed to pull-down XVID and other codecs. However, I still had the same problems. Note that, since this was installed at the time that I tested “MJPG”, the latter may actually require the former.
Code, using MJPEG:
import cv2 import cv import numpy as np _CANVAS_WIDTH = 500 _CANVAS_HEIGHT = 500 _COLOR_DEPTH = 3 _CIRCLE_RADIUS = 40 _STROKE_THICKNESS = -1 _VIDEO_FPS = 1 def _make_image(x, y, b, g, r): img = np.zeros((_CANVAS_WIDTH, _CANVAS_HEIGHT, _COLOR_DEPTH), np.uint8) position = (x, y) color = (b, g, r) cv2.circle(img, position, _CIRCLE_RADIUS, color, _STROKE_THICKNESS) return img def _make_video(filepath): # Works without FFMPEG. #fourcc = cv.FOURCC('8', 'B', 'P', 'S') # Works, but we don't have a viewer for it. #fourcc = cv.CV_FOURCC('i','Y','U', 'V') # Works (but might require FFMPEG). fourcc = cv.CV_FOURCC('M', 'J', 'P', 'G') # Prompt. This never works, though (the prompt never shows). #fourcc = -1 w = cv2.VideoWriter( filepath, fourcc, _VIDEO_FPS, (_CANVAS_WIDTH, _CANVAS_HEIGHT)) img = _make_image(100, 100, 0, 0, 255) w.write(img) img = _make_image(200, 200, 0, 255, 0) w.write(img) img = _make_image(300, 300, 255, 0, 0) w.write(img) w.release() if __name__ == '__main__': _make_video('video.avi') | https://dustinoprea.com/2015/09/13/drawing-to-a-video-using-opencv-and-python/ | CC-MAIN-2017-26 | refinedweb | 422 | 68.97 |
Hide Forgot
If we try to move some of our current rcX scripts to 'native' upstart scripts,
this will break dependencies, as you can't (as far as I can tell) properly
express dependencies between the two systems.
Fixing this may require rewriting /etc/rc after all.
we can add scripts to the runlevels which trigger upstart events at certain
milestones. Then we can have upstart services set to "start on stage1complete".
It may get ugly, but once most or all of the upstart jobs are in place it will
clean up easily.
It's the 'once most or all' that worries me, it makes it a very much all or
nothing move.
The intermediate time will be fine, there will just be a bit more crap in the
event files than we like and a bit of minor ugliness in a few of the scripts
(nothing compared to what's there already).
Can you give an example of how you'd handle this? Say, you have something that
has Required-Start: ntp
but ntp is now a upstart event.
There's some issues with this right now, but here is the idea...
At the beginning of the script:
initctl emit ntp
initctl event | grep -C1 "started ntp"
The emit is obvious enough. intctl event normally blocks forever, just listing
away events as they occur, but the grep will exit after the first matching line,
SIGEPIPEing the initctl.
The issue with this is for some reason initctl doesn't seem to write any output
when piped. Perhaps a security feature? expect could solve this, but I haven't
used it much.
Right, but then the old SysV init script that depended on ntp would need to
conditionally include that.
Did you know that emit will block until all jobs that react to the event have
been started and/or finished? The "grep" bit shouldn't be necessary.
Well that does make it simpler :)
(In reply to comment #6)
> Right, but then the old SysV init script that depended on ntp would need to
> conditionally include that.
I'm still not certain I understand your argument here. We just add the emit line
to the file. Its effectively a noop if ntp is already running. There's no
conditional inclusion.
Your suggestion seems to be for the case of an upstart script that requires a
SysV script. I was asking about a SysV script that requires an upstart script,
and how that can be done *without changing the existing SysV script*.
(In reply to comment #10)
> Your suggestion seems to be for the case of an upstart script that requires a
> SysV script. I was asking about a SysV script that requires an upstart script,
> and how that can be done *without changing the existing SysV script*.
My suggestion was for the latter, and either way the solution is the same. For
the first its "Tell upstart to start the things I need" and for the second its
"tell upstart to start the things that need me"
Right, but it means that all things that have sysv dependencies would need
modified if any of their deps move to upstart (or just modified wholesale). That
isn't very practical.
Ah, I see your point.
Hmm, given what we've explored here, its probably fairly easy to modify /etc/rc
to just send an event before each script is started and then hold off starting
it until the event has been responded to. I'll play with it tonight or tomorrow.
Another thought, what if we (Scott, put the tea down) put an initctl emit $0
line at the end of the "functions" include? It would end up in the right place
in just about every file, though doing such a thing might cost us our eternal souls.
Created attachment 295697 [details]
Patch to /etc/rc to add events
Here is a modified /etc/rc that generates the necessary upstart events for
every service. I moved all the events into a "sysv." namespace, which I think
will make life easier during the transition.
Bill, if you approve, we can build a new initscripts package with this and
close the bug.
Hm. While I understand the idea of separating things into a sysv namespace, it
would mean that any dependent upstart events would actually need editing if the
event moved from sysv to upstart. So we may not want to separate them.
Also, of course, it's only half of the issue.
What half are we missing?
That fixes upstart-event-depends-on-sysv, not sysv-event-depends-on-upstart.
Unless I'm missing something.
We can still make init scripts block on upstart events.
So it would go like this: If ntp is moved to an upstart event, we still have an
ntp sysvinit script. That script just starts the upstart event and waits for it
to come up.
We should be able to develop events in this way without having to think much
about sysvinit. Then one day the sysv scripts just go away.
A version of your patch added in initscripts git. We'll see how it works.
Ideally it would go in init.d/functions for everything to use, but that's not
really practical as that doesn't wrap start/stop/etc.
what are thoughts on this now that its been around a bit?
Well, we emit events for sysv scripts now. I think that's probably as good as
we're going to get for the moment.
Changing version to '9' as part of upcoming Fedora 9 GA.
More information and reason for this action is here:
changing back to rawhide
This bug appears to have been reported against 'rawhide' during the Fedora 10 development cycle.
Changing version to '10'.
More information and reason for this action is here:
Closing per Bill's comment as of 2008-03-17. | https://bugzilla.redhat.com/show_bug.cgi?id=431231 | CC-MAIN-2019-26 | refinedweb | 981 | 81.22 |
In this lesson, we'll be exploring the pyplot
plot function and all of its associated attributes and arguments.
The Imports We'll Need For This Lesson
As before, I'm assuming you've run the following code before working through this lesson:
import matplotlib.pyplot as plt %matplotlib inline import numpy as np from IPython.display import set_matplotlib_formats set_matplotlib_formats('retina')
We will also be working with three random number datasets generated using NumPy's
randn method:
data1 = np.random.randn(20) data2 = np.random.randn(20) data3 = np.random.randn(20)
These datasets are one-dimensional NumPy arrays with 20 entries.
Plotting Data Using Matplotlib's Plot Method
To start, let's plot one of these data sets using
plt.plot():
plt.plot(data1)
If you want to change the x-values that are plotted along with the dataset, you could pass in another data set of length 20 before the
data1 argument.
For example, let's say you wanted the x-axis to range from 20 to 40 instead of from 0 to 20. You would first create a new variable, which we will call
xs, to hold the x-axis data points:
xs = range(20, 40)
Then you would plot the chart with the new x-axis like this:
plt.plot(xs, data1)
How To Format Your Matplotlib Graph Using Format Strings
You can also format the appearance of your plot using a
format string. If the second or third argument of your
plot method is a string, then matplotlib will automatically assume that this is meant to be a format string.
Format strings have three components:
marker: Specifies the shape that should be used on each data point.
line: Specifies what type of line should be used, such as dotted line or solid line.
color: Specifies the color of the line outside of the data points.
A few example of format strings are below:
plt.plot(xs, data1,'o--r')
plt.plot(xs, data1,'+--c')
plt.plot(xs, data1,'s--y')
You definitely do not need to memorize all of the characteristics of matplotlib's format strings. If you ever get stuck while trying to create a specific format, visit matplotlib's documentation for help.
Plotting Multiple Datasets Using Matplotlib's Plot Function
As we have seen, it is possible to present multiple datasets on the same plot using matplotlib. This section will outline two methods for doing this.
The first method is by adding each dataset to the plot's canvas using a separate
plot function, like this:
plt.plot(data1) plt.plot(data2) plt.plot(data3)
The second way is by using a single plot function.
Some caution is warranted here - you might think you can simply run
plt.plot(data1, data2, data3), but this will cause an error. Specifically, your Jupyter Notebook will either plot an incorrect graph or return
ValueError: third arg must be a format string.
This is because the second or third argument of a
plot method must be a format string. The solution is to chain together sequences of
data, formatString like this:
plt.plot(data1, '', data2, '', data3, '')
Notice that I simply passed in an empty string for each dataset's format string. This makes matplotlib stick with the default format for each string, like this:
Refactoring Complex Graphs For Readability
There are many situtations where you will want to transform matplotlib's shorthand into longer code that is more readable for outside users.
To do this, we will transform the the
plot function's format string into separate variables. An example of this is below, where I present two different ways to create an identical graph in matplotlib:
plt.plot(data1, 'r--s') plt.plot(data1, color='red', linestyle='dashed', marker='s')
This becomes even more important when dealing with very complex graphs. For example, consider the following plot:
If you were an outside developer, which of the following two code blocks is easier for you to understand?
#Method 1 plt.plot(data1, 'r--s', data2, 'g-.o', data3, 'b:^') #Method 2 plt.plot(data1, color='red', linestyle='dashed', marker='s') plt.plot(data2, color='green', linestyle='dashdot', marker='o') plt.plot(data3, color='blue', linestyle='dotted', marker='^')
For readability reasons, developers often refactor their code into longer examples before saving it or pushing it to some master repository.
Moving On
That concludes our discussion of matplotlib's
pyplot function. After working through some practice problems, I will explain how you can build beautiful boxplots using matplotlib. | https://nickmccullum.com/python-visualization/pyplot-plot/ | CC-MAIN-2021-31 | refinedweb | 749 | 65.01 |
Opened 8 years ago
Closed 3 years ago
Last modified 2 years ago
#12118 closed New feature (fixed)
in-memory test database does not work with threads
Description
When using the test configuration of the DB with XXX and accessing the DB from another thread, it fails miserably.
Using this example script on the Poll tutorial:
import os os.environ[ 'DJANGO_SETTINGS_MODULE' ] = 'settings' import settings import datetime, threading #django stuff from polls.models import * from django.core.mail import mail_admins from django.test.utils import * from django.db import connection def create_object(): print 'Creating Poll' p = Poll() p.question = "What's up doc ?" p.pub_date = datetime.date.today() p.save() print 'Poll object saved. Id: %d' % p.id WITH_THREAD = False if __name__ == '__main__': setup_test_environment() old_db_name = settings.DATABASE_NAME new_db_name = connection.creation.create_test_db(verbosity=1) print 'New DATABASE:', new_db_name if WITH_THREAD: t = threading.Thread( target=create_object ) t.start() t.join() else: create_object() teardown_test_environment() connection.creation.destroy_test_db( old_db_name )
If I run it with WITH_THREADS set to False: Poll object saved. Id: 1 Destroying test database...
If I run it with WITH_THREADS set to True: Exception in thread Thread-1: Traceback (most recent call last): File "c:\Python26\lib\threading.py", line 522, in __bootstrap_inner self.run() File "c:\Python26\lib\threading.py", line 477, in run self.__target(*self.__args, **self.__kwargs) File "run_with_threads.py", line 19, in create_object p.save() File "c:\Python26\lib\site-packages\django\db\models\base.py", line 410, in save self.save_base(force_insert=force_insert, force_update=force_update) File "c:\Python26\lib\site-packages\django\db\models\base.py", line 495, in save_base result = manager._insert(values, return_id=update_pk) File "c:\Python26\lib\site-packages\django\db\models\manager.py", line 177, in _insert return insert_query(self.model, values, **kwargs) File "c:\Python26\lib\site-packages\django\db\models\query.py", line 1087, in insert_query return query.execute_sql(return_id) File "c:\Python26\lib\site-packages\django\db\models\sql\subqueries.py", line 320, in execute_sql cursor = super(InsertQuery, self).execute_sql(None) File "c:\Python26\lib\site-packages\django\db\models\sql\query.py", line 2369, in execute_sql cursor.execute(sql, params) File "c:\Python26\lib\site-packages\django\db\backends\util.py", line 19, in execute return self.cursor.execute(sql, params) File "c:\Python26\lib\site-packages\django\db\backends\sqlite3\base.py", line 193, in execute return Database.Cursor.execute(self, query, params) OperationalError: no such table: polls_poll Destroying test database...
Change History (12)
comment:1 Changed 8 years ago by
comment:2 Changed 4 years ago by
I vote to reopen this.
sqlite, as of version 3.7.13 (released 2012-06-11) has the ability to share an in-memory database between multiple connections and threads.
See:
Making this work with the Django testing framework should be pretty easy:
In the sqlite database backend, instead of using the database name
:memory:, we should use a name such as
file:testdb?mode=memory&cache=shared (where "testdb" can be anything and ideally should be unique so that multiple tests can run concurrently, each with its own individual database).
As a bonus, doing this should allow removing the hacky sqlite-specific code from "LiveServerTestCase" (It contains a messy workaround for exactly the issue of this bug report)
The only thing that might be a little tricky is making this update in the Django code to still support older vesions of sqlite by falling back to the current behavior (
:memory:), but I imagine that this shouldn't be too difficult.
comment:3 Changed 4 years ago by
comment:4 Changed 4 years ago by
comment:5 Changed 3 years ago by
comment:6 Changed 3 years ago by
Pull request here:
comment:7 Changed 3 years ago by
comment:8 Changed 3 years ago by
comment:9 Changed 2 years ago by
Incidentally, in case it's of use to anyone else that's temporarily stuck on an earlier versions of django, you can hack a workaround for this by using a path in
/dev/shm in
TEST_NAME:
DATABASES['default']['TEST_NAME'] = '/dev/shm/myproject-djangotestdb.sqlite'
(unix only)
Near as I can tell this is an sqlite or pysqlite restriction, though I haven't been able to find a definitive statement in sqlite or pysqlite doc. SQLAlchemy, however, appears to have come to the conclusion that you simply cannot share :memory databases between threads, see:. Thus I think there is no bug in Django here, the answer is don't do that and if you really want to do it you'll need to take it up with the maintainers of the lower level code. | https://code.djangoproject.com/ticket/12118 | CC-MAIN-2017-47 | refinedweb | 766 | 50.94 |
[ Charset ISO-8859-1 unsupported, converting... ]
> I am new to lucene and I can not understand why I am getting following error
> with this program?
>
> public class Search {
> public static void main(String[] args) {
> try{
> String indexPath = "d:\\org", queryString = "parag";
> Searcher searcher = new IndexSearcher(indexPath);
>
> ERROR: the error I am getting is that it throws IOException saying that can
> not find org directory in D: (which is there with a file).
> Can anybody help me? So that I can read a file from directory and search
> with given criterion.
Hm, are you sure it should be two slashes? Alternately, try using
forward-slashes (java will map // to whatever your system uses, but not
necessarily \\).
Steven J. Owens
puff@darksleep.com | http://mail-archives.apache.org/mod_mbox/lucene-java-user/200110.mbox/%3C20011022133712.874553C6D1@darksleep.com%3E | CC-MAIN-2017-04 | refinedweb | 121 | 63.59 |
any() and all() shorthand
Discussion in 'Python' started by castironpi
foreach shorthand, Mar 25, 2005, in forum: C++
- Replies:
- 2
- Views:
- 646
- =?iso-8859-1?Q?Ali_=C7ehreli?=
- Mar 25, 2005
Shorthand for Property declaration in VBDavid W, Jan 17, 2007, in forum: ASP .Net
- Replies:
- 0
- Views:
- 990
- David W
- Jan 17, 2007
Shorthand for namespacesFoxpointe, Oct 31, 2006, in forum: XML
- Replies:
- 4
- Views:
- 455
- Foxpointe
- Oct 31, 2006
Shorthand for($scalar) loops and resetting pos($scalar)Clint Olsen, Nov 12, 2003, in forum: Perl Misc
- Replies:
- 6
- Views:
- 400
- Jeff 'japhy' Pinyan
- Nov 13, 2003
Shorthand operator for AND, NOTArchos, Dec 19, 2011, in forum: Javascript
- Replies:
- 5
- Views:
- 597
- Archos
- Dec 22, 2011 | http://www.thecodingforums.com/threads/any-and-all-shorthand.582923/ | CC-MAIN-2014-52 | refinedweb | 117 | 51.55 |
Key Web App Standard Approaches Consensus 143
suraj.sun tips a report up at CNet which begins: "…"
Golden age of the web set to continue (Score:3, Informative)
Personally the new web technology that I'm most keen to get my hands on is the pushState/replaceState [mozilla.org].
Re:Golden age of the web set to continue (Score:4, Insightful)
>
...it looks like the Golden Age of the web will continue...
Provided that your definition of a Golden Age includes many new and exciting exploits.
Re: (Score:2)
> ...it looks like the Golden Age of the web will continue...
> Provided that your definition of a Golden Age includes many new and exciting exploits.
The web isn't just for the enjoyment of users. Developers need to get their fix of fun, too.
Re: (Score:3, Interesting)
Exploits are only one of many issues. How about change control, patching, and schema changes? This has got catastrophe written all over it unless the API accounts for a lot more than what's written; any serious database application reliant on it would require a strong set of change-log rules, shifting data when needed, and schema-compliance checks before allowing access.
Re:Golden age of the web set to continue (Score:4, Funny)
Don't look now, but someone used one of those exploits to replace your comment's font.
Re: (Score:1)
Re: (Score:2)
Don't look now
...
Why the heck not? How else is someone supposed to see a hacked *font*?
Re: (Score:2, Interesting)
it looks like the Golden Age of the web will continue for some time.
Dude, the web didn't even exist until about 18 years ago. We're still evaluating the impact that the internet is having on culture -- what with some countries defining it as an inalienable human right and others eager to all but destroy or censor the crap out of it, the "golden age" is not what I'd call this time period. I'd call it the friggin' dark ages -- a mish-mash of global entities all competing at cross-purposes, a thriving black market, and every week more of our technology becomes connected to it,
Re: (Score:3, Interesting)
Re: (Score:2)
I like to think it's the phase where you've built a global government, but haven't built your UFO yet.
Re: (Score:2)
You win. Great comment - thanks.
Re:Golden age of the web set to continue (Score:5, Insightful)
I read that pushState / replaceState link and it scared me. Note the following from it:
Suppose http://mozilla.org/foo.html [mozilla.org] executes the following JavaScript:
var stateObj = { foo: "bar" };
history.pushState(stateObj, "page 2", "bar.html");
This will cause the URL bar to display http://mozilla.org/bar.html [mozilla.org], but won't cause the browser to load bar.html or even check that bar.html exists.
Why do I have a feeling that said effect can and will primarily be used for horribly evil purposes?
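For readers outside a browser, the contract the parent quotes can be modeled in a few lines of plain JavaScript. This is only a sketch of the semantics (the `MiniHistory` class and its names are made up for illustration, not a real browser API):

```javascript
// Minimal model of the HTML5 session-history contract under discussion:
// pushState adds an entry WITHOUT loading the URL, replaceState edits
// the current entry, and back() fires a popstate-style callback.
class MiniHistory {
  constructor(url) {
    this.entries = [{ state: null, url }];
    this.index = 0;
    this.onpopstate = null; // set by the "page" to restore its UI
  }
  get url() { return this.entries[this.index].url; }
  pushState(state, _title, url) {
    // Drop any forward entries, then append -- no network fetch happens.
    this.entries = this.entries.slice(0, this.index + 1);
    this.entries.push({ state, url });
    this.index++;
  }
  replaceState(state, _title, url) {
    this.entries[this.index] = { state, url };
  }
  back() {
    if (this.index === 0) return;
    this.index--;
    if (this.onpopstate) this.onpopstate({ state: this.entries[this.index].state });
  }
}

const h = new MiniHistory('http://example.com/foo.html');
h.pushState({ foo: 'bar' }, 'page 2', 'bar.html');
console.log(h.url);                 // bar.html -- address changed, nothing loaded
h.onpopstate = e => console.log('popstate state:', JSON.stringify(e.state));
h.back();                           // fires popstate with the original entry's state
console.log(h.url);                 // http://example.com/foo.html
```

Note that nothing in the model ever fetches bar.html, which is exactly the behavior the parent finds worrying.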
Re: (Score:2)
Sounds OK to me as long as the site portion remains the same. This isn't any different from other cross-site scripting or impersonation problems. For all I care sites can do whatever they want to the URL bar as long as the site-identifying portion remains constant.
Re: (Score:2)
Fine for you, but what about the millions who will be reassured by the url displayed and end up handing their banking credentials over? The url display is supposed to display the current url and has been that way since the very beginning. Suddenly redefining it to display whatever the server wants is a terrible idea and a fraud waiting to happen.
Re: (Score:2)
What part of as long as the site portion remains the same did you miss? All you'll be able to change is the path portion within your site, which you already control anyway.
Ever since AJAX and DHTML, sites have had full control over what pages actually display, regardless of the URL they were accessed through. Websites can already manipulate the URL bar at will through javascript, it's jus
Re: (Score:2)
I don't think this would be a problem. If you already own the website, then you already can change the URL at will to anything you want.
The only reason this would be a bigger issue is XSS attacks - but those are already have way more important concerns than just spoofing the URL.
Personally, I would love it. It would make it much easier to merge the mobile/AJAX/static structures of the website, allow end-users to access the same bookmarks from multiple devices, and provide a much cleaner look than we alrea
Re: (Score:2)
Currently, the real issue with AJAX-webapp links is that the server never gets the hash (fragment) portion of the URL. This makes it hard to serve the correct page to a mobile device, and completely impossible if the device does not support JavaScript.
It also makes the server-side logs less and less useful, unless you introduce other kludges like calling "fake" URLs (via iframes or other tags) just to populate actual activity in the logs.
Re: (Score:2)
Server logs are logs of server activity. If no server activity is generated for certain user actions, why do you need logs? There's no fundamental difference between logs of user-visible URIs and logs of backend AJAX calls; in fact, AJAX API URIs can be made just as descriptive for server log purposes.
If you want to track user actions, then obviously you'll have to add an explicit tracking bug, which negates some of the advantages of dynamic sites without necessarily triggering server activity for each user
Re: (Score:2)
Server logs are logs of server activity.
you completely misunderstood my post. the point is that most platforms (gmail, for example) have moved to putting everything that would have typically lived in the query string behind a hash (#) because it's accessible in javascript. this stuff doesn't get sent to the server, of course, so doesn't show in logs.
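The hash-vs-path distinction described here is visible with nothing but a URL parser. A sketch (the mail.example.com addresses are invented):

```javascript
// The fragment (everything after '#') never appears in the HTTP request
// line, so hash-based AJAX state is invisible to server logs. A sketch:
const hashStyle = new URL('http://mail.example.com/#inbox/msg42');
const pathStyle = new URL('http://mail.example.com/inbox/msg42');

// What an HTTP/1.1 request target for each URL would carry:
const requestTarget = u => u.pathname + u.search;

console.log(requestTarget(hashStyle)); // "/"            -- state lost to the server
console.log(requestTarget(pathStyle)); // "/inbox/msg42" -- state visible in logs
console.log(hashStyle.hash);           // "#inbox/msg42" -- client-side only
```

A pushState-style URL is an ordinary path, so the server (and its logs) see the full application state; a hash-style URL sends only "/".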
Piled Higher and Deeper
Re:Piled Higher and Deeper (Score:5, Funny)
> ...look just like a local app did ten years ago.
No, no, no. It will look completely different. It'll have rounded corners. Or something. I know! It'll have animated 3D shadows! How can anyone get any work done using a program that lacks animated 3D shadows?
Re: (Score:2)
I *especially* am amazed at the Mac's throbbing shadows when your program opens several dozen windows. The first few windows only deepen the shadow, which is reasonable even if a waste of effort. But at some point, it decides that deepening the shadow would make it too big, so the shadow starts throbbing instead. I haven't tried writing a custom program to open windows at various speeds to see what happens, but I suspect it is not actually decreasing the shadow for each window, rather setting an internal
Re: (Score:2)
If there's any good side in it, it means you don't have to install some random untrusted applications on your computer but they just work on browser with HTML and JavaScript.
Re: (Score:2)
JavaScript downloaded from a Web site _is_ "untrusted applications". Soon HTML itself will be a full-blown progamming language.
Re: (Score:1)
Re: (Score:2)
> ...it shouldn't be able to affect anything outside of its sandbox.
Sure. Of course it shouldn't. And if it did, why that would be wrong.
Re: (Score:2)
How would HTML5 change that?
Re:
Apu: Please do not mock the power of duct tape. These are forces beyond the understanding of mere mortals.
The Web is not the Net. (Score:5, Informative)
> ...Internet Explorer, Firefox, and Chrome account for more than 90 percent of the usage on the Net...
The Web is not the Net.
Re: (Score:2)
Do they claim so? Browser usage is definitely what most people do on the Internet, so it might be either way. Especially as people moved from communication on IRC and IM to Facebook and other sites.
Re: (Score:3, Interesting)
> Do they claim so?
The browsers they list as having 90% of the Net have 90% of the Web. As there is more to the Net than the Web they are necessarily wrong.
> Browser usage is definitely what most people do on the Internet...
You forget spammers and botnet operators, both large and growing markets.
Re:The Web is not the Net. (Score:4, Funny)
You forget spammers and botnet operators, both large and growing markets.
Well, they'll just have to abide by the new HTML standards like the rest of us. What's fair is fair.
Re: (Score:2)
Re: (Score:3, Interesting)
Re: (Score:2)
Re:The Web is not the Net. (Score:4, Funny)
> How do you find non-Web resources on the Internet other than through search
> engines on the Web?
I use Gopher.
Re: (Score:1)
> How do you find non-Web resources on the Internet other than through search
> engines on the Web?
I use Gopher.
Oh, and by the way, get the hell off my lawn!
FTFY
Re: (Score:2)
> Oh, and by the way, get the hell off my lawn!
Sonny, when I was your age we didn't have lawns. Grass hadn't been invented yet.
Error -420: Grass not found (Score:2)
Grass hadn't been invented yet.
Then what did you smoke to get high?
Re: (Score:2)
...or Usenet, or eDonkey, or Limewire, or...
Apple (Score:1, Troll)
Browser OS/webapps isn't really their market.
Personally, I reckon they are trying to work out who to sue.
Re: (Score:1)
> Personally, I reckon they are trying to work out who to sue.
Just be careful never to call it iNdexedDB (or bdDEXEDnI).
Slowly reinventing the wheel in the browser (Score:5, Insightful)
Congratulations, you've developed a framework for client-server application development. Welcome to 1990. But wait, it's different this time because it's lightweight? Only it's not. Your framework runtime (the browser) consumes many times the resources that existing client-server applications ever did, and you still can't provide the same level of functionality.
Progress in the software industry today looks like this:
- 2003: Microsoft releases Office 2003
- 2008: Google releases quirky, limited-functionality clone of Office 2003 that runs in the browser
- 2016: Google releases quirky but fully functional clone of Office 2003 that runs in the browser, only it's progress because it's Web 5.0!!!
Thanks but no thanks.
Re:Slowly reinventing the wheel in the browser (Score:5, Interesting)
When you look at much of the development of platforms, a great deal of effort has been expended to make sure that the programming model is simple. E.g., from the perspective of a typical process running in a typical modern OS, the world still looks like a simple OS: your own flat address space and simple system calls to use to write to disk, etc. Generally, you don't have to deal with interrupts, shared memory, etc. But networking is where all of this breaks down. The location of your storage is important, because while hard disks are slow, network storage is really slow. Some parts of your application run here, and some run there, and here and there may even be wildly different platforms (e.g., 'there' could be a functional language running on a cluster, while 'here' could be a mobile web browser on a cellphone), so race conditions and slow network links and processors are a real problem.
This constant shifting around is an attempt to find the right complexity balance. I don't know if there is a 'right' balance for all scenarios, but it doesn't look like that's going to stop people from trying to find it. Just look at all the iterations of RPC out there. They all suck, too (you just can't pretend the network doesn't exist!), but that does not stop them from being useful. Just look at NFS.
Re: (Score:2)
Of course we hate concurrency. That doesn't mean we don't need it.
Concurrency makes code nearly impossible to debug. We don't *like* Erlang. But without concurrency we can only execute in one hyperthread at a time, and that's slow.
Now if you throw in delays for IP connections, handling sockets that might or might not be there, etc. ... now you're getting to a place that most applications are better off avoiding. Yeah, there are toolkits and frameworks to make dealing with it plausible, and to ensure t
Re: (Score:2)
Concurrency makes code nearly impossible to debug.
Then you're probably doing it wrong. But I'd certainly not characterize it as being particularly easy. Key issues are that you need to avoid shared state (shared state concurrency is very difficult to debug) and you need to beware of global problems like deadlocks and livelocks; not everything can be solved just by looking at individual threads.
But if you keep the level of separation between different concerns strong, with every piece of state having a clear single owner at a time, you avoid most problems.
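One way to make that "clear single owner" discipline concrete in JavaScript: a sketch, not anything from the thread, using a plain in-order message queue instead of real threads:

```javascript
// Single-owner state: nobody touches `balance` directly; callers send
// messages, and one owner applies them strictly in arrival order. This
// is the message-passing discipline (Erlang-style) being described,
// sketched with a synchronous queue rather than real concurrency.
function makeAccount() {
  let balance = 0;          // owned state -- never shared
  const queue = [];
  function owner() {        // drain pending messages in arrival order
    while (queue.length) {
      const msg = queue.shift();
      if (msg.op === 'deposit') balance += msg.amount;
      if (msg.op === 'withdraw' && balance >= msg.amount) balance -= msg.amount;
    }
  }
  return {
    send(msg) { queue.push(msg); owner(); },
    read() { return balance; },   // read of owned state (queue already drained)
  };
}

const acct = makeAccount();
acct.send({ op: 'deposit', amount: 100 });
acct.send({ op: 'withdraw', amount: 30 });
acct.send({ op: 'withdraw', amount: 500 }); // refused: insufficient funds
console.log(acct.read()); // 70
```

Because every mutation funnels through one owner, there is no moment where two callers see `balance` in a half-updated state, which is the property that makes such code debuggable.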
We don't *like* Erlang. But without concurrency we can only execute in one hyperthread at a time, and that's slow.
O
Re: (Score:2)
Erlang is slow compared to Python. And it doesn't have much in the way of graphics support.
If you aren't using Erlang, and you segregate out all the timing dependencies carefully, then you've just eliminated most of the benefit of concurrency. Ideally you'd like to be able to execute most loops in parallel...but it's only really worth doing for loops that do a lot of calculation. So, with any normal language, you've got to refactor those loops into something that looks completely different. Etc.
Go looks
Kinda necessary (Score:2)
The Web is better (Score:4, Insightful)
I think you're wrong. Functionality is not the name of the game. Communication and content are. Look, I was doing client-server development in the 1990s: Mac Programmer's Workshop (C++), Unix sockets (C), Microsoft Foundation Classes (C++). I would never go back. True, your example does illustrate your point. There are whole classes of application, like word processors, for which the Web is not (currently) a good fit. But those are mostly stable, well-defined categories. The Web is not a better way to write Word, but it is a better way to create other software we want even more.
1. The Web is social. When you develop an application, communication between users is practically a given. Back in the day, client-server software was deployed within organizations and was focused on access to data or business processes. Communication was rare and tended to be limited.
2. The web centers on content to which developers add various functionality. You may have to work harder on your application's controls, but HTML and CSS give you tremendous power. A framework like Flash or .NET may let you put things exactly where you want them, but this takes flexibility (e.g. text sizing) away from the user. And they are still missing significant chunks of what HTML+CSS can do.
3. The Web is simple. The learning curve for web applications is dramatically lower than for the kinds of apps you are talking about. HTML gives you hyperlinks for free. It also gives you a history with forward/back buttons, bookmarkable URLs, and a world of users who have been trained to use them. Programmers who try to develop apps without these features lose out on core benefits of the Web (hello, Flash).
4. The Web is relatively unified and transparent. I can view source on any page, or if that doesn't work use Firebug to break down the DOM. These days the standards are complex, but there are real advantages over a mess of competing frameworks. Browser implementations are inconsistent: but that beats writing client-server software that works on some mix of Mac, Windows, and assorted Unix flavors, then trying to persuade the wider world to install client software.
5. Javascript doesn't suck. I was surprised too when I found this out. It has some real weaknesses for sure (dynamic scoping!). It's no Python or Ruby, but it is powerful and its idiosyncrasies pale beside, say, C++ or PHP. Perhaps its biggest flaw is the pathetically poor standard library.
If you want to write a word-processor, the weaknesses of the Web compared to traditional client-server development may be very frustrating. You could still go with client-server, which seems like the right tool for the job. But you don't. The advantages of the Web are overwhelming. It's easier to be nostalgic about the benefits of client-server than to reinvent the benefits of the web.
Dynamic Scoping? (Score:3, Informative)
It has some real weaknesses for sure (dynamic scoping!).
Of all the things to pick on... dynamic scoping? Javascript'd be a harder language to work in without it... you'd essentially be getting rid of closures.
Re: (Score:2)
Excuse my ignorance, but aren't closures by definition an implementation of static scoping?
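The question above has a concrete answer: yes, JavaScript closures capture their lexical (static) environment; what actually behaves dynamically is `this`, which is bound at call time. A quick sketch illustrating both:

```javascript
// Closures capture the *lexical* environment: each counter keeps its
// own `count`, resolved from where the function was defined, not from
// where it is called.
function makeCounter() {
  var count = 0;
  return function () {
    count += 1;
    return count;
  };
}

var a = makeCounter();
var b = makeCounter();
a(); a();
console.log(a()); // 3
console.log(b()); // 1 -- b's environment is independent of a's

// What *is* dynamic is `this`, bound at call time:
var obj = {
  name: "obj",
  who: function () { return this.name; }
};
var detached = obj.who;
console.log(obj.who());                        // "obj"
console.log(detached.call({ name: "other" })); // "other" -- rebound at the call
```

So getting rid of "dynamic scoping" would not cost JavaScript its closures; the closures are lexical already.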
Re: (Score:2)
I am not "nostalgic" about the benefits of client-server apps, they are what business needs for head-down handling of data in a manner that does not require a 50,000 line mashup of javascript, html, xhtml, xml and bloody css.
Browser implementations are worse then inconsistent, they are insane. One does it just every so slightly different then the other and your app fails in the land of the web browser.
But not to worry, the project for the application browser has begun. It will present a clean well defined
Re: (Score:2)
Because I shouldn't have to. The fucking control EXISTS in the form and its value should fucking be returned, just like any other control! Really I want to find the little biotch who decided this was "clever" and beat their ass!
Re: (Score:2)
Hmmm lets see.... How about returning "1" or "0"... Or perhaps "t" or "f", "T" or "F", "y" or "n"
In your terms why not just ONLY return the defined controls that have actual data in them?
How many CPU cycles does isset() consume?
In php parlance I have to invoke two functions, $_POST[] and isset(); how much more additional overhead is required? In a single instance it is more than likely negligible, but if I have 1000 users all hitting the same server, what is the additional overhead? IMHO that is the pr
Re: (Score:2)
To check to see if data is there? Post is defined as returning the control and its associated data, even if the data is null, well with one glaring exception.
And if you want to go deeper into how badly the DOM was conceived, why isn't a text area an input element? It is used to capture multi-line text input all the time, we have both used it exactly that way. But if in JavaScript you want to ripple through all the input controls, guess what, text areas do not show up. DOM has serious problems that nee
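For what it's worth, the behavior being complained about in this thread is real: browsers omit unchecked checkboxes from the submitted form data entirely, so server code has to treat a missing name as "unchecked". A sketch of that normalization in JavaScript (the field names here are invented for illustration):

```javascript
// Browsers omit unchecked checkboxes from submitted form data entirely,
// so the server must treat "absent" as "false". A generic normalizer:
function normalizeCheckboxes(postData, checkboxNames) {
  var result = {};
  for (var key in postData) {
    if (Object.prototype.hasOwnProperty.call(postData, key)) {
      result[key] = postData[key];   // copy the ordinary fields through
    }
  }
  checkboxNames.forEach(function (name) {
    // present => checked, absent => unchecked
    result[name] = Object.prototype.hasOwnProperty.call(postData, name);
  });
  return result;
}

// "subscribe" was left unchecked, so the browser never sent it:
var posted = { username: "alice", terms: "on" };
var clean = normalizeCheckboxes(posted, ["terms", "subscribe"]);
console.log(clean.terms);     // true
console.log(clean.subscribe); // false
```

This is the same job PHP's `isset($_POST[...])` does inline; centralizing it in one pass keeps the per-field checks out of application code.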
Re: (Score:2)
Read much?
...you have a query ready to be sanitized before it is passed onto the database server.
Re: (Score:2)
Game over, thanks for playing, no refund for you.
It was an example of why the UA should return ALL the controls WITH their associated data.
Re: (Score:2)
Re: (Score:1)
For the most part, but Google Office has some advantages. Multiple people editing the same document at the same time can be really powerful.
Re: (Score:1)
Web programming has always felt to me like a failed attempt at reinventing X11.
Except the web works on dialup links whereas X11 feels sluggish on my 10/100 network
Re: (Score:2)
The Application Browser will install in user space, require no system privileges, maintain its own lib's and dll's in its own directory space and more importantly it will be lightweight, fast and very very small. It will stay the hell out of the windows registry and stay the hell out of the etc directory. It will keep everything it needs local.
I must have missed something (Score:1)
Isn't local storage part of HTML 5?
Re:I must have missed something (Score:5, Informative)
Yes, and I've already written apps using it. Safari supports the html5 local storage pretty well, including in the iPhone.
I, too, am unsure how this differs from other new local db storage techniques.
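For readers unfamiliar with it, the HTML5 storage mentioned above is the `localStorage` key/value API (a separate thing from the Web SQL and Indexed DB database proposals the article is about). A minimal sketch, with an in-memory fallback so the same code also runs outside a browser:

```javascript
// Minimal wrapper over the HTML5 Web Storage API. Falls back to a plain
// in-memory object where `localStorage` is unavailable. Note that Web
// Storage keys and values are always strings.
var memoryStore = {};

function storageSet(key, value) {
  if (typeof localStorage !== "undefined") {
    localStorage.setItem(key, value);   // persists across page loads
  } else {
    memoryStore[key] = String(value);   // fallback: not persistent
  }
}

function storageGet(key) {
  if (typeof localStorage !== "undefined") {
    return localStorage.getItem(key);   // null if the key is missing
  }
  return Object.prototype.hasOwnProperty.call(memoryStore, key)
    ? memoryStore[key]
    : null;
}

storageSet("draft", "hello");
console.log(storageGet("draft"));   // "hello"
console.log(storageGet("missing")); // null
```

Indexed DB, by contrast, is an asynchronous object database with indexes and transactions; `localStorage` is the simple synchronous end of the same "local data" spectrum.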
Re: (Score:2)
I, too, am unsure how this differs from other new local db storage techniques.
Since they couldn't decide on what version of SQL dialect to use for Web SQL they're abandoning it for this new and improved idea developed by someone at Oracle. With Oracle's attitudes about licensing how could this be anything but the perfect solution?
For this I generally use an XMLHttpRequest for a tiny file. If I get my "200" I'm connected, if I don't I'm not.
Re: (Score:3, Informative)
Why? Just make a request to your webserver. Even if you are connected to the "Internet", if you can't access your server you won't be able to sync.
Re: (Score:2)
Why? 'cause while it is possible to do in javascript, the appropriate place for it is the application/browser.
Re: (Score:3, Informative)
But the browser doesn't know how your app works! What about if your domain is accessible, but the URLs that provide the webservice your app needs aren't?
You'd have to provide a URL anyway, so the abstracted code would be something like:
function isNetConnected(url) {
  var request = new XMLHttpRequest(); // create the request object
  request.open("GET", url, false);    // synchronous request
  request.send(null);
  return (request.status == 200);
}
I don't find this to be "REALLY useful".
Re: (Score:2)
I'll stick to my guns, having created apps using local data for mobile use. Having the ability to detect net connection would be really useful.
While it's possible to hack this together in javascript, getting system status information from a try/catch type block executed in an asynchronous fashion leads to false positives, false negatives, and code that's generally difficult to debug.
It's not important that it works for you, it's important it work in the field.
A simple synchronous javascript API that allows t
Re: (Score:2, Informative)
What's missing, by the way, in my opinion, to make these REALLY useful, is a simple javascript call to determine if you are currently web connected.
You mean something like
var online = navigator.onLine
as defined in [w3.org] ?
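Note that `navigator.onLine` only reflects the local network interface, so the earlier point stands: actually reaching your own server is the real test. A sketch combining the two checks, with the transport passed in so the logic can be exercised without a browser (the `/ping` URL and callback shape are invented for illustration):

```javascript
// navigator.onLine can report true while the app's server is still
// unreachable, so combine the cheap local check with a real request.
// `fetchLike` is injectable so the logic can run outside a browser.
function checkConnectivity(url, fetchLike, navigatorLike, callback) {
  if (navigatorLike && navigatorLike.onLine === false) {
    // Interface is known to be down: no point issuing a request.
    callback(false);
    return;
  }
  fetchLike(url, function (status) {
    callback(status === 200);   // only our own server answering counts
  });
}

// Example with a stubbed transport that always answers 200:
var fakeFetch = function (url, cb) { cb(200); };
checkConnectivity("/ping", fakeFetch, { onLine: true }, function (ok) {
  console.log(ok);   // true
});
checkConnectivity("/ping", fakeFetch, { onLine: false }, function (ok) {
  console.log(ok);   // false -- short-circuited by navigator.onLine
});
```

In a real page `fetchLike` would wrap an XMLHttpRequest and `navigatorLike` would just be `navigator`.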
I'm glad Microsoft is involved in the early stages (Score:4, Insightful)
so they have plenty of time to plan the (seemingly) minor but maddeningly frustrating ways they'll deviate from the standard.
Re: (Score:1)
mod parent up, we've all been there, wait, we are still there.
Re: (Score:2)
You would prefer IE to just use MSSQL CE?
Need to decouple Javascript before it's too late (Score:5, Insightful)
And I see that our options as developers for interacting with this stunning new invention are still limited to one: Javascript.
With application development increasingly moving to the browser, we as developers are going to find ourselves locked into a one language platform.
The browser platform should standardize on a VM, not on a language. Say goodbye to traditional paths of evolution of programming languages driven by competition. Want to innovate by using a functional language to bring your solution to market faster? No can do. It's the JavaScript way or the highway.
Re:Need to decouple Javascript before it's too lat (Score:1)
Re: (Score:2, Interesting)
Re: (Score:3, Informative)
Want to innovate by using a functional language to bring your solution to market faster? No can do.
That's not entirely true - you could write in Haskell and compile to JavaScript [haskell.org].
Re: (Score:2)
Yes, I could also burn my own eyes out with a cutting torch but why would I want to!
Re: (Score:3, Interesting)
Not entirely true. Technically xslt is a programming language and is supported by many browsers. I know of at least one person writing an XML/XSLT CMS.
It's too late (Score:3, Insightful)
Want to innovate by using a functional language to bring your solution to market faster? No can do.
If you're familiar enough with functional language F (and JavaScript) to be justifiably snobby about JavaScript's status as a functional language and suggesting a VM as a solution, you shouldn't have much trouble writing an F-to-JavaScript compiler.
(If you do, then you likely fail the "justifiably" part of the snobby criteria, and you're also probably not likely to get a jump on that time-to-market measure, g
Re: (Score:3, Insightful)
Re: (Score:2)
...lag permanently 5 years behind because it doesn't help them sell more server hardware... And the whole thing can just fester until Google comes along and teams of the smartest people in the world waste years of their lives building a layer of sanity over the JavaScript mess that is acceptable enough to write apps for...
Java had almost everything right, but failed utterly on two important fronts:
- slow initial load time of the interpreter.
- GUI designers for layout and graphical elements
The first problem still hasn't been solved, and it's been more than a decade. When the Java plugin starts, everything grinds to a halt. What's even worse is that I have a fast dual-core CPU and a high-end SSD, and it still takes forever for that thing to load. Meanwhile, .NET apps and Flash both start instantly.
The second problem is only now
Re: (Score:2)
- slow initial load time of the interpreter.
- GUI designers for layout and graphical elements
True... although third parties would have sprung up to take care of the GUI designer issue if there had been more demand. Sun did make attempts in this area, but they couldn't overcome their earlier mistakes.
The slow startup issue could have been addressed earlier with a resident VM strategy... but they wanted to wait for a perfect isolation API... and they wasted a decade on that and it never mattered.
The bigger problem though was going with a native AWT from the start instead of a mostly Java implementation like the current Swing. Netscape wanted a native look and feel so they caved and put out this mess that was the original AWT that didn't work consistently across platforms and wasn't truly extensible... and that was a lot of people's first exposure to Java. Then Microsoft froze progress there in their browsers and the rest is history.
Re:Need to decouple Javascript before it's too lat (Score:2)
Which is why it makes an easy target for a Scheme compiler, right? [github.com] [plt-scheme.org] [brown.edu]
Death of Web as I know it. (Score:3, Insightful)
The day I as a user am not able to resize the browser window, adjust font size or copy-paste any random text from a page will be the death of the web as far as I'm concerned.
Indexed DB/etc is OK - but the rest of the crap they do under the guise of making the web seamlessly integrate with the desktop is a huge leap back.
Some people have to sit for a moment and recall why web applications started winning over desktop applications.
Users like the division (Score:2)
Web as application delivery mechanism (Score:2)
But with this web stuff, now, if I want to persist data, I need to do
if you want to reinvent the wheel, do it right! (Score:2)
another idea
Re: (Score:2)
Damn and here I am with no mod points. +10^6 insightfull!!!!! You GO boy!
Re: (Score:2)
Damn and here I am with no mod points. +10^6 insightfull!!!!! You GO boy!
Wow, here I was thinking I've been getting a lot of mod points lately - but you've got me beat by several orders of magnitude!
Cookies on steriods (Score:1)
This is so ass backwards (Score:2)
I have an idea. Let's create a lightweight desktop app that can browse the web and stream audio/video, upload/download files, and submit text for online shopping, and posting to Slashdot. Let's call it web... err... uhmm... web browser. Yeah, that's it. Let's call it a web browser.
If we need to do anything more, develop a "helper application". Even better; an internet-enabled app that avoids screwing around with my browser altogether. I don't know about everybody else here, but I was around in the da
Re: (Score:3, Informative)
Re: (Score:2)
1 Introduction.
Hokay...
Someday the W3C is going to learn how to write I think. You know, in a human language, where the laws of physics apply. Non-normative indeed.
Re: (Score:2)
Re: (Score:2)
Yep, and if they didn't recommend a space limit of 10 megs, it might almost be useful.
Re: (Score:1)
Good news, no one will force you to participate. Isn't it great? You get to ignore it because you hate it, and those of us who don't hate it get to not be ignorant. Life is grand with choice.
Re: (Score:3, Insightful)
Microsoft will get behind anything that means the wheel can be reinvented - because somehow, some way, they will be able to make money from not actually having done anything new.
Re: (Score:2)
When was the last time you gave an intelligent response to an argument, rather than just throwing out abuse? | http://tech.slashdot.org/story/10/03/13/1659223/key-web-app-standard-approaches-consensus?sdsrc=next | CC-MAIN-2015-22 | refinedweb | 5,030 | 72.87 |
Re: [Zope-dev] Methods through the Web (security?)
Brian Lloyd wrote: I don't have a good answer for you, though I tend to agree with you that some things just don't want to be accessed outside of some larger context. I'd like to hear some different viewpoints on how people think something like this should work... What the difference
Re: [Zope-dev] Methods through the Web (security?)
Brian Lloyd wrote: Yes you could, except that you would also make them inaccessible from DTML (or from anywhere else) for the same class of users. Is it really acceptable that in order to use dtml-in objectIds on a page that needs to be accessible to anonymous users that I must grant
Re: [Zope-dev] Methods through the Web (security?)
Tres Seaver wrote: I don't get the issue here, I guess; either anonymous users can view objectIds (through the web, through XML-RPC, whatever), or they can't (because you don't want them to have the information that a given object is there, I guess?) Perhaps you just don't want to expose
Re: [Zope-dev] New Help System in 2.2
Paul Everitt wrote: I
Re: [Zope-dev] The future of Zope with XML
Kevin Dangoor wrote: Thanks for getting this document going. It's good to see where XML is going in Zope. [note that the zope-xml list may be a better list to discuss these things; I've cc-ed to there but we might want to move the thread there completely] I do have a question,
Re: [Zope-dev] Zope vs. .... missing features
Thomas Weholt wrote: Have anybody compared Zope to Roxen? Midgard? Or similar products? I saw Roxen had document revision system, a thing I "reported missing" earlier in this list. While Zope may not have that, it does have versions and undo. More version control facilities would indeed be
[Zope-dev] Catalog acquisition problems?
Hi there, We've been experiencing some odd interactions between the ZCatalog and acquisition. Inside the dtml-in catalog .. tags things seem to go screwy. It's picking up the properties in the root folder instead of in the subfolder (the context), where the dtml-in catalog .. is used. Why
Re: [Zope-dev] Catalog acquisition problems?
Dieter Maurer wrote: Martijn Faassen writes: [snip acquisition 'problem' with the catalog] I can understand that the result is not what one wants, but when I understand aquisition correct, it is in accordance with the documentation. Yes, after some more pondering and some experimentation
[Zope-dev] 2.2 annoying nit
Hi there, I see that the 2.2 still has this annoying, and seemingly completely unnecessary minor change in the tab order: Tabs for folder in 2.1: Contents View Properties Import/Export Security Undo Find Tabs for folder in 2.2: Contents Import/Export Properties View Find Security Undo
Re: [Zope-dev] I feel your Wiki Pain ;-)
Toby Dickenson wrote: On Fri, 15 Sep 2000 11:27:33 -0400 (EDT), Ken Manheimer [EMAIL PROTECTED] wrote: (Not sure that will scale, but creating new lists for each proposal definitely won't scale. I dont see this as a problem: You only create a new list when the traffic for that proposal
Re: [Zope-dev] 2.3.0 release badness
Chris Withers wrote: [snip] Sometimes, it'll just sit there redirecting back to the css page infinitely.. yum :-S Yeah, I've seen that kind of weirdness show up too occasionally; some kind of infinite css getting loop in Netscape. A couple of times I've seen it blow up the server logs; I
Re: [Zope-dev] 2.3.0 release badness
[EMAIL PROTECTED] wrote: I
[Zope-dev] ZSQL methods seriously broken
Hi there, Type marshalling is seriously broken in ZSQL methods. The bug is a bit subtle, though. There are reports of this in the collector almost a month old, and the severity of this bug is pretty high (could seriously disrupt Zope upgrades to recent versions which apparently have this bug; I
Re: [Zope-dev] ZSQL methods seriously broken
[browsing through old versions of Zope] I can't find any code that's supposed to do this in old versions of Zope either. (I may be missing something, though) If this feature was never there, I'd consider the ZSQL documentation (for instance in the Zope help) to be quite broken however.
Re: [Zope-dev] How long below the radar?
Rene Pijlman wrote: On 10 Jul 2001 08:06:42 +0200, you wrote: | How about treating some of the most critically needed Zope modules | as a community project? I agree totally. So what do you think are the most needed Zope products? Form tools! :) Uhmm. Check. Feel free to join the
[Zope-dev] bug in OFS/Traversable.py
Hi there, There appears to be a bug in Zope 2.4.1's OFS/Traversable.py. I've tried to identify the same bug in the CVS, but oddly enough I couldn't find it. There doesn't appear to be any Zope 2.4.1 CVS branch and the Zope 2.4 branch didn't have any changes to this file in 3 months, but doesn't
Re: [Zope-dev] DISCUSS: Community checkins for CVS
Paul Everitt wrote: At last, the announcement I've been dying to make. After much deliberation -- meaning, I've procrastinated for too long :^) -- I'm pleased to announce our approach for opening the CVS repository to community checkins. Cool, at last!
Re: [Zope-dev] ZTables and/or Catalog plugable brains?
Jay, Dylan wrote: [snip] Also in my searches I came across lots of references to something called ZTables. This seems to be a Catalog with a UI that is about lots of tabular information (rather than a ZCatalog which is specialized to replicating and indexing existing objects). Is this dead?
Re: [Zope-dev] not Python 2.2a1 but Python 2.2b1
Hannu Krosing wrote:. But
Re: [Zope-dev] Stripogram or similar in core
Andy,
Re: [Zope-dev] ParsedXML in Zclass methods loses permissions on Zope restart
Brad Clements wrote: I'm still casting around for a suggestion on where I can go to fix this. I have a ParsedXML object in the methods list of a ZClass Product. The Access Contents Information Permission Mapping always get's reset to blank in the ParsedXML object when Zope restarts.
Re: [Zope-dev] Cool stuff!
Phillip J. Eby wrote: P.S. Speaking of naming, I still dislike feature as a term for interface implementations; various suggestions available on the Feature page of the ComponentArchitecture Wiki. :) I agree. I still much prefer 'adaptor' and I don't buy the 'adaptors sound too much like
Re: [Zope-dev] My thoughts on the development process
Chris McDonough wrote: The other thing is that the core coders at Zope Corp snip are the only ones that can get around the fishbowl if they so desire. Here! Here! Not really. I couldn't, at least. You guys can use the fishbowl as what is in effect an announcement service. I'm
Re: [Zope-dev] My thoughts on the development process
Chris McDonough wrote: There really is a lot more work that goes into the stuff in the fishbowl from the folks at ZC than just an announcement Exactly. But in the end, if nobody responds except internally at ZC, and you implement it, the fishbowl stuff is kind of an announcement, right? And
Re: [Zope-dev] ZPT Plain Text
Chris Withers wrote: Phillip J. Eby wrote: I personally would like to see ZPT support plain text at some point, and it already has some of the things necessary to do it. But that's a separate issue from Zope 3X or Zope 3 itself. It already can: dummy tal:omit-tag=
Re: [Zope3-dev] Re: [Zope-dev] Cool stuff!
Re: [Zope-dev] My thoughts on the development process
Chris Withers wrote: Martijn Faassen wrote: a mailing list, are needed at least to get contributors going. I had to ask about releasing ParsedXML several times until I got some kind of 'aye' out of anyone. And it still wasn't clear. I shouldn't have to be that persistent. Well
Re: [Zope-dev] ZPT Plain Text
Re: [Zope-dev] disabling gc does not necessarily hide memorycorruption
Matthew T. Kromer wrote: [snip] Actually, I was kind of hoping Martijn Faassen would pipe up and say I applied the restricted python patches you've already put up on the Zope-2_4-branch, and my problems with ParsedXML went away! since he's one of the folks that did NOT benefit from
Re: [Zope-dev] Re: Zope 2.4 crashes -- possible fix identified, other solutions also suggested
Leonardo Rochael Almeida wrote: On Tue, 2001-12-18 at 13:44, Matthew T. Kromer wrote: Soo... if shutting off GC extends time between crashes for some folks from every 15 minutes to 3 times a day, my advise is to shut off GC. Now I can really confirm that gc.disable() is enough to avoid
[Zope-dev] security.declareProtected doesn't always work?
Hi there, I have some issues with using declareProtected() outside product classes (deriving from ObjectManager or SimpleItem). An external method example that _does_ work, taken from the ZDG: import Globals import Acquisition from AccessControl import ClassSecurityInfo class
Re: [Zope-dev] security.declareProtected doesn't always work?
Dieter Maurer wrote: [snip] Now replace the line security.declarePublic('getTitle') with something like security.declareProtected('View', 'getTitle'), and suddenly nobody is allowed to call getTitle() on a Book object anymore. You must acquistion wrap your book objects. Otherwise,
Re: [Zope-dev] Benchmarks: DTML vs. ZPT?
Chris Withers wrote: seb bacon wrote: It wouldn't surprise me - ZPT has the roughly the same overheads as DTML for the language parsing, but a presentation template goes through an HTML parser in addition - which is always going to be quite slow in python. IIRC, The HTML Parser is
[Zope-dev] copy paste 'leakage'
Hey, I'm running into a weird problem I'm not sure how to tackle. I've noticed that under some circumstances it takes a long time to copy and paste a ParsedXML object. This seems to happen in a clean Zope, at least in the Zope root, though it doesn't seem to happen in folders. I've also had it
Re: [Zope-dev] copy paste 'leakage'
Hi again, Another data point. Copy paste of ParsedXML documents is normal and fast when the object is in a folder not surrounded by too many other folders (or objects in general, not sure yet). If I create a bunch of very large folders sitting next to the ParsedXML document that I'm going to
Re: [Zope-dev] Zope 2.6 project updated
Gary Poster wrote: So, um, Stephan, any ideas? :-) I know you are busy, but are you interested in getting this in 2.6? I could help with testing as before, but I'd prefer to have you signed on as the primary resource. I don't know OrderedFolder very well but it'd be very useful in several
Re: [Zope-dev] Acquisition problem in 2.5.1b1? (was: where is Zope 2.5.1?)
Frank Tegtmeyer wrote: Brian Lloyd [EMAIL PROTECTED] writes: We are trying to get to the bottom of a few straggling instability reports, so we're planning to go ahead with I started with 2.5.1b1 today and have problems with our one central index_html approach. That would be really
Re: [Zope-dev] New-style ExtensionClass
Hey, Belated response, but.. Jim Fulton wrote: Speaking of Zope 2.8, Jeremy Hylton has suggested that, perhaps, Zope 2.8 should be a release that provides *only*: - New-style ExtensionClass, and - ZODB 3.3, featuring multi-version concurrency control, plus any features that have been
Re: [Zope-dev] New-style ExtensionClass
Jim Fulton wrote: See: Packages3/Interface in CVS If you put this ahead of the Zope 2 Interface package in your Python path, then you can use Zope 3 interfaces with Zope 2. That's great news! Is it the intention that this will be the default Interface package in Zope 2.8 then, or is
Re: [Zope-dev] Zope.org - SteveVisitingFredericksburgSprint
Jim Fulton wrote: Steve Alexander and I will be hosting a sprint in Fredericksburg January 12-14, 2004: A possible topic is Zope 2 to Zope 3 transition and working on Zope 2.9. Before independent discussions erupt here, see
[Zope-dev] 2.7 management_page_charset cannot be callable anymore
Hi there, Some changes in Zope 2.7 break the possibility to make management_page_charset a callable (for instance a method). This breaks Formulator, as it uses this facility. This works just fine in Zope 2.6, but breaks in Zope 2.7. The silly thing is that Formulator 2.6.0 breaks in Zope 2.7
Re: [Zope-dev] 2.7 management_page_charset cannot be callable anymore
Brian Lloyd wrote: I forward-ported these to the 2.7 branch the head. Any testing you can do to make sure I didn't break anything would be appreciated. I'm having trouble understanding what you forward-ported and what you'd like me to test. As far as I can determine
Re: [Zope-dev] 2.7 management_page_charset cannot be callable anymore
Re: [Zope-dev] 2.7 management_page_charset cannot be callable anymore
Hajime Nakagami wrote: Hi Sorry I have not execute Zope 2.7 or HEAD now. But I think needs not only the patch, but also below [patch to properties.dtml] To repeat: patching properties.dtml will never be able
Re: [Zope-dev] 2.7 management_page_charset cannot be callable anymore
Brian Lloyd wrote: I was trying to be responsive to getting the issue resolved, since I'd like to make a (hopefully final) beta of 2.7 of Friday. I'll be happy to check in (or have you check in) whatever fixes are needed to give you the flexibility you need so long as it is b/w compatible,
Re: [Zope-dev] Put an adapted object in context
Santi Camps wrote: My problem is that the adapter object, and also the adapted object contained in it, are out of publisher context or something like this. For instance, absolute_url() methods doesn't work becouse REQUEST is not defined. I'm not sure I understand what you mean; I don't
Re: [Zope-dev] Re: Adapters in Zope 2
Santi Camps wrote: Very interesting. That's what I was looking for. I will try to extract this mechanism from CMF. Silva has co-evolved (some of it inspired directly by CMF, some by Zope 3) much of the same infrastructure. Our view system is quite different, and some large changes to it in
Re: [Zope-dev] Adapters in Zope 2
Leonardo Rochael Almeida wrote: Acquisition is very powerful, and very magic at the same time. Adapters is Zope3 way of implementing Acquisition in a less surprising way. The main drawback of acquisition, which is a drawback in general of Zope 2, is that namespaces get conflated. Zope 2 is
Re: [Zope-dev] Put an adapted object in context
Santi Camps wrote: Thats very interesting !! I was rewriting __getattr__ to allow the adapter access adapted object attributes, but doing this way its clear and easier. Inheriting from Acquisition Implicit and applying the adapter using __of__ I obtain the same result and have less problems.
[Zope-dev] Re: Interfaces in Zope 2.5, 2.7, and 3.x
Jim Fulton wrote:
[Zope-dev] Re: [Zope3-dev] Re: Interfaces in Zope 2.5, 2.7, and 3.x
Tres Seaver wrote: Here is an excerpt from the 'runzope' I use for FrankenZope sites (that is our affectionate name for that Interface package): [snip script] Thanks! I'll try this one out. Regards, Martijn ___ Zope-Dev maillist - [EMAIL
Re: [Zope-dev] PageTemplateFile vs. Bindings vs. Security
Shane Hathaway wrote: There
Re: [Zope-dev] PageTemplateFile vs. Bindings vs. Security
Jamie Heilman wrote: Martijn Faassen wrote: Shane Hathaway wrote: There certainly ought to be a way to create an unrestricted PageTemplateFile, though it should be an explicit step. That is a good suggestion. I'd like that option. It would also be a potential performance benefit. On the other
Re: [Zope-dev] PageTemplateFile vs. Bindings vs. Security
Dario Lopez-Ksten wrote: Jamie Heilman wrote: Martijn Faassen wrote: On the other hand, in situations where the PageTemplate designers are *not* security conscious (they're designers, not primarily programmers) the option of explicit checks is useful. PageTemplateFile is a class used
Re: [Zope3-dev] Re: [Zope-dev] Re: More arguments for z (was Re: Zope and zope)
Stephan Richter wrote: On Thursday 15 April 2004 11:39, Casey Duncan wrote: Additionally (and Jim and I have discussed this amongst ourselves) I feel strongly that the dependancies should be enforced by tests. That is, if you introduce and errant dependancy (by adding an import to a new package
Re: [Zope3-dev] Re: [Zope-dev] Re: More arguments for z (was Re: Zope and zope)
Stephan Richter wrote: On Thursday 15 April 2004 13:22, Martijn Faassen wrote: Note that for checking dependencies in Python code I still think this tool could be improved by using technology from importchecker.py which can use Python's
Re: [Zope3-dev], [Zope-dev] Import checking code
Fred Drake wrote: On Thursday 15 April 2004 13:22, Martijn Faassen wrote: If somebody lets me know which API they want implemented for retrieving imports (and use of imports) I could do this lifting work myself. I'm not sure simply re-implementing one of the finddeps.py internal interfaces
Re: [Zope-dev] Re: The bleak Future of Zope?!
Jim Fulton wrote: I'm surprised to read this. Could you be more specific about your concerns? Did you read Andreas Jung's mail? He was pretty specific, but I had to hunt around as in my mailreader his reply had broken the thread. Regards, Martijn ___
Re: [Zope-dev] The bleak Future of Zope?
Stephan Richter wrote: For Zope 3 however, I can give a very well-informed opinion. Philipp privately pointed out to me that people exected Zope 3 technologies to arrive earlier in Zope 2, such as the CA and principals maybe. Note that you were one of those people, in 2002. I remembering you
Re: [Zope-dev] Re: The bleak Future of Zope?
Stephan Richter wrote: Nobody is willing to contribute. ZC agreed to change zope.org to Plone so more community members can contribute. But noone has stepped up; that's very sad. I believe part of the blockage is because contributors have to sign far more than just a simple CVS contributor's
Re: [Zope-dev] Re: The bleak Future of Zope?
Casey Duncan wrote: On Wed, 21 Apr 2004 11:36:31 +0200 Andreas Jung [EMAIL PROTECTED] wrote: - very few people are willing to contribute to documentation On a bright note, I think zopewiki.org could change that. It *greatly* lowers the bar on contributing substantive docs for Zope. I would
[Zope-dev] On a constructive note: Zope 2.8
Hey there, I understand from: Zope 2.8 is now planned for june. If Zope 2.8 is indeed released by june this could fit fairly well with my own (also delayed :) plans for using this facility in Silva. The obvious area I could try
[Zope-dev] Re: The bleak Future of Zope?!
Jim Fulton wrote: Martijn Faassen wrote: Jim Fulton wrote: I'm surprised to read this. Could you be more specific about your concerns? Did you read Andreas Jung's mail? He was pretty specific, but I had to hunt around as in my mailreader his reply had broken the thread. I was responding
[Zope-dev] Re: On a constructive note: Zope 2.8
Jim Fulton wrote: Have interfaces stabilized enough to start this work, or should I wait until next month (may is indicated on the planning). I think so. You think I can start now or you think I should wait? :) What steps need to be taken concretely before such integration is considered
[Zope-dev] Re: On a constructive note: Zope 2.8
Jim Fulton wrote: Jim Fulton wrote: this could fit fairly well with my own (also delayed :) plans for using this facility in Silva. The obvious area I could try to contribute is in integrating Zope 3 interfaces in Zope 2. I meant to mention that Kapil has offered to work on this. I suggest you
Re: [Zope-dev] The bleak Future of Zope?
Lennart Regebro wrote: A lot of the things that are CMF should have been put into Zope core. Agreed, that'd been a lot better. The CMF is a framework. It'd be nicer if it'd been a set of independent components. Then Silva (for instance) could've used more of what's in the CMF than is possible
[Zope-dev] Re: [Zope3-dev] Decouple Interface and zope.interface (Martijn was right)
Jim Fulton wrote: [decouple interface implementation] *falls into a dead faint* *wakes up and starts bouncing around* *Loud cheering!* Awesome, thanks, Jim! A good start of my working week, too. *cough* *regains composure* *ahum* +1 Regards, Martijn
Re: [Zope-dev] Do we need a Packages directory in the new repository
Jim Fulton wrote: Historically, we've had Packages, Products, Packages3 and Products3 directories in the CBS repository. I wonder of we need these going forward. Perhaps we should just have top-level projct directories in the new subversion repository. I think having a distinction between Zope 2
[Zope-dev] Re: [Zope3-dev] RE: [ZODB-Dev] Subversion repository layout
Tim Peters wrote:
[Zope-dev] Re: [Archetypes-devel] Unicode in Zope 2 (ZMI, Archetypes, Plone, Formulator)
Bjorn Stabell wrote:) While
Re: [Zope-dev] Re: [Archetypes-devel] Unicode in Zope 2 (ZMI, Archetypes, Plone, Formulator)
David Convent wrote: Hi Bjorn, I always believed that unicode and utf-8 were same encoding, but reading you let me think i was wrong. Can you tell me what the difference is between unicode and utf-8 ? Unicode should not be seen as an encoding as such. While Python internally uses an encoding
[Zope-dev] Re: [Zope3-dev] RE: [ZODB-Dev] Subversion repository layout
Kapil Thangavelu wrote: sigh.. debating over what the book says isn't very productive. my conclusions at the end of my previous email, namely that what this layout will accomplish for the zopeorg repository in terms of avoiding renames of checkouts will likely be fairly limited in pratice, still
Re: [Zope-dev] Re: Read-only root database doesn't work ... bug or feature?
Dieter Maurer wrote: T
Re: [Zope-dev] Re: Read-only root database doesn't work ... bug or feature?
Paul Winkler wrote: On Mon, May 24, 2004 at 09:50:31AM +0200, Martijn Faassen wrote: Yup, it's the help system. This is very odd. Did you see the message I sent to formulator-dev a few days ago? No, sorry, just taking a look at it. I spent some time tracing the source of the ReadOnlyErrors
Re: [Zope-dev] Re: Read-only root database doesn't work ... bug or feature?
Paul Winkler wrote: On Mon, May 24, 2004 at 06:55:02PM +0200, Dieter Maurer wrote: Content-Description: message body and .signature Martijn Faassen wrote at 2004-5-24 09:50 +0200: ... I know this has been reported before but I haven't looked into it yet. I'm wondering how to handle Formulator
Re: [Zope-dev] Re: Read-only root database doesn't work ... bug orfeature?
Tim Peters wrote: [Martijn Faassen] ... I'm not sure whether the patch ever could've worked. Firstly, the rich comparison operations were never called; I think perhaps due to some limitation in ExtenionClass. ExtensionClass doesn't play well with many newer Python class features. Rich comparisons
Re: [Zope-dev] Re: Read-only root database doesn't work ... bug orfeature?
Paul Winkler wrote: The fix is in the latest Formulator CVS; Paul, please test it if you can and let me know if you still see the untowards behavior. It seems good, thanks!! Great! Thanks everybody! Regards, Martijn ___ Zope-Dev maillist - [EMAIL
[Zope-dev] Re: [Zope3-dev] status of Zope versus zope?
Jim Fulton wrote:
[Zope-dev] Re: [Zope3-dev] status of Zope versus zope?
Jim Fulton wrote:
[Zope-dev] Re: status of Zope versus zope?
Philipp von Weitershausen wrote: Martijn Faassen wrote: I don't understand what this means. A different directory on the python path? I would recommend leaving old Zope2 stuff in lib/python and putting all Z3-related stuff in a parallel directory called 'src'. That way you can run a whole
[Zope-dev] Re: [Zope3-dev] status of Zope versus zope?
Jim Fulton wrote: Martijn Faassen wrote: Hm, that's not a big deal then. I'm just at a loss how this would fix the case-insensitivity import problem on Windows; I think I'm missing something. Yes, you are. Python has no trouble importing two packages with names differing only by case
[Zope-dev] Re: [Zope3-dev] status of Zope versus zope?
Fred Drake wrote:
Re: [Zope-dev] Re: Five and 2.9
Jim Fulton wrote: Raphael Ritz wrote: Thanks for this clairification, Jim. Alan, does that address your concerns? Any reasons left, not to adopt the five approach to Zope 3? Just understand that the Five approach is still being developed, so there's nothing to adopt yet. :) But I certainly
Re: [Zope-dev] Re: [Plone-developers] Re: Five and 2.9
alan runyan wrote: Alan, does that address your concerns? Just understand that the Five approach is still being developed, so there's nothing to adopt yet. :) But I certainly encourage folks to participate and help Martijn figure out what the approach should be. Raphael, I think its great that
Re: [Zope-dev] Re: [Plone-developers] Re: Five and 2.9
Jim Fulton wrote: Can we please stop using this name in writing? It is funny, but not very reassuring to outsiders. :) What do we call this project then? :) The Project-that-shall-not-be-named! Regards, Martijn ___ Zope-Dev maillist - [EMAIL
Re: [Zope3-dev] Re: [Zope-dev] Re: Five and 2.9
Janko Hauser wrote: Martijn Faassen wrote: There's the 'approach' and the implementation. The approach is fairly clear: a focus on baby steps to integrate into Zope 2.7. The aim is to introduce as much as possible as make sense of Zope 3 facilities into Zope 2. Besides ourself also Christian
Re: [Zope-dev] Events in the core
Re: [Zope-dev] Events in the core
Florent Guillaume wrote: [snip]
Re: [Zope-dev] Renamed the Zope package to Zope2 and including Zope 3 packages in Zope 2.8
Sidnei da Silva wrote: On Mon, Jan 31, 2005 at 10:53:01AM -0500, Jim Fulton wrote: snip | I haven't decided | which parts of Zope 3 should be included in Zope 2.8 and would like to | get input. If you have suggestions on what to include or exclude, | please respond here or on the z3-file list,
Re: [Zope-dev] Renamed the Zope package to Zope2 and including Zope 3 packages in Zope 2.8
Jim Fulton wrote: Paul Winkler wrote: +1 on all of those from me. However, I will be satisfied with anything that gets released as 2.8 sometime this year ;-) Absolutely. The top priority, IMO, is getting 2.8 out as soon as we can. Excuse me, but it seems bizarre to me that *if* the top priority
Re: [Zope-dev] Re: Renamed the Zope package to Zope2 and including Zope 3 packages in Zope 2.8
Christian Heimes wrote: Jim Fulton wrote:,
Re: [Zope-dev] Renamed the Zope package to Zope2 and including Zope 3 packages in Zope 2.8
Stephan Richter wrote: On Wednesday 02 February 2005 05:28, Chris Withers wrote: Martijn Faassen wrote: That's only to make things more easily deployable. Right now the hard part is however detaching Zope 3 stuff from its dependencies Really? That's extremely disappointing :-( The most important
Re: [Zope-dev] Renamed the Zope package to Zope2 and including Zope 3 packages in Zope 2.8
Jim Fulton wrote: Martijn Faassen wrote: ... Five has dependencies on zope.app, so to make Five use Zope 2.8 packages would require quite a bit of Zope 3 to be pulled in, or an awful lot of work to prevent it from being pulled in. I think Zope 3 is at a point where, if there are volunteers
Re: [Zope-dev] Re: Renamed the Zope package to Zope2 and including Zope 3 packages in Zope 2.8
Jim Fulton wrote: Would it make sense to have Zope 2.8 include all of the packages below other than zope.app and for Five to supply it's own zope.app? It would make life harder for Five, and create more work for us, as we'd have to worry about: * shipping a zope.app ourselves (does it contain
Re: [Zope-dev] Re: Renamed the Zope package to Zope2 and including Zope 3 packages in Zope 2.8
Jim Fulton wrote: Lennart Regebro wrote: [snip] I'm leaning more towards realeasing 2,8 now, and skipping this renaming thing alltogether. But then, I don't know your reason for wanting to do it in Zope 2.8, which I expect is a really good one (it usually is). I want zope.interface and
Re: [Zope-dev] Re: Renamed the Zope package to Zope2 and including Zope 3 packages in Zope 2.8
Dieter Maurer wrote: Martijn Faassen wrote at 2005-2-2 19:09 +0100: ... What other use cases are floating around? The CMF user group would like to use Zope3's events and subscriptions to make creation, deletion and modification interception more flexible. Yes, those are definitely useful. I mean
[Zope-dev] Re: [Zope3-dev] Re: Heads-up: Zope 2.8, Zope 3 and Five
On Tue, Mar 15, 2005 at 04:57:00PM +0100, Christian Heimes wrote: I really *love* to have Five and parts of ZopeX3 in Zope2 but I don't like how it is happening. Zope 2.8 is starting to stablize and still contains some critical bugs like the incompatibility with old style BTrees and you are
[Zope-dev] Re: [Zope3-dev] Re: Heads-up: Zope 2.8, Zope 3 and Five
On Tue, Mar 15, 2005 at 07:31:51PM +0100, Christian Heimes wrote: Martijn Faassen wrote: [Could you point me to the issue or mail describing the old-style BTree problem? I may have run into it under another name or something.] Persistent* were fixed
[Zope-dev] Re: [Zope-Checkins] SVN: Zope/trunk/ - Applied patch for
On Thu, Mar 17, 2005 at 10:34:10AM -0500, Tim Peters wrote: [Sidnei da Silva] Humm... we are trying to push a Zope 2.8 beta out, do you have or know of plans to use ZODB 3.4 with Zope 2.8? Yes. Jim needs to fix ZClasses for 2.8 too. ZODB 3.4 requires some Zope(3) features, like
[Zope-dev] Zope 2.8 + Five post-sprint status
Hey everybody, We're wrapping up here at a very pleasant and productive Zope 2/3/Five sprint here in Paris. We've accomplished quite a lot, and we'll let you hear what this is in more detail soon. We've spent a lot of time with Zope 2.8, integrating Zope X3.0 and Five into it. Our work is on | https://www.mail-archive.com/search?l=zope-dev@zope.org&q=from:%22Martijn+Faassen%22 | CC-MAIN-2018-47 | refinedweb | 5,255 | 64.51 |
RodneyShag + 23 comments
Java solution - passes 100% of test cases
I use a Trie.
If your tests are hanging/timing out/not finishing, you likely need a faster algorithm.
I keep a "size" at each node that keeps track of how many complete words can be made from that node. This makes my runtime faster since I don't have to recalculate the number of valid words from a node.
import java.util.Scanner;
import java.util.HashMap;

public class Solution {

    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        int n = scan.nextInt();
        Trie trie = new Trie();
        for (int i = 0; i < n; i++) {
            String operation = scan.next();
            String contact = scan.next();
            if (operation.equals("add")) {
                trie.add(contact);
            } else if (operation.equals("find")) {
                System.out.println(trie.find(contact));
            }
        }
        scan.close();
    }
}

/* Based loosely on tutorial video in this problem */
class TrieNode {
    private HashMap<Character, TrieNode> children = new HashMap<>();
    public int size;

    public void putChildIfAbsent(char ch) {
        children.putIfAbsent(ch, new TrieNode());
    }

    public TrieNode getChild(char ch) {
        return children.get(ch);
    }
}

class Trie {
    TrieNode root = new TrieNode();

    public void add(String str) {
        TrieNode curr = root;
        for (char ch : str.toCharArray()) {
            curr.putChildIfAbsent(ch);
            curr = curr.getChild(ch);
            curr.size++;
        }
    }

    public int find(String prefix) {
        TrieNode curr = root;
        for (char ch : prefix.toCharArray()) {
            curr = curr.getChild(ch);
            if (curr == null) {
                return 0;
            }
        }
        return curr.size;
    }
}
From my HackerRank Java solutions.
kruti_bliss11 + 2 comments
Awesome! but does it pass all test cases?
RodneyShag + 1 comment
Yes
stani_frolov + 1 comment
Great tip with keeping a numbers variable at each node. thanks!
ashish_kambiri + 1 comment
It increments the size if we do:
add hack add hack
jimmy_newsom + 0 comments
it says there are no duplicate words in the input, so for this example it doesn't matter. Good point though
parthshorey + 1 comment
It doesn't work. It breaks at putIfAbsent. Is this Java 7 or 8?
aleathorn + 3 comments
I did the same. Here is my C++ solution. I probably could have come up with a better class name. What to name things is always the challenge, isn't it?
class Words {
public:
    Words() {}
    int fullCountUnder;
    bool isFullWord;
    map<char, Words*> m;

    void insert(string text) {
        Words* word = this;
        word->fullCountUnder++;
        for (char &c : text) {
            Words* prev = word;
            word = word->has(c);
            if (word == NULL) {
                Words* newWord = new Words();
                prev->m.insert(pair<char, Words*>(c, newWord));
                word = newWord;
            }
            word->fullCountUnder++;
        }
        if (word != this) {
            word->isFullWord = true;
        }
    }

    Words* has(char c) {
        map<char, Words*>::iterator it = this->m.find(c);
        if (it != this->m.end()) {
            return it->second;
        }
        return NULL;
    }

    int matchCount(string partial) {
        Words* word = this;
        for (char& c : partial) {
            word = word->has(c);
            if (word == NULL) {
                return 0;
            }
        }
        return word->fullCountUnder;
    }
};
fei_overney + 1 comment
In C++, the data members get automatically initialized in the (default) constructor; in this case, fullCountUnder = 0, isFullWord = false, and m is created with size 0
aleathorn + 1 comment
Thanks for this. I see that it is initializing these values as you mentioned, but I couldn't find any documentation on whether or not this would always be the case. Obviously it's best to be safe, in case something changes, so I'm curious if this is a feature or a byproduct of how things currently work. Any ideas?
fgleeson68 + 0 comments
In modern C++ (C++11, 14, 17 and so on) there are uniform initialization rules. It is a good habit to use the curly braces {} to specify either default construction (as was happening accidentally in your case) or to supply params if using a different constructor. You can do this right in the header file when you declare the variable and avoid adding ugly initialization clauses just before the constructor body as was needed in the past. Although the default constructor will still save you, using the curly braces makes your intention clearer to programmers reading your code.
class A {
    A() = default;
private:
    bool isWord{};
};
yl125 + 0 comments
I think you can use recursion and it makes the class much simpler. Here is what I did with my class.
class Node {
public:
    map<char, Node> children;
    bool isCompleteWord;
    int words;

    Node() { isCompleteWord = false; words = 0; }

    void addContact(string s) {
        words++;
        if (s.size() == 0) {
            isCompleteWord = true;
        } else {
            children[s[0]].addContact(s.substr(1));
        }
    }

    int find(string s) {
        if (s.size() == 0) {
            return words;
        } else {
            return children[s[0]].find(s.substr(1));
        }
    }
};
f_hajirostami + 1 comment
Also based on your implementation, there's no need for a default constructor / the Trie(String[] words) method
RodneyShag + 1 comment
Actually it's still needed. Since a non-default constructor was created, Java requires that we create a default constructor as well. If you remove it and try to run the code, you will see that it will fail.
f_hajirostami + 1 comment
I did it on Java 8 and it worked fine
RodneyShag + 0 comments
I should have been clearer: Since my code says:
Trie trie = new Trie();
then I need to define a default constructor since one is not going to be defined for me (since I created a constructor with 1 parameter). I can however remove the line above (or replace it with something else), and then I will be able to remove my default constructor as well. Let me know if you have any other questions.
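A tiny sketch of that rule (the class names here are made up for illustration):

```java
// Declaring any constructor stops the compiler from generating the
// implicit no-arg one; it must be written back by hand if still needed.
class WithParamCtor {
    WithParamCtor(String[] words) {}   // suppresses the implicit no-arg constructor
}

class WithBothCtors {
    WithBothCtors(String[] words) {}
    WithBothCtors() {}                 // restored explicitly, so `new WithBothCtors()` compiles
}

public class CtorDemo {
    public static void main(String[] args) {
        // new WithParamCtor();        // compile error: no no-arg constructor exists
        new WithBothCtors();           // fine
        System.out.println(WithParamCtor.class.getDeclaredConstructors().length);
        System.out.println(WithBothCtors.class.getDeclaredConstructors().length);
    }
}
```

The reflection calls at the end confirm the first class has exactly one declared constructor while the second has two.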
apontejose + 0 comments
Thanks for the words counter tip. That made a huge difference.
Lesson: If your program is timing out, find out the main performance issue and find a way to mitigate it.
arpita_dessai + 0 comments
Thank you. Maintaining "size" at each node helped. I was using a (redundant) queue to traverse the trie and then calculate, and that was turning out to be very expensive.
hartleymack + 1 comment

[deleted]

RodneyShag + 0 comments

rnatesan21 + 1 comment
not a big deal... for simplicity you could have used:

for (char c : str.toCharArray())

and

for (char c : prefix.toCharArray())

instead of

for (int i = 0; i < str.length(); i++)

and

for (int i = 0; i < prefix.length(); i++)
RodneyShag + 0 comments
for (char c : str.toCharArray())

does make the code look cleaner, but it may make it slower since we have to convert the String to a character array before traversing it.
timiareola + 0 comments
thank you. I couldn't have done this problem without your help. Her tutorial video didn't do much for me at all. I'm still a little lost about your solution, but with a couple extra run-throughs I should be able to figure it out. Thanks!
Quazar777 + 1 comment
how does your algorithm differentiate between the first and second 'h's of 'hackathon'?
RodneyShag + 0 comments
For add, the algorithm just builds the Trie h-a-c-k-a-t-h-o-n.
For find, let's say we're trying to find "h", well it starts at the root of the Trie and searches from there, so it doesn't start at the 2nd h.
vishwa_bhat19 + 1 comment
Hi rshaghoulian,
A small doubt here.. in the add method the following two lines of code are confusing me a bit:
curr = curr.getChild(ch);
curr.size++;
Why are we incrementing the size variable of the child, shouldn't we be increasing the size variable of the parent?
Shouldn't it be :
curr.size++;
curr = curr.getChild(ch);
Thank you in advance!
RodneyShag + 0 comments
Try walking through the add() function, with a word, such as Ben.
- We create B as a child, go to it, and increment its size
- We create e as a child, go to it, and increment its size
- We create n as a child, go to it, and increment its size
Now each letter has proper size.
lricardopena + 1 comment
Hello, I wrote something similar, but at the end I need to check whether the final node is a word, and if it is, I add one, because test #1 didn't pass without doing that.
struct Node {
    map<char, Node*> children;
    bool isEndOfWord = false;
    int number_words = 0;
};

void add_name(Node *root, string name) {
    for (int i = 0; i < name.size(); i++) {
        if (root->children[name[i]] == NULL) {
            root->children[name[i]] = new Node();
        }
        root->number_words++;
        root = root->children[name[i]];
    }
    root->isEndOfWord = true;
}

int find_name_partial(Node *root, string partial) {
    if (!root) return 0;
    for (int i = 0; i < partial.size(); i++) {
        if (root->children[partial[i]] == NULL) return 0;
        root = root->children[partial[i]];
    }
    return root->number_words + (root->isEndOfWord ? 1 : 0);
}
edwarddouglasro1 + 0 comments
I did mine with a HashMap, no trees; it passes all the test cases, Java 8. Can someone tell me if the memory is a problem?
public class Solution {

    public static final String ADD = "add";
    public static final String FIND = "find";
    private static HashMap<String, Integer> contactList = new HashMap<String, Integer>();

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        Integer n = Integer.parseInt(in.nextLine());
        for (int a0 = 0; a0 < n; a0++) {
            String[] opData = in.nextLine().split("\\s+");
            if (opData[0].equalsIgnoreCase(ADD)) {
                addSubstringsToContacts(opData[1]);
            }
            if (opData[0].equalsIgnoreCase(FIND)) {
                System.out.println(numOfPartials(opData[1]));
            }
        }
    }

    private static int numOfPartials(String partialStringToFind) {
        int result = 0;
        if (contactList.containsKey(partialStringToFind)) {
            result = contactList.get(partialStringToFind);
        }
        return result;
    }

    private static void addSubstringsToContacts(String contactName) {
        String partialValue;
        for (int j = 1; j <= contactName.length(); j++) {
            partialValue = contactName.substring(0, j);
            if (contactList.containsKey(partialValue)) {
                contactList.put(partialValue, contactList.get(partialValue) + 1);
            } else {
                contactList.put(partialValue, 1);
            }
        }
    }
}
rakeshsenapathi + 0 comments
Thanks for helping out. Just to add: if anyone is facing a timeout at test case 3, try replacing the Scanner class with BufferedReader.
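A minimal sketch of the swap (class and method names are mine; a StringReader stands in for System.in so the snippet is self-contained):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class FastInput {
    // Read one "op contact" pair per line. BufferedReader pulls whole lines
    // from an internal buffer, avoiding Scanner's per-token parsing overhead.
    static String[] nextOp(BufferedReader br) throws IOException {
        String line = br.readLine();
        return line == null ? null : line.trim().split("\\s+");
    }

    public static void main(String[] args) throws IOException {
        // In the real program you would wrap stdin instead:
        //   new BufferedReader(new InputStreamReader(System.in))
        BufferedReader br = new BufferedReader(new StringReader("2\nadd hack\nfind hac\n"));
        int n = Integer.parseInt(br.readLine().trim());
        for (int i = 0; i < n; i++) {
            String[] op = nextOp(br);
            System.out.println(op[0] + " " + op[1]);  // dispatch to trie.add / trie.find here
        }
    }
}
```

The only structural change to a Scanner-based solution is the input wrapper and the line-splitting; the trie logic stays the same.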
tinidino + 14 comments
No nodes, no tree, and it actually works. I just use the same mechanism that a search engine would use.
# all prefixes: a, ab, abc, ...
def edge_ngram(contact):
    return [contact[0:idx] for idx in range(1, len(contact) + 1)]

contact_indices = {}

def add(contact):
    for token in edge_ngram(contact):
        contact_indices[token] = contact_indices.get(token, 0) + 1

def find(name):
    return contact_indices.get(name, 0)

n = int(input().strip())
for a0 in range(n):
    op, contact = input().strip().split(' ')
    if op == 'add':
        add(contact)
    elif op == 'find':
        print(find(contact))
roman_kiselew + 1 comment
Hey, this is a great solution! Short and elegant!
matant + 1 comment
there is one problem with that solution: contact_indices becomes very big on big inputs
Quazar777 + 1 comment
size isn't a problem in Python. And time complexity is not very big, because access in a dictionary is O(1).
yanndubois96 + 1 comment
The worst-case memory complexity of this solution is O(c^L) where c is the number of possible characters (in our case 26) and L is the max length of the query (in our case 21). I agree that O(26^21) is O(1) as these are constants, but this means the worst-case memory is 26^21 * 8 bytes ~= 4 * 10^18 terabytes. I'm not sure what you mean by "size isn't a problem in Python", but I'm pretty sure you can't keep that in memory! And in this case they were nice enough to tell us that there are only lowercase English characters. Bottom line: you don't want to show that in an interview
tepperson + 0 comments
Not to nit-pick here, but your math is a little off. In Python a string takes up 40 bytes to begin with, and since our parameters are that 1 <= N <= 10^5 and 1 <= name <= 21, that means that a string with a maximum possible length of 21 only takes up ~46 bytes in memory:
import sys
sys.getsizeof('abcdefghijklmnopqrstu')
If you multiply this by the maximum possible entries, not including find operations, it comes out to ~4.6 MBs.
nthuy + 2 comments
neat! and actually used in IR systems!! It uses a postings list of tokenized edge n-grams (more on NLP Information Retrieval here: , which was my course textbook)
Here's my Java implementation:
import java.io.*;
import java.util.*;
import java.text.*;
import java.math.*;
import java.util.regex.*;

public class Solution {

    static HashMap<String, Integer> tokenMap = new HashMap<String, Integer>();

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();
        for (int a0 = 0; a0 < n; a0++) {
            String op = in.next();
            String contact = in.next();
            if (op.equals("add")) {
                add(contact);
            } else if (op.equals("find")) {
                find(contact);
            }
        }
    }

    static void add(String contact) {
        String[] splitWords = contact.split("\\s+");
        for (String w : splitWords) {
            for (int i = 0; i <= w.length(); i++) {
                String token = w.substring(0, i);
                if (tokenMap.containsKey(token)) {
                    int currOccurrences = tokenMap.get(token);
                    tokenMap.put(token, currOccurrences + 1);
                } else {
                    tokenMap.put(token, 1);
                }
            }
        }
        return;
    }

    static void find(String contact) {
        if (tokenMap.containsKey(contact)) {
            System.out.println(tokenMap.get(contact));
        } else {
            System.out.println(0);
        }
    }
}
point_to_null + 1 comment
Such an elegant solution. Amazing. So simple I'll try to implement it with BASH.
point_to_null + 2 comments
awk '
/^add/  { for (idx = 1; idx <= length($2); idx++) a[substr($2, 0, idx)]++ }
/^find/ { print(a[$2] + 0) }
'
Or, it could be a one-liner if that's your thing. Thanks tinidino, that was good.
uxvxr + 0 comments
Very elegant, point_to_null! Nothing short of brilliant, in fact. A couple of questions:
(1) Why does this still pass all tests, given that it completely ignores the first input (the first input is a number which specifies how many total add/find operations will be done)
(2) Why is this time- and space-wise more efficient than simply creating a string of names (for creating the contact list) and, for searching, merely counting grep matches, like this:
read numops
contactlist=""
for ((i = 0; i < numops; i++)); do
    read operation name
    [[ $operation == add ]] && contactlist="$contactlist $name"
    [[ $operation == find ]] && echo $contactlist | tr " " "\n" | grep -c ^$name
done
I believe this code will also pass all tests, but HackerRank gives me a timeout error on all but three tests! Is this happening because of the for loop I'm running?
johannh + 2 comments
Very nice solution. Thanks! Here is a c++ version:
vector<string> tokenize(string word) {
    int n = word.length();
    vector<string> tokens(n);
    for (int i = 0; i < n; i++) {
        tokens[i] = word.substr(0, i + 1);
    }
    return tokens;
}

int main() {
    unordered_map<string, int> contact_indices;
    int n;
    cin >> n;
    for (int a0 = 0; a0 < n; a0++) {
        string op;
        string contact;
        cin >> op >> contact;
        if (op == "add") {
            vector<string> tokens = tokenize(contact);
            for (string token : tokens) {
                contact_indices[token]++;
            }
        }
        if (op == "find") {
            cout << contact_indices[contact] << endl;
        }
    }
    return 0;
}
danielpham1988 + 0 comments
thanks. this version is wonderful. when it comes to performance we need something like this.
konsalexee + 1 comment
How much space does this solution take in comparison with the Node solutions?
nikhilpandey360 + 1 comment
If this solution passes the tests, why is mine failing? It says timeout.
contacts = []
n = int(input().strip())
for i in range(n):
    op, contact = input().strip().split(' ')
    if op == "add":
        contacts.append(contact)
    else:
        count = 0
        for x in contacts:
            if contact in x:
                count += 1
        print(count)
narape + 0 comments
Because your solution is O(n) for every query, and n is too large and/or there are too many queries.

I did mine using sorted sets, and then I counted how many elements fell within the right boundaries (O(log n) to find the boundaries).

Using tries would be even better, as a lookup would be O(k) where k is the length of the string
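A rough sketch of that sorted-set idea in Java (all names are mine; it leans on the problem's guarantee of lowercase ASCII names, using '\u007f' as a sentinel that sorts above any lowercase letter):

```java
import java.util.TreeSet;

public class SortedContacts {
    // All strings starting with `prefix` form a contiguous range in sorted
    // order, bounded by prefix itself and prefix + a high sentinel character.
    static TreeSet<String> contacts = new TreeSet<>();

    static void add(String name) {
        contacts.add(name);  // O(log n) insert
    }

    static int find(String prefix) {
        // The subSet bounds are located in O(log n); note that size() on the
        // view still walks the matches, so counting is linear in the result.
        return contacts.subSet(prefix, prefix + '\u007f').size();
    }

    public static void main(String[] args) {
        add("hack");
        add("hackerrank");
        add("hat");
        System.out.println(find("hac")); // 2
        System.out.println(find("hak")); // 0
    }
}
```

This matches the comment's complexity claim for locating the range, though a trie keeps the count precomputed and avoids walking the matches.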
saikumarm4 + 0 comments
Yes, we can get the solution in multiple ways. The objective of the question is for us to understand the behaviour of the trie and make effective use of it.
tovin07 + 0 comments
Yep, so beautiful!
We can use collections.Counter to get a shorter solution
from collections import Counter

def grams(contact):
    return [contact[:i] for i in range(1, len(contact)+1)]

counter = Counter()
n = int(input().strip())
for a0 in range(n):
    op, contact = input().strip().split(' ')
    if op == 'add':
        counter.update(grams(contact))
    elif op == 'find':
        print(counter.get(contact, 0))
jcchuks1 + 7 comments
Hi, I implemented this in Python 2.7 and am getting a Segmentation Fault error on test case 12 only.

Running my code on my machine works fine, and it passes test case 12 locally. I understand a Segmentation Fault has to do with something like accessing unallocated memory. Is there another definition of Segmentation Fault specific to HackerRank?

See code below. If you run it and figure out how to pass test case 12, please kindly share what you did differently. Thanks
EDIT: It passes all test cases including test case 12, just add
__slots__ = ["children","char","is_word","words_count"]
class Node:
    __slots__ = ["children", "char", "is_word", "words_count"]

    def __init__(self, char=None):
        self.char = char
        self.words_count = 0
        self.children = []

n = int(raw_input().strip())
root = Node()

def add(contact):
    global root
    node = root
    for char in contact:
        noOfChildren = len(node.children)
        childIndex = 0
        charNotFound = True
        while childIndex < noOfChildren:
            if char == node.children[childIndex].char:
                node.children[childIndex].words_count += 1
                node = node.children[childIndex]
                charNotFound = False
                break
            childIndex += 1
        if charNotFound:
            newCharNode = Node(char)
            newCharNode.words_count += 1
            node.children.append(newCharNode)
            node = newCharNode

def find(contact):
    global root
    node = root
    for char in contact:
        noOfChildren = len(node.children)
        childIndex = 0
        charNotFound = True
        while childIndex < noOfChildren:
            if char == node.children[childIndex].char:
                node = node.children[childIndex]
                charNotFound = False
                break
            childIndex += 1
        if charNotFound:
            return 0
    return node.words_count

for a0 in xrange(n):
    op, contact = raw_input().strip().split(' ')
    if op == "add":
        add(contact)
    elif op == "find":
        print find(contact)
kuddai92 + 2 comments
My first solution was identical and it didn't pass testcase 12:
from collections import defaultdict
from sys import stdin

# clean and concise implementation that will fail on testcase 12
# due to python wasting too much memory on deep chains
# of characters.
class Node(object):
    def __init__(self):
        self.children = defaultdict(Node)
        self.num_usage = 0

class Trie(object):
    def __init__(self):
        self.root = Node()

    def add(self, word):
        node = self.root
        for char in word:
            node = node.children[char]
            node.num_usage += 1

    def find(self, partial):
        node = self.root
        for char in partial:
            if char not in node.children:
                return 0
            node = node.children[char]
        return node.num_usage

n = int(raw_input().strip())
trie = Trie()
for a0 in xrange(n):
    op, contact = stdin.readline().strip().split()
    if op == 'add':
        trie.add(contact)
    elif op == 'find':
        print trie.find(contact)
The problem here is that Python is allocating too much memory for the storage container of each node (in your case it is a list, in my case it is a defaultdict, etc.). That is why that testcase throws an error: the grader exceeds its memory limit. The solution for Python folks is to notice that we add a lot of very long (len == 21) rare (occurring only once) words; however, when we try to find a partial, the length of the partial doesn't exceed 5 (it wasn't stated in the tutorial, I found it empirically by looking into the testcases). It reflects the normal situation with words -> the deeper we go into the trie, the smaller the frequency becomes for each node. So we are wasting a lot of memory on very deep chains in our trie. One possible solution here is to store in the node the entire remaining suffix for each such long word if this suffix occurs only once. Only if we see this suffix again do we split it into the usual trie structure. On the example from the task it will be:
- We add 'hack'. Our trie becomes: root -> 'h' -> 'ack'
- We add 'hackerrank'. Our trie becomes: root -> 'h' -> 'a' -> 'c' -> 'k' -> 'errank'
Such a scheme saves memory by preserving the whole suffix (like 'errank') instead of creating a deep chain where on each level we would have to allocate a storage container for children. This is a bit stupid for this task because the optimization doesn't change memory consumption on a global scale in terms of O(n) notation. Personally, I think that the grader memory limit should be slightly increased for this task.
In case you wonder, here is the optimized solution:
from collections import defaultdict
from sys import stdin

# the whole stupid split suffix functionality
# is only here because very rare long words will screw memory
# consumption otherwise
class Node(object):
    def __init__(self):
        self.val = ''
        self.children = defaultdict(Node)
        self.num_usage = 0

    @property
    def is_whole_suffix(self):
        return self.num_usage == 1

    @property
    def children_suffix(self):
        assert self.is_whole_suffix  # meaningful only for whole suffixes
        return self.val[1:]

    def split_suffix(self):
        assert self.is_whole_suffix  # meaningful only for whole suffixes
        child = self.children[self.val[1]]
        child.val = self.val[1:]
        child.num_usage += 1

    def add(self, char):
        if self.is_whole_suffix:  # preserve memory by concatenating
            self.val += char      # characters in suffix if the word
            return self           # is used only once
        child = self.children[char]
        if child.is_whole_suffix and len(child.val) > 1:
            child.split_suffix()
        child.val = char
        child.num_usage += 1
        return child

    def get(self, char):
        assert self.has(char)  # because defaultdict is used
        return self.children[char]

    def has(self, char):
        return char in self.children

    def has_same_suffix(self, rest_partial):
        assert self.is_whole_suffix  # meaningful only for whole suffixes
        return self.children_suffix[:len(rest_partial)] == rest_partial

class Trie(object):
    def __init__(self):
        self.root = Node()

    def add(self, word):
        node = self.root
        for char in word:
            node = node.add(char)

    def find(self, partial):
        node = self.root
        for i, char in enumerate(partial):
            if node.is_whole_suffix:
                rest_partial = partial[i:]
                return int(node.has_same_suffix(rest_partial))
            if not node.has(char):
                return 0
            node = node.get(char)
        return node.num_usage

n = int(raw_input().strip())
trie = Trie()
for a0 in xrange(n):
    op, contact = stdin.readline().strip().split()
    if op == 'add':
        trie.add(contact)
    elif op == 'find':
        print trie.find(contact)
jcchuks1 + 1 comment
Yeah, I agree. The allocated memory needs to be increased for python. I got the intuition behind the 'hack'/'hackerrank' example.
I am trying to avoid importing libraries. I am thinking instead of using an list, I could do with a linkedlist. That way I might be able to control list size.
class LinkNode: def __init__(self,SomeNode=None): self.node = SomeNode self.next = NextNodeOfSomeNode
kuddai92 + 0 comments
It may work as lists have to use additional memory to amortize possible future 'appends'. Although the whole problem is that in Python any self creating objects are just taking too much memory upon their creation in contrast to primitives. I suspect this happens because primitives like strings because they are implemented in C internally. It will be interesting to see if LinkedList can do the job.
bitmalloc + 3 comments
I don't think you need that much complexity. As the problem states, only lowercase English characters are used, so it's a constant number and you could just use a list and append as you find the chars, only using the memory allocations required at a time, iterating through children should not be an issue. In the case of the original post by jcchuks1, I think the problem may be due to other issue.
Surprisingly the following solution worked for me using a dynamic array, after failing with dicts and sets (with the proper hashing). Probably a linked list would work too:
class TrieNode(object): def __init__(self, char): self.char = char self.children = [] self.is_word = False # the number of words this prefix is part of self.words_count = 0 def get_child(self, c): for child in self.children: if child.char == c: return child return None class Trie(object): def __init__(self): self.root = TrieNode("*") # token root char def add(self, word): curr = self.root for c in word: next_node = curr.get_child(c) if next_node is None: next_node = TrieNode(c) curr.children.append(next_node) next_node.words_count += 1 curr = next_node curr.is_word = True def find(self, prefix): curr = self.root for c in prefix: next_node = curr.get_child(c) if next_node is None: return 0 # prefix not found curr = next_node return curr.words_count
jcchuks1 + 0 comments
Nice work bitmalloc! Suprisingly, I ran your code and it still gave Segmentation fault for same test case as mine.
I just added
o = Trie() for a0 in xrange(n): op, contact = raw_input().strip().split(' ') if op == "add": o.add(contact) elif op == "find": print o.find(contact)
BTW: Linkedlist didn't pass the grader either. My LinkedList implementation had more segmenation faults and with few testcases passed. I will still stick to my original implementation till I see something different since this Hackerrank Platform Dependent.
[deleted] + 2 comments
Thanks bitmalloc! This works great with ruby for all tests: #!/bin/ruby class TrieNode attr_accessor :char, :words_count, :children, :is_word def initialize (char = nil, is_word = false) @char = char @words_count = 0 # num of words this prefix is a part of @children = [] @is_word = is_word end def get_child(c) for child in @children if child.char == c return child end end return nil end end # end TrieNode class class Trie def initialize @root = TrieNode.new("*") end def add word curr = @root word.chars.each do |w| next_node = curr.get_child(w) if next_node == nil next_node = TrieNode.new(w) curr.children << next_node end next_node.words_count += 1 curr = next_node end curr.is_word = true end def find prefix curr = @root prefix.chars.each do |c| next_node = curr.get_child(c) if next_node == nil puts "0"; return # prefix not found end curr = next_node end puts curr.words_count end end # end class Trie t = Trie.new n = gets.strip.to_i for a0 in (0..n-1) instructions = gets.strip.split(' ') t.add(instructions[1]) if instructions[0] == "add" t.find(instructions[1]) if instructions[0] == "find" end
DaBigBadNuttin + 0 comments
what is the purpose of is_word...seems like it was added in there for some idea initially but turns out it wasn't needed.
DaBigBadNuttin + 0 comments
this is a really good solution, it inspired me to create one on the same idea but without classes
#!/bin/ruby
require 'json' require 'stringio' n = gets.to_i dic = {} n.times do |n_itr| opContact = gets.rstrip.split op = opContact[0].to_s.rstrip contact = opContact[1].to_s.rstrip if op == "add" word = "" contact.split("").each do |letter| word += letter if (dic[word]) dic[word] += 1 else dic[word] = 1 end end elsif op == "find" puts dic[contact] || 0 end end
rfeng2 + 0 comments
My solution is quite similar as yours, but why do I get 'runtime error' for several testcases?
class TrieNode: def __init__(self,value): self.data = value self.count = 0 self.children = {} class Trie: def __init__(self): self.root = TrieNode('') def add(self,keys): current = self.root for key in keys: if not key in current.children: node = TrieNode(key) node.count = 1 current.children[key] = node current = current.children[key] else: current = current.children[key] current.count += 1 def find(self,keys): current = self.root for key in keys: if not key in current.children: return 0 else: current = current.children[key] return current.count
AdrielKlein + 0 comments
I had the same problem as you. All my tests were passing except for a runtime error on test case #12.
If you look at some of the other posts in this discussion, someone else points out in this comment that the HackerRank machines are running out of space because you're creating too many Nodes.
As he point out, you can greatly reduce the number of Nodes created by storing multiple letters in a single node, and then expanding it only when necessary.
I created this solution and it makes the Runtime Error go away:
class Node(object): def __init__(self, letters=None): self.letters = letters self.children = {} self.num_occurences = 1 def expand(self): if not self.letters: return self.children[self.letters[0]] = Node(self.letters[1:]) self.letters = None class Trie(object): def __init__(self): self.root = Node() def add_contact(self, contact): node = self.root for i in range(len(contact)): letter = contact[i] if letter not in node.children: new_node = Node(letters=contact[i + 1:]) node.children[letter] = new_node break else: node = node.children[letter] node.expand() node.num_occurences += 1 def get_num_occurences(self, contact): node = self.root for letter in contact: if letter not in node.children: return 0 node = node.children[letter] node.expand() return node.num_occurences n = int(input().strip()) trie = Trie() for i in range(n): op, contact = input().strip().split(' ') if op == 'add': trie.add_contact(contact) else: print(trie.get_num_occurences(contact))
I hope that helps!
tangestani + 9 comments
An easy way to save memory is to use
__slots__on your objects.
For example:
class Node: __slots__ = ['size', 'children'] def __init__(self): self.size = 0 self.children = {}
rraahhuul + 1 comment
Thanks! Adding slots to the class definition worked, but it only works for new-style classes (if using Python2) i.e.
Class should extend 'object'
class Node(object):
santagada + 0 comments
my solution was simple, if having a node costs too much memory, why not just keep the dictionaries:
root = {'childrens': 0} def add_word(word): node = root node['childrens'] += 1 for l in word: n = node.get(l, None) if n is None: n = {'childrens': 0} node[l] = n node = n node['childrens'] += 1 def search_partial(prefix): node = root for l in prefix: if l not in node: return 0 node = node[l] return node['childrens']
it went all the way from 451mb of memory to 286mb on the worst case. Maybe using an object with a list and slots or a subclass of list with slots could use even less, but now I'm tired and want to sleep :)
Paul_Denton + 1 comment
My C++ solution uses less lines of code, try to simplify.
Aditya0007 + 1 comment
Can you please put up your c++ code. I'm not able to rectify test case 13.
Paul_Denton + 0 comments
This is not the place to post solutions, after all you are suposed to come up with it yourself. If you are that desperate I think you can access the leaderboard or editorial, but will get no points after that for your solution.
ericclone + 1 comment
I think there is a small ambiguity in the editorial. The problem statement is not specific about how duplicate contact should be handled. According to the editorial, no contact will be added twice. But this is not clearly stated as an assumption. I spent quite some time handling duplicate contacts which turned out to be unnecessay.
I think the statement should be more specific for the sake of correctness.
Talasu + 0 comments
Seconded. I wasn't sure whether I should increase the count for completely duplicate items. IE "3 add bob add bob find b" = 2 or 1? I implemented 2 first (easier), and it worked. But the problem statement should really say how to handle duplicates. Especially if the editorial does.
billr7 + 5 comments
Anyone else have trouble getting the Javascript version to be accepted? I get failures on 5 and 12, although sometimes aborts on those. I've tried it with iterative tries, recursive tries and other tweaks. I downloaded #5 and it has no output, so I shouldn't be getting the wrong answer here.
spark4 + 1 comment
I am having issues getting my JS solution getting accepted as well, but am getting timeout errors on 3,4 & 7.
Here is my code. Do you mind sharing what you did so that we can compare?
function main() { var trie = {}, n = parseInt(readLine()), resp = ''; for(var a0 = 0; a0 < n; a0++){ var op_temp = readLine().split(' '), op = op_temp[0].trim(), contact = op_temp[1].trim(); switch(op){ case 'add': addName(contact, trie); break; case 'find': resp += (resp.length === 0) ? findPartial(contact, trie) : '\n'+findPartial(contact, trie); break; default: break; } } console.log(resp); return; } function addName(name, trie){ var tree = trie; for(var i = 0; i < name.length; i ++){ if(!tree[name[i]]) tree[name[i]] = {}; tree = tree[name[i]]; if(i === name.length - 1 && !tree["*"]) tree["*"] = "*"; } return; } function findPartial(partial, trie){ var prefixTree = trie, numPartials = 0; for(var i = 0; i < partial.length; i ++){ if(prefixTree[partial[i]]) prefixTree = prefixTree[partial[i]]; else return numPartials; } var stack = [prefixTree]; while(stack.length > 0){ var currTree = stack.pop(); for(var letter in currTree){ if(letter === "*") numPartials++; else stack.push(currTree[letter]); } } return numPartials; }
billr7 + 2 comments
This is my most pared down iterative version.
var Trie = function() { this.children = {}; this.count = 0; }; Trie.prototype.insert = function(str) { var iterator = this; for (var i = 0; i < str.length; i++) { var c = str.charAt(i); if (iterator.children[c] === undefined) { iterator.children[c] = new Trie(); } iterator.count++; iterator = iterator.children[c]; } iterator.count++; }; Trie.prototype.find = function(str) { var iterator = this; for (var i = 0; i < str.length; i++) { var c = str.charAt(i); if (iterator.children[c] === undefined) { return 0; } iterator = iterator.children[c]; } return iterator.count; };
davencia1 + 0 comments
here is a recursive solution for you as well:
function main() { var n = parseInt(readLine()); var contactList = {'words': 0}; var addWord = function(contactList, letters){ if (!letters[0]) {return} if (!contactList[letters[0]]) { contactList[letters[0]] = {'words': 1}; } else { contactList[letters[0]].words += 1; } var newList = contactList[letters[0]]; letters.shift(); addWord(newList, letters); } var checkWord = function (contactList, contact){ if (contact.length === 1){ if (contactList[contact]) { return console.log(contactList[contact].words); } else { return console.log(0); } } if (!contactList[contact[0]]){ return console.log(0); } else { contactList = contactList[contact[0]]; contact.shift(); checkWord(contactList, contact) } } for(var a0 = 0; a0 < n; a0++){ var op_temp = readLine().split(' '); var op = op_temp[0]; var contact = op_temp[1]; if (op === 'add') { contactList.words += 1; contact = contact.split(''); addWord(contactList, contact); } else { contact = contact.split(''); checkWord(contactList, contact); } } }
fforres + 1 comment
I got a similar solution, but it's giving me an error on tests 5 and 12. I'm guessing because of memory errors.
Both mine and your solution are giving me the same errors though. How about you?
Here's my solution
function analizeContacts(){ this.contactObject = { letters: {}, data: 0 }; this.currentFinds = []; } analizeContacts.prototype.find = function(term){ const termChars = term.split(''); let currentWords = null; let searchObject = this.contactObject.letters; for( let i = 0; i < termChars.length; i++) { const el = termChars[i]; if(searchObject[el]) { currentWords = searchObject[el].data; searchObject = searchObject[el].letters; } else { currentWords = 0; break; } } if(currentWords !== null) { this.currentFinds.push(currentWords); } } analizeContacts.prototype.add = function(term) { const termChars = term.split(''); let searchObject = this.contactObject; for( let i = 0; i < termChars.length; i++) { const el = termChars[i]; if (!searchObject.letters[el]) { searchObject.letters[el] = {}; searchObject.letters[el].letters = {}; searchObject.letters[el].data = 1; } else { searchObject.letters[el].data++ } searchObject = searchObject.letters[el]; } } analizeContacts.prototype.results = function() { this.currentFinds.forEach((el) => { console.log(el); }); } function main() { var n = parseInt(readLine()); const analize = new analizeContacts(); for(var a0 = 0; a0 < n; a0++){ var op_temp = readLine().split(' '); var op = op_temp[0]; var contact = op_temp[1]; analize[op](contact); } analize.results() }
will_shaver + 4 comments
I solved it in a very different way, and very much a "hack" that is solving it only based on the problem given, not a real-world solution. We don't actually store contacts in our system, we're storing indexes. So instead of expecting to actually get the contacts back, we're just getting a count. That plus knowing searches aren't more than 6 chars long makes things trivial:
function main() { var contacts = []; var n = parseInt(readLine()); var tree = {}; for(var a0 = 0; a0 < n; a0++){ var op_temp = readLine().split(' '); var op = op_temp[0]; var contact = op_temp[1]; switch(op){ case 'find': console.log(findContacts(tree, contact)); break; case 'add': addContact(tree, contact); break; } } } function addContact(tree, contact){ const len = Math.min(contact.length + 1, 7); for(var i = 1; i < len; i ++){ const idx = contact.substr(0,i); if(tree[idx]){ tree[idx]++; } else { tree[idx] = 1; } } } function findContacts(tree, search){ return tree[search] || 0; }
This gives trees like:
{ h: 2, ha: 2, hac: 2, hack: 2, hacke: 1, hacker: 1 }
Which obvously the findContacts() function executes super fast as it is using a native primitive for the hash lookup. Passing on all accounts. No fancy tree/splitting needed.
aaronranard + 1 comment
How do you know searches are no more than 6 characters long? In the constraints listing it says 1 <= partial <= 21, so in theory 21 should be the limit, no?
I'm implementing an almost identical soution but it's running out of memory creating an index 21 characters long * 100000 words.
ion01 + 0 comments
@kuddai92 reported they were only five characters.
tb003 + 1 comment
I've re-implemented this in Java and it works great!
if(op.equals("add")) { for(int i = 1; i <= contact.length(); i++) { String sub = contact.substring(0, i); Integer currentCount = contacts.get(sub); if(null == currentCount) { contacts.put(sub, 1); } else { contacts.put(sub, 1 + currentCount); } } } else if(op.equals("find")) { Integer maybeCount = contacts.get(contact); int count = null == maybeCount ? 0 : maybeCount; System.out.println(count); }
julianabsatz + 0 comments
I tried reimplementing this and it worked, but I had to download a couple of tests. Be careful as there are tests that only do 'find' and tests that try to look for common array functions like 'pop' or 'map'
tony_z + 1 comment
My solution is pretty simple but it times out on half the tests, so I guess it's no good.
function main() { var n = parseInt(readLine()); var contacts = []; for(var a0 = 0; a0 < n; a0++){ var op_temp = readLine().split(' '); var op = op_temp[0]; var contact = op_temp[1]; if (op === 'add') { contacts.push(contact); } else if (op === 'find') { var matches = contacts.filter(el => el.indexOf(contact) == 0); console.log(matches.length); } } }
chrisjaynes + 0 comments
I'm pretty happy with my functional version. It passes all of the tests.
const root = {} function main() { var n = parseInt(readLine()); for(var a0 = 0; a0 < n; a0++){ var op_temp = readLine().split(' '); var op = op_temp[0]; var contact = op_temp[1]; if (op === "add"){ add(contact, root) } if (op === "find"){ const result = find(contact, root) console.log(result) } } } const add = (value, node)=>{ const char = value.substring(0,1) if (node[char]){ node[char].count++ } else { node[char] = { count: 1} } if (value.length > 1){ const rest = value.substring(1) add(rest, node[char]) } } const find = (value, node)=>{ if (value.length == 1){ if (node[value]){ return node[value].count } return 0 } const char = value.substring(0,1) if (node[char]){ const rest = value.substring(1) return find(rest, node[char]) } return 0 }
km003 + 0 comments
I was able to get it after building a map of all the possible partials, approaches that just tried to iterate over all the names and check matches with substring or regex would timeout.
class ContactBook { constructor() { this.tree = new Map(); } add(name) { let p = ""; for (let c of name) { p += c; let node = []; if (this.tree.has(p)) { node = this.tree.get(p); } node.push(name); this.tree.set(p, node); } } find(partial) { if (!this.tree.has(partial)) { return 0; } return this.tree.get(partial).length; } }
gussy + 3 comments
Python 3 - bit of a hack but it works.
import bisect n = int(input().strip()) contacts = [] for a0 in range(n): op, contact = input().strip().split(' ') if op == 'add': bisect.insort_left(contacts, contact) if op == 'find': left = bisect.bisect_left(contacts, contact) right = bisect.bisect_left(contacts, contact + 'zzzzzzz', left) print(right - left)
aod7br + 1 comment
Yeah it works and passes all testcases, but the right cota is wrong... there could be
contact+'zzzzzzzaaa'in the list and you would not count it. I think the correct way to calculate the rightmost element should be
right=bisect_right(contacts, chr(ord(contact[0]+1)), left)
and having stablished the right cota, you would still need to check if all elements between
leftand
rightdo start with contact:
for s in islice(contacts, left, right): if s.startswith(contact): total+=1 return total
please someone correct me if I am wrong
DerPferd + 1 comment
I think here is a simpler way to solve this problem. Change the last character to the next character than subtract left from right:
left = bisect.bisect_left(contacts, contact) prefix_next = contact[:-1] + chr(ord(contact[-1])+1) right = bisect.bisect_left(contacts, prefix_next) print right-left
marc_torsoc + 4 comments
No need of any complicated structure, just create a map with each of the possible substrings, and counters that increment for each new contact. My solution in C++ (in Python doing the same is straightforward):
int main(){ int n; map<string,int> agenda; cin >> n; for(int a0 = 0; a0 < n; a0++){ string op; string contact; cin >> op >> contact; if(op == "add"){ for(int i=0;i<=contact.length();i++){ agenda[contact.substr(0,i)]++; } } else{ cout << agenda[contact] << endl; } } return 0; }
wrinl3 + 0 comments
Here's a similar stupid (but working) solution in Python 3. I doubt you should solve this problem using anything other than a Trie during an actual interview, tho - the problem is way too obviously geared towards it.
from collections import Counter c = Counter() def add(name): for i in range(1, len(name)+1): c[name[:i]]+=1 def find(partial): print(c[partial]) n = int(input().strip()) for a0 in range(n): op, contact = input().strip().split(' ') if(op == 'add'): add(contact) else: find(contact)
scottmk + 0 comments
But this is far less memory efficient than implementing a trie. You're creating way too many strings. If you implemented a simple trie where each node had the letter it represented, an integer of all complete words its children makes, and a set of all its child nodes, the space would be significantly smaller and the lookup still linear.
EDIT: I take it back. Lookup in your solution is constant, not linear. So your solution is faster but uses more space. The map you wrote has a space complexity of roughly O(kn), where k is the average number of letters in a word and n is the number of words. In a solution with a trie, the space complexity is O(n), where n is the total number of unique letters. More or less. I think in this case it's worth saving on space and sacrifice lookup.
Paul_Denton + 0 comments
Haha maybe they should have a very large test case that exceeds the memory limit in this case.
cool_shark + 5 comments
My solution in C++
struct Node { unordered_map<char, Node*> children; int count; Node() { count = 0; } }; class Trie { private: Node *_root; public: Trie() { _root = new Node(); } void add(string str) { Node *node = _root; for (char c : str) { if (node->children.find(c) == node->children.end()) { Node *newNode = new Node(); node->children[c] = newNode; node = newNode; } else { node = node->children[c]; } node->count++; } } int find(string str) { Node *node = _root; for (char c : str) { if (node->children.find(c) == node->children.end()) { return 0; } else { node = node->children[c]; } } return node->count; } };
melroy_tellis + 5 comments
This wouldn't work if contact names are repeated for the ADD operation.
ameyajoshi + 1 comment
That can be resolved by having a boolean variable to indicate the end of the word.
elenaa_ursu + 2 comments
I solved this by returning from the add operation if the string already exists in the trie.
void add(string str) { if (find(str) != 0) { return; } ... }
ameyajoshi + 0 comments
Well this wont work.
we add "facebook" to trie , at this point node->count is 1 .
Next we try to add "face" , in your case it wont add this word , by that i mean the node->count will remain 1 , when i expect node count for nodes f,a,c,e to increment to 2.
so if try to do a substring match fa, yours will always return 1 ,when i expect it to be 2
melroy_tellis + 0 comments
Yes, your solution would work only if the words to be added appear in lexicographical order. The most general way is to add an end-of-word flag for each node and update the count only if it's not set for the last node of the word.
rishabh_malviya + 1 comment
I don't understand how the new Nodes you create still live after you exit the function.
All the
Node* newNode = new Node()
create new Node pointers in the scope of the function, but as soon as you exit, they should be destroyed, right?
richa_dohre + 1 comment
no, it can only be deleted if we delete it using 'delete' operator(since it is allocated in heap area not in stack), or at the termination of program.
rishabh_malviya + 2 comments
Okay, so a variable constructed with the
new
command can only be deleted by
delete
irrespective of the scope in which it was declared.
Is this correct?
melroy_tellis + 0 comments
Yes. That's how dynamic memory allocation works in C++. If the memory is not deallocated by the program, it will continue to occupy heap space and cause a memory leak.
Paul_Denton + 0 comments
I ended up with:
class Trie { struct Node { int nWordsWithThisPrefix = 0; bool isCompleteWord = false; map<char, Node> children; }; public: void addWord(string word) { Node* node = &_root; for (char c: word) { node = &(node->children[c]); node->nWordsWithThisPrefix++; } node->isCompleteWord = true; } int countWordsWithPrefix(string prefix) const { const Node* node = &_root; for (char c: prefix) { auto result = node->children.find(c); if (result == node->children.end()) { return 0; } else { node = &(result->second); } } return node->nWordsWithThisPrefix; } private: Node _root; }; int main() { int n = 0; cin >> n; Trie trie; for (int i=0; i<n; ++i) { string command, s; cin >> command; cin >> s; if ("add" == command) { trie.addWord(s); } else if ("find" == command) { cout << trie.countWordsWithPrefix(s) << "\n"; } } return 0; }
Paul_Denton + 1 comment
This has a memory leak, right? Unless you implement a destructor for the Trie class. Why not keep the Nodes in the map instead of just pointers to the nodes, containers allocate memory anyways.
Paul_Denton + 0 comments
Ah just noticed that it already had been discussed. You should read up on RAII ;)
nfollett89 + 1 comment
Is it just me, or is the limitation on resources forcing the solutions to be overly hacky/ugly?
Obviously if you aren't using tries then you're not going to make it, but my best implementation had a few test cases fail due to segmentation faults. I suspected this might have to do with my heavy use of dictionaries (Python 2) to store the children in a children[char] = trie_node() fashion, so I turned it into a list... I no longer got segmentation faults but now it times out even though I have a lookup array so that I don't have to iterate through the children like crazy.
After reading through some discussion posts that passed, it seems most/all of them had to use some method which goes against coding best practices in the name of simply getting it to work.
I hope I'm wrong - perhaps I'm just not seeing the right optimization opportunity. Anyone else struggling with this aspect of the problem?
davissallen + 0 comments
One year later... same problem. Using Python 2 with tries and dictionaries to hold children seems like a great approach but still does not pass 3/15 test cases.
sheva_ukraine97 + 1 comment
Thanks to C++'s STL, there're only 2 lines of code.
int main(){ int n; cin >> n; set<string> contacts; for(int a0 = 0; a0 < n; a0++){ string op; string contact; cin >> op >> contact; if (op == "add") contacts.insert(contact); else cout << distance(contacts.lower_bound(contact), contacts.upper_bound(contact + char('z' +1))) << endl; } return 0; }
Sort 498 Discussions, By:
Please Login in order to post a comment | https://www.hackerrank.com/challenges/ctci-contacts/forum | CC-MAIN-2019-43 | refinedweb | 7,866 | 59.19 |
Monitoring server usage
You can monitor activity in your server using Amazon CloudWatch and AWS CloudTrail. For further analysis, you can also record server activity as readable, near real-time metrics.
Topics
Enable AWS CloudTrail logging
You can monitor AWS Transfer Family API calls using AWS CloudTrail. By monitoring API calls, you can get useful security and operational information. For more information about how to work with CloudTrail and AWS Transfer Family, see Logging and monitoring in AWS Transfer Family.
If you have Amazon S3 object level
logging enabled,
RoleSessionName is contained in
principalId as
[AWS:Role Unique
Identifier]:username.sessionid@server-id. For more information about
AWS Identity and Access Management (IAM) role unique identifiers, see Unique
identifiers in the AWS Identity and Access Management User Guide.
The maximum length of the
RoleSessionName is 64 characters. If the
RoleSessionName is longer, the
will be truncated.
server-id
Logging Amazon S3 API calls to S3 access logs
If you are using Amazon S3
access logs to identify S3 requests made on behalf of your file transfer
users,
RoleSessionName is used to display which IAM role was assumed to
service the file transfers. It also displays additional information such as the user
name, session id, and server-id used for the transfers. The format is
[AWS:Role
Unique Identifier]:username.sessionid@server-id and is contained in
principalId. For more information about IAM role unique identifiers,
see Unique
identifiers in the AWS Identity and Access Management User Guide.
Log activity with CloudWatch
To set access, you create a resource-based IAM policy and an IAM role that provides that access information.
To enable Amazon CloudWatch logging, you start by creating an IAM policy that enables CloudWatch logging. You then create an IAM role and attach the policy to it. You can do this when you are creating a server or by editing an existing server. For more information about CloudWatch, see What Is Amazon CloudWatch? and What is Amazon CloudWatch Logs? in the Amazon CloudWatch User Guide.
To create an IAM policy
Use the following example policy to create your own IAM policy that allows CloudWatch logging. For information about how to create a policy for AWS Transfer Family, see Create an IAM role and policy.
{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "logs:CreateLogStream", "logs:DescribeLogStreams", "logs:CreateLogGroup", "logs:PutLogEvents" ], "Resource": "arn:aws:logs:*:*:log-group:/aws/transfer/*" } ] }
You then create a role and attach the CloudWatch Logs policy that you created.
To create an IAM role and attach a policy
In the navigation pane, choose Roles, and then choose Create role.
On the Create role page, make sure that AWS service is chosen.
Choose Transfer from the service list, and then choose Next: Permissions. This establishes a trust relationship between AWS Transfer Family and the IAM role.
In the Attach permissions policies section, locate and choose the CloudWatch Logs policy that you just created, and choose Next: Tags.
(Optional) Enter a key and value for a tag, and choose Next: Review.
On the Review page, enter a name and description for your new role, and then choose Create role.
To view the logs, choose the Server ID to open the server configuration page, and choose View logs. You are redirected to the CloudWatch console where you can see your log streams.
On the CloudWatch page for your server, you can see records of user authentication
(success
and failure), data uploads (
PUT operations), and data downloads
(
GET operations).
Using CloudWatch metrics for Transfer Family
You can get information about your server using CloudWatch metrics. A metric represents a time-ordered set of data points that are published to CloudWatch. When using metrics, you must specify the Transfer Family namespace, metric name, and dimension. For more information about metrics, see Metrics in the Amazon CloudWatch User Guide.
The following table describes the CloudWatch metrics for Transfer Family. These metrics are measured in 5-minute intervals.
Transfer Family dimensions
A dimension is a name/value pair that is part of the identity of a metric. For more information about dimensions, see Dimensions in the Amazon CloudWatch User Guide.
The following table describes the CloudWatch dimension for Transfer Family. | https://docs.aws.amazon.com/transfer/latest/userguide/monitoring.html | CC-MAIN-2020-50 | refinedweb | 699 | 55.64 |
W3C Says Final HTML5 Spec is Due in 2014
W3C Says Final HTML5 Spec is Due in 2014
Join the DZone community and get the full member experience.Join For Free
Verify, standardize, and correct the Big 4 + more– name, email, phone and global addresses – try our Data Quality APIs now at Melissa Developer Portal!
Silverlight 4 Gets Big UpgradeGDR3, the newest update to Silverlight 4, has been released this week by Microsoft. Timestamp issues, Visual Studio IDE crashes, and media playback errors are only a few of the problems addressed in this latest update. Developers are encouraged to request that their clients upgrade to the new Silverlight release to fix the memory leak issues common to previous versions of Silverlight.
The GA release of
Google's Plugin for Eclipse and Google Web Toolkit 2.2 is out this week and it includes HTML5 support and GWT Designer integration. Google's plugin also showcases an enhanced CellTable widget and a sweet
Canvas demo. Developers should begin updating their version of Java to 1.6 to avoid future compatibility issues.
Announcing Google's New Plugin for Eclipse and GWT 2.2
The new, fully-interactive, version of PhpStorm provides support for PHP 5.3 namespaces and closures. It also has support for
ECMAScript 5. It's newly extended debugging features provide a solid foundation for software development efforts. PhpStorm 2.0 can be downloaded
here.
JetBrains Releases version 2 of its "Intelligent PHP IDE"
Spring 3.1- What's Up and ComingMore exciting Spring news! A quick look at Spring 3 }} | https://dzone.com/articles/w3c-says-final-html5-spec-due | CC-MAIN-2018-43 | refinedweb | 260 | 58.28 |
the code is :
using UnityEngine;
using UnityEngine.UI;
public class InventorySlot : MonoBehaviour { // Scrpit A
public bool[] buttonState = new bool[2];
void Update ()
{
if (Input.GetKeyDown (KeyCode.Alpha1) || Input.mouseScrollDelta.y > 0)
{
buttonState [0] = true;
buttonState [1] = false;
}
if (Input.GetKeyDown (KeyCode.Alpha2) || Input.mouseScrollDelta.y < 0)
{
buttonState [0] = false;
buttonState [1] = true;
}
public bool ButtonIsActive (int index)
{
return buttonState[index]; // true or false from the update funaction
}
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
using System.Collections;
using UnityEngine;
public class CollectShiled : MonoBehaviour { // Scrpit B
public InventorySlot inventory; // I put it in the incpector
void Update ()
{
if (Input.GetKeyDown ("space") && inventory.ButtonIsActive(index)) // index =1 or 0
{
// Do something
}
}
the problem is in the script A always return true !
public bool ButtonIsActive (int index)
{
return buttonState[index];
}
Might not be relevant but where's the variable index stored in your CollectShiled class? There seem to be some code missing as that wouldn't compile as is.
Yes , variable index in CollectShiled class , but i removed unnecessary part to make it more clear if you want to see full code i will send it
int index = Shiled.index; // where index is 1 or 0
Answer by gamingpugsstudios
·
Sep 08, 2018 at 11:51 AM
If you want it to be only true if the key is held down you have to Input.GetKey Instead and create an else statement that sets the bools to false.
I want to this ButtonIsActive() to return the current value of buttonState[] but it's always return true !
So it does return true no matter what key you pressed? Have you tried Debug.Log the mouse.Scrolldata?
It's work correctly when i print the value from the script A and the mouse.scroll work correctly but when i call the function ButtonIsActive() from script B it's return true always . I think the problem in the array do you have any idea ?
I tried it on my pc and it works perfectly fine for me. If I press Alpha1 the element 0 gets true and element 1 gets false. If I press Alpha2 the element 1 gets false and element 1 gets true. I made a Debug if I press space and the current index is true, and it works.
What does the scene look like? Are the two scripts on the same gameobject? Maybe you acess a disabled inventorySlot script and because of that the buttonState never changes? Or does maybe another script change the buttonState? I dont really see another reason why it should only return.
Check if array contains combinations of elements
0
Answers
Finding the amount of true booleans - Arrays
1
Answer
how to check all elements in array
2
Answers
[solved] c# script : get a return val (waypoint example)
2
Answers
Activating Bool Array in sequence.
2
Answers | https://answers.unity.com/questions/1551004/array-of-bool-doesnt-change-in-other-script.html?sort=oldest | CC-MAIN-2019-35 | refinedweb | 461 | 64.71 |
16 April 2012 03:57 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
The cracker was shut on the morning of 6 April following a pipeline leak that led to an explosion at the site.
After the outage, CPC raised the operating rates at its other two naphtha crackers in Linyuan to 100% of capacity.
The company source said the eventual run rates for the No 2 and No 3 crackers in Linyuan will depend on how the No 5 cracker operates, after it is restarted.
CPC was forced to reduce ethylene and propylene supplies to domestic customers in the wake of the outage, but the source said the producer did not import any ethylene and propylene spot cargoes to make up for the | http://www.icis.com/Articles/2012/04/16/9550486/taiwans-cpc-corp-begins-process-of-restarting-no-5.html | CC-MAIN-2015-18 | refinedweb | 123 | 64.04 |
This issue described here is actually more of a Visual Basic issue than SharePoint, but it often pops up in SharePoint web part deployment.
The issue: You create a web part in C# with a “namespace Max.Demo”, build it, deploy the DLL, add it to safe controls and go to Site Actions, Site Settings, Web Part Gallery, New Web Parts. It shows up in the list just fine with a name like: assemblyname.namespacename.classname.
You try the exact same steps above using Visual Studio, and the web part does not show up in New Web Parts.
The problem is with the namespace and how you entered it into <SafeControls>, actually it’s because you did not know the real namespace name.
In C# when you add “namespace Max.Demo” to a class then that is the namespace name generated. In VB when you add “namespace Max.Demo” you are creating a namespace name that is built from two parts, what you typed added to what is specified in the project’s properties screen as the “Root namespace”.
So in VB if your properties look like this:
and your code looks like this:
Namespace Max.Demo
Public Class VBWebPart
Inherits WebPart
VB creates a namespace named: MyWebPartProject.Max.Demo
So for VB projects:
- Only set your namespace in Project Properties
- Or delete the namespace in Project Properties (leave it blank) and set the namespace in code
- Or remember that the namespace in Project Properties is combined with what you set in code
If you want to see just how everything got named then create a little project with this code:
Imports System.Reflection
Module Module1
Sub Main()
Dim a As Assembly = Assembly.LoadFrom("C:\yourprojectpath\vbwebpart.dll")
Console.WriteLine("Fullname:" & a.FullName)
Console.WriteLine()
For Each t As Type In a.GetExportedTypes
Console.WriteLine(t.Namespace + " " + t.Name)
Console.ReadLine()
End Sub
End Module
.
1 comment:
Thanks Mike, I have spent many hours on google trying to find a solution. Never thought of placing the project namespace in front of those in the class, as all web examples are founded on C#. Paul | http://techtrainingnotes.blogspot.com/2010/01/sharepoint-new-vb-web-part-not-showing.html | CC-MAIN-2018-47 | refinedweb | 354 | 61.46 |
Another python newbie here.
Currently, I’m using Jupypter notebook in anaconda framework.
In order to proceed my projects using iPython Notebook,
I need to run some of my python scripts (tp.py file) on the notebook.
from tp import wordtoplural
Since, it makes life a lot easier instead of defining all function in notebook itself.
How can I do it, currently importerrors occurs on my code.
ImportError: cannot import name wordtoplural
- iPython notebook and python script(.py) are in the same folder.
- Added empty
__init.py__file on that directory too.
Best answer
Make sure your ipython notebook is in the same folder as your python script. Also, you may have to create an empty
__init__.py file in the same folder as your python script to make the import work.
Since you will probably be modifying your python script and test it directly on your notebook, you may be interested in the autoreload plugin, which will automatically update the imported modules with the changes you have just made in your python scripts:
%load_ext autoreload %autoreload 2
Note that you need to place your imports after having called the autoreload plugin.
Note also that in some cases you may need to add this in your IPython notebook at the very top of the first cell (after the % magics):
from __future__ import absolute_import
Limitations: autoreload works well in general to reload any part of a module’s code, but there are some exceptions, such as on new class methods: if you add or change the name of a method, you have to reload the kernel! Else it will continue to either load the old definition of this method or fail in the resolution (method not found), which can be particularly confusing when you are overloading magic methods (so in this case the default magic method will be called instead of your definition!). Then once the method name is defined and the kernel reloaded, you can freely modify the code of this method, the new code will be automagically reloaded.
Also it will fail if you use super() calls in the classes you change, you will also have to reload the kernel if this happens. | https://pythonquestion.com/post/want-to-save-run-python-script-in-jupyter-notebook-anaconda/ | CC-MAIN-2020-45 | refinedweb | 363 | 68.3 |
core java - Java Beginners
core java When we will use marker interface in our application? Hi friend,
Marker Interface :
In java language programming... an error. To make more clearly understand the concept of marker interface you
Core java linked list example
Core java linked list example What is the real time example for linked list... this with one example
Core Java
Core Java Is Java supports Multiple Inheritance? Then How ?
There is typo the question is ,
What is Marker Interface and where it can... Interface.
Here is an example:
interface markerImp {
}
class MarkerTest
core java
core java public class Sample{
public static void main(String args[]){
int a;
}
}
Q.why the above code is not compiled ?
Q.why the below... as there is no error in code, you have created a class Sample and in main method, you have
core java
core java java program using transient variable
Hi... be saved when the object is archived.
Here is an example:
public class..., visit the following link:
core java
core java what does the term web container means exactly?please also give some examples
Hi,
In Java Platform, Enterprise Edition..., which is used to deploy and execute the servelet and JSP. Example of web
Java util package Examples
core java
core java hello sir....i have one table in which i have 3 columns First Name,Middle Name,Last Name...............but i have to show single name in datagrid view in jsp page using concat string tokenizer......for example-
First
core java
core java class A
{
int a=3;
int b=4;
public A(int a,int b)
{
this.a=a;
this.b=b;
}
void display()
{
System.out.println(a);
System.out.println(b... the parrentheses .............example is--------suppose display is looking like
Core Java
Core Java Hi,
I have written a board program using Java Swing... class Example extends JFrame implements MouseMotionListener{
JPanel panel... Example();
}
public Example() {
super("Example");
panel = new JPanel(new
core java
core java
class Arrayd
{
static int max(int x[]){
int i,max;
max=x[0];
for(i=0;i<x.length;i... with this error! You have written main method outside the class. Put
Core Java
Core Java Hi,
can one please share the code to count the occurance of each charaters in a given String??
For example: String is "aabbcad"
o/p will be:- a3b2c1d1
or a=3, b=2,c=1,d=1
public static void main(String
Core Java
Core Java Hi,
I am trying to remove duplicated charater from a given string without using built in function, but getting some issue in that. can...;Here is an example that removes the duplicate character from the string.
import,
can any one please share the code to find the occurance of characters in a string??
ex:- aaabb
o/p:
a=3
b=2
Here is an example that count accepts the string from the user and count the occurrence
Core Java
Core Java Q. A producer thread is continuously producing integers...() {
while(true) {
th.get();
}
}
}
class Example {
public static void main...);
}
}
For more information, visit the following link:
core java - Java Interview Questions
core java What is the purpose of the System class? what... provided by the System class are standard input, standard output, and error... before you can define class methods.
The following example defines
Map | Business Software
Services India
Java Tutorial Section
Core Java... |
Java Swing
Tutorials | Java Servlet Tutorials |
J2EE Tutorials
Core Java... example |
HashMap class in java |
contains method of hashset in java
core java - Java Beginners
core java what is object serialization ?
with an example
Hi Friend,
Please visit the following link:
Thanks
Java Util Package - Utility Package of Java
Java Util Package - Utility Package of Java
Java Utility package is one of the most commonly used packages in the java
program. The Utility Package of Java consist
core java collection package - Java Interview Questions
....
Thanks...core java collection package why collection package doesnot handle..., Java includes wrapper classes which convert the primitive data types into real
Core java - Java Beginners
Core java Hello sir/madam,
Can you please tell me why multiple inheritance from java is removed.. with any example..
Thank you...://
Thanks Hi
core java - Java Beginners
core java can we write a program for adding two numbers without... this example
it's about calculating two numbers
Java Program Code for calculating
Core Java - Java Beginners
Core Java Can u give real life and real time examples of abstraction, Encapsulation,Polymarphism....? I guess you are new to java and new to roseindia as well..
anyways given is the encapsulation example http
core java - Java Beginners
core java catch(Exception e)
{
System.out.println(e);
}
what... to handle a run-time error. A try block must have at least one catch block... on as if the error had never happened.
For more information, visit
core
core where an multythread using
Please go through the following link:
Java Multithreading
core java - Java Beginners
core java
How to reverse the words in a given sentence...{
System.out.println("This is reverse Example!") ;
BufferedReader buff = new...);
}
}
-------------------------------------------------------
Read for more information.
Thanks
Core Java - Java Beginners
Core Java How can we explain about an object to an interviewer ... and class methods.
For example :
class Myclass {
String str... to :
Core Java Hello World Example
are used in this program. This is a basic example of core Java that
explains how... a simple core Java "Hello World"
application. The Hello World application... explains you how to start writing of your first Java class.
Example
First we
core java - Java Beginners
core java pl. tell me about call by value and call by reference with example program.
thanks Hi Friend,
call By value
When you... change the fields in the caller?s objects they point to. In Java, you cannot
core java - Java Beginners
core java how to reverse a the words in the sentence for example.....
prashu
prashobvee@gmail.com Hi friend,
i am sending running example...://
Thanks & Regards
Amardeep
Core Java-ArrayList
Core Java-ArrayList How do i find duplicates in ArrayList. First i add some elements to ArrayList, then how do i find the duplicates and show the duplicate elements. Give an example
core java - Java Beginners
core java Hi Guys,
what is the difference between comparable and comparator i want with source code with example?????????????
plzzzzzzzzzzz help me its very urgent in advance thanks If you want to sort
core java
core java how to display characters stored in array in core java
Core Java Interview Questions!
Core Java Interview Questions
...: Transient variable can't be serialize. For example if a
variable is declared... if effectively replaces.
Example of classes: HashSet, HashMap, ArrayList
core java
core java basic java interview question
core java - Development process
core java what is an Instanciation? Hi friend,
When....
The process is known instantiation
For example :
class... to :
core java
core java i need core java material
Hello Friend,
Please visit the following link:
Core Java
Thanks
CORE JAVA
CORE JAVA CORE JAVA PPT NEED WITH SOURCE CODE EXPLANATION CAN U ??
Core Java Tutorials
Core Java Exceptions - Java Beginners
Core Java Exceptions HI........
This is sridhar... Error? How can u justify? Hi friend,
Read for more information.
Thanks
Java compilation error - Java Beginners
Java compilation error Hello,
i am getting an error while running simple core java program on command prompt.java is installed on my pc.... For example if java is installed at c:\Program Files\Java\jdk1.6.0_01 a class
core java
core java how can we justify java technology is robust
core java - Java Interview Questions
core java Is there any Compile time exceptions in java? Hi Friend,
If the error occurs at runtime then it is known as Exception like class not found exception. IF the error occurs at compile time then it is said
Core JAVA - Development process
Core JAVA hai
This is jagadhish.I have a doubt in core java.The...);
}
}
------------------------------------------------------
Read for more information with Example.
Thanks & regards
Amardeep
Core Java - Java Interview Questions
Throw Keyword in Core Java Why to use Throw Keyword in Core Java... that function, which rises an checked exception it gives the compile time error. Use...-catch, otherwise it gives the compile time error
core java
core java surch the word in the given file
CORE JAVA
CORE JAVA What is called Aggregation and Composition
Core Java Topics
Core Java Topics
Following are the topics, which are covered under Core Java. In the roseindia website for core java, programmer will find completed details of every core java topic and simple example of core java that will help you learn
core Java programming
core Java programming Hi,
Thanks for ur previous answers.... I...;TIME:HHMMSS;ITEM1:QTY:PRICE;ITEM2:PRICE2;]
They are looking for a Java Program... given dates
For Example
ITEM NOOFCUSTOMERS
TOT_QTY
22 YYMMDD YYMMDD ITEM - Development process
core java Hi
i want core java code for this problem.its urgent... is divided up into a grid to simplify navigation. An
example position might... visit to :
Thanks.
java util date - Time Zone throwing illegal argument exception
java util date - Time Zone throwing illegal argument exception Sample Code
String timestamp1 = "Wed Mar 02 00:00:54 PST 2011";
Date d = new Date...());
The date object is not getting created for IST time zone. - Java Interview Questions
Core Java is there any use of private constructors?is yes ,how to use those, example please, Hi
public final class MyClass...
}
}
----------------------------
Read for more details.
core java
core java please give me following output
core java
core java what is the use of iterator(hase next
core java
core java In java primitive variables will get its default value automatically after declaration. Then why it is mandatory to initialize a variable before using
Core Java
Core Java Hi,
Can any one please share the code for Binary search in java without using builtin function
Core Java
Core Java Please write a Java Program to design login form and store the values in file & validate and display the MainForm
Core java - Java Interview Questions
Core java Hai this is jagadhish.Iam learning core java.In java1.5 I...;
assert expression1 : expression2;
For example
class AssertionDemo...;= amount;
return balance - amount;
}
}
In above given example, main
core java - Java Beginners
core java i want to get java projects in core java
core - Java Interview Questions
core is java is passed by value or pass by reference? hi... variables affect the caller?s original variables. Java never uses call by reference. Java always uses call by value.
public class CallByValue
Core Java
Core Java How to load class dynamically in java ?
To load class dynamically you can use Class class method
Class.forName("abc.xyz.MyClass");
This method load the given class at run time - Java Interview Questions
core java What are transient variables in java? Give some examples Hi friend,
The transient is a keyword defined in the java... relevant to a compiler in java programming language likewise the transient Tell me some Scenarios why you go for Abstract Class and Interface
core java
core java how to compare every character in one string with every character in other string
core java
core java Hi,
Can any one please share a code to print the below:
1
121
12321
1234321
Core java
Core java how to convert reverse of String without using String function
core java
core java Is it possible to create a shallow copy of an arraylist instance? if yes then please post the code
Core Java - Java Interview Questions
Core Java Hi
What is the use of private static in java and when... value of
the variable is provided to you. For example in the class... side the class :
In the below example "staticMethodA()" not accessible
Core Java - Java Interview Questions
Core Java Hi
What is the use of private static in java and when... the new value of
the variable is provided to you. For example in the class... not out side the class :
In the below example "staticMethodA()" not accessible
error in sample example
error in sample example hi can u please help me
XmlBeanFactory class is deprecation in java 6.0
so, pls send advance version class
Use XmlBeanFactory(Resource) with an InputStreamResource parameter
Core java
Core java How to use hyperlink that is href tag in core java without swing, frames etc.
My code is
StringBuffer oBodyStringBuffer = new StringBuffer("Message Classification: Restricted.\n\n | http://www.roseindia.net/tutorialhelp/comment/20271 | CC-MAIN-2013-48 | refinedweb | 2,073 | 55.95 |
I'm trying to get my head around pointers so I have been reading and trying things.
Currently I'm trying to put member functions outside the class, but be able to access a private variable from inside the main function. From what I've read this can't be done, unless I have a pointer inside the class.
So I have tried putting a function that returns a pointer to my private variable inside the class, with all other member functions outside the class.
However when I try to read the private variable I run into compile problems.
I've tried reading other posts and trying their solutions, but I'm just getting more confused.
Ideally, I would like to have a member function named "area" outside the class that is passed a pointer to the private variable "radius" and returnss the area.
This is what I currently have:
#include <iostream>
#define PI 3.14159;
using namespace std;
class Circle{
private:
float radius;
protected:
float *radPtr = &radius;
public:
float getRadiusPtr(){
return *radPtr;
}
void getRadius();
void showRadius();
};
void Circle::getRadius() {
cout << "Enter Radius: "<< endl;
cin >> radius;
}
void Circle::showRadius(){
cout << "Radius: " << endl;
}
float Circle::area(*radPtr){
float ar;
float r = this.getRadiusPtr();
ar = PI * r * r;
}
int main(){
Circle c1;
c1.getRadius();
c1.showRadius();
cout << "Area: " << a << endl;
return 0;
}
You can't use
&radius outside a member function, since there's no specific object whose
radius member it should get the address of.
Furthermore, your
getRadiusPtr() function isn't even returning a pointer. It's just returning the value of
radius.
Try:
float *getRadiusPtr() { return &radius; }
Then you can use it like:
int main() { Circle c1; c1.getRadius(); c1.showRadius(); float *r = c1.getRadiusPtr(); float a = PI * *r * *r; cout << "Area: " << a << endl; return 0; } | https://codedump.io/share/KTL5QMDK77W8/1/trying-to-use-pointer-to-access-private-variable-outside-class-with-member-functions-outside-class | CC-MAIN-2017-34 | refinedweb | 296 | 62.48 |
Mizu WebPhone
Contents
Features, technology and licensing. 4
Integration and customization. 14
User interface Skin/Design. 15
Integration with server side applications. 18
Engine related settings. 26
Call divert and other settings. 35
User interface related settings. 40
The Mizu WebPhone is a universal SIP client to provide VoIP capability for all browsers using a variety of technologies compatible with most OS/browsers. Since it is based on the open standard SIP and RTP protocols, it can inter-operate with any other SIP-based network, allowing people to make true VoIP calls directly from their browsers. Compatible with all SIP softphones (X-Lite, Bria, Jitsi others), devices (gateways, ATA’s, IP Phones, others), proxies (SER, OpenSIPS, others), PBX (Asterisk, Elastix, Avaya, 3CX, Broadsoft, Alcatel, NEC, others), VoIP servers (Mizu, Voipswitch, Cisco, Huawei, others), service providers (Vonage, others) and any SIP capable endpoint (UAC, UAS, proxy, others).
The Mizu WebPhone is truly cross-platform, running from both desktop and mobile browsers, offering the best browser to SIP phone functionality in all circumstances, using a variety of built-in technologies referred as “engines”:
· NS (Native Service/Plugin)
· WebRTC
· Java applet
· Flash
· App
· P2P/Callback
· Native Dial
The engine to use is automatically selected by default based on OS, browser and server availability (it can also be set manually from the configuration, or the priorities can be changed).
The webphone can be used with the provided user interface (as a ready to use softphone or click to call button) or as a JavaScript library via its API.
The provided user interfaces are implemented as simple HTML/CSS and can be fully customized, modified, extended or removed/replaced.
The webphone is an all-in-one VoIP client module which can be used as-is (as a ready to use softphone or click to call) or as a JavaScript library (to implement any custom VoIP client or add VoIP call capabilities to existing applications). You can create custom VoIP solutions from scratch with some JavaScript knowledge or use it as a turn-key solution if you don’t have any programming skills as the webphone is highly customizable by just changing its numerous settings.
1. Download
The package can be downloaded from here: webphone download.
It includes everything you need for a browser to SIP solution: the engines, the JavaScript API, the skins and also a few usage examples.
2. Deploy
You can find the requirements here which need to be fulfilled to be able to use the webphone.
Unzip and copy the webphone folder into your webserver and reference it from your HTML (for example from your main page), or open one of the included HTML files in your browser by specifying its exact URL. For example:
Note2: If you wish to use (also) the WebRTC engine then your site should be secured (HTTPS with a valid SSL certificate). Latest Chrome and Opera requires secure connection for both your website (HTTPS) and websocket (WSS). If your website doesn’t have an SSL certificate then we can host the webphone for you for free or you can install a cheap or free certificate.
Alternatives:
o You can also test it without a webserver by launching the html files from your desktop, although some engines might not work correctly this way
o You can also test it by using the online demo hosted by Mizutech website, but in this case you will not be able to change the configuration (you can still set any parameters from the user interface or from URL)
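Referencing the library from a page on your site can look like the following sketch (the path assumes the webphone folder was copied to the web root; adjust it to your deployment):

```html
<!-- Load the webphone JavaScript API from the copied folder -->
<script src="/webphone/webphone_api.js"></script>
<script>
  // the webphone_api object is now available to your page's scripts
</script>
```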
3. Integrate
The webphone can be used as a turn-key ready to use solution or as a Java-Script library to develop custom software.
There are multiple ways to use it:
o Use one of the supplied templates (the "softphone" or the "click to call") and customize it to your needs. You can place either of them as an iframe or div on your website
o Integrate the webphone with your webpage, website or web application
o Integrate the webphone with your server side application (if you are a server side developer)
o Create your custom solution by using the webphone as a JavaScript library (if you are a JavaScript developer)
4. Settings
The webphone has a long list of parameters which you can set to customize it to your needs.
You can set these parameters in multiple ways (in the webphone_api.js file, passed as URL parameters, or via the setparameter() API).
If you are using the webphone with a SIP server (not peer to peer) then you must set at least the “serveraddress” parameter.
The easiest way to start is to just enter the required parameters (serveraddress, username, password and any others you might wish) in the webphone_api.js file.
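As an illustrative sketch, the mandatory and most common settings could look roughly like this in webphone_api.js (the real file's exact structure may differ, so check the comments in your copy; serveraddress, username and password are the documented parameter names, the values are placeholders):

```javascript
// Illustrative parameter block in the spirit of webphone_api.js;
// the actual file layout in your package version may differ.
var parameters = {
  serveraddress: 'voip.example.com', // your SIP server IP or domain (mandatory)
  username: '1001',                  // SIP account user name (optional preset)
  password: 'secret'                 // SIP account password (optional preset)
};

// A missing serveraddress is the most common misconfiguration,
// so a quick sanity check is worthwhile:
if (!parameters.serveraddress) {
  throw new Error('serveraddress must be set when using a SIP server');
}
```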
5. Design
If you are a designer then you might create your own design or modify the existing HTML/CSS. For simple design changes you don’t need to be a designer. Colors, branding, logo and others can be specified by the settings for the supplied “softphone” and “click to call” skins.
Mizutech can also supply a ready to use pre-customized build of the softphone skin with your settings and branding for no extra cost (ask for it).
Please note that the webphone also works without any GUI.
6. Launch
Launch one of the examples (the html files in the webphone folder) or your own html (from desktop by double clicking on it or from browser by entering the URL). You might launch the “index.html” to see the included templates.
At first start the webphone might offer to enable or download a native plugin if no other suitable engine is supported and enabled by default in your browser.
It will also ask for a SIP username/password if you use the default GUI and these are not preset.
On init, the webphone will register (connect) to your VoIP server (this can be disabled if not needed).
Then you should be able to make calls to another UA (any webphone or SIP endpoint such as X-Lite or another softphone) or to PSTN numbers (mobile/landline) if outbound call service is enabled by your server or VoIP provider.
Examples and ready to use solutions (found in the webphone folder):
· index.html: just an index page with direct links to the below examples for your convenience
· minimal_example.html: shortest example capable to make a call
· basic_example.html: a basic usage example
· techdemo_example.html: a simple tech demo. You can run any tests using this HTML, or change/extend it to fit your needs
· softphone.html: a full featured, ready to use browser softphone. You can use it as is on your website as a web dialer. For example you can include it in an iframe or div on your website (change the parameters in the webphone_api.js). You can further customize it by changing the parameters or changing its design.
· softphone_launch.html: a simple launcher for the above (since the softphone.html is usually used in an iframe)
· click2call_example.html: a ready to use browser to SIP click to call solution. You can further customize it to your needs
· linkify_example.html: can be used to convert all phone number strings on a website to click to call
· custom: you can easily create any custom browser VoIP solution by using the webphone JavaScript library
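The softphone.html mentioned above is typically embedded in an iframe; a minimal sketch (the path, size and permission attribute are illustrative):

```html
<!-- Embed the ready-to-use web softphone; parameters can also be
     passed in the URL query string instead of webphone_api.js -->
<iframe src="/webphone/softphone.html" width="340" height="560"
        allow="microphone; camera" frameborder="0"></iframe>
```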
More details about customization can be found here.
You can find how it works here.
Another quick start guide can be found here.
The webphone package contains a ready to use web softphone.
Just copy the webphone folder to your webserver and change the “serveraddress” setting in the in webphone_api.js file to your SIP server IP or domain to have a fully featured softphone presented on your website. You can just simply include (refer to) the softphone.html via an iframe (this way you can even preset the webphone parameters in the iframe URL) div or on demand.
Note: you might have to enable the following MIME types in your web server if not enabled by default: .jar, .swf, .dll, .dylib, .so, .pkg, .dmg, .exe
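On Apache, one way to register these MIME types is via .htaccess, sketched below (this assumes AllowOverride permits it; other web servers such as IIS or nginx have their own MIME mapping mechanisms):

```apache
# .htaccess - register the file types the webphone may serve
AddType application/java-archive .jar
AddType application/x-shockwave-flash .swf
AddType application/octet-stream .dll .so .dylib .pkg .dmg .exe
```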
The web softphone can be configured via URL parameters or in the "webphone_api.js" file, which can be found in the root directory of the package. The most important configuration is the “serveraddress” parameter which should be set to your SIP server IP address or domain name. More details about the parameters can be found below in this documentation in the “Parameters” section.
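Since the softphone can also be configured via URL parameters, a parameterized link to softphone.html can be built like this (serveraddress is the documented mandatory setting; the base URL and values are placeholders):

```javascript
// Build a parameterized URL for the web softphone page.
function buildSoftphoneUrl(base, params) {
  var query = Object.keys(params)
    .map(function (k) {
      return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
    })
    .join('&');
  return base + '?' + query;
}

var url = buildSoftphoneUrl('https://example.com/webphone/softphone.html', {
  serveraddress: 'voip.example.com', // your SIP server IP or domain
  username: '1001'                   // optional preset account
});
// url -> 'https://example.com/webphone/softphone.html?serveraddress=voip.example.com&username=1001'
```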
We can also send you a build with all your branding and settings pre-set: contact us.
See the “User interface Skin/Design” chapter for more details.
The webphone package contains a ready to use click to call solution.
Just copy the whole webphone folder to your website, set the parameters in the webphone_api.js file and use it from the click2call_example.html.
Rewrite or modify it to your needs with your custom button image, or just use it via a simple URI or link.
You can find more details in the click to call section.
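Such a click to call button can be wired to the webphone with a few lines of JavaScript (a hedged sketch: `webphone_api.call()` is an assumed function name based on the API described in this documentation, and the element id and numbers are examples):

```javascript
// Minimal click to call handler sketch. Assumes webphone_api.js has been
// included on the page; webphone_api.call() is an assumed API name.
function clickToCall(number) {
  if (typeof webphone_api === 'undefined') {
    throw new Error('webphone_api.js is not loaded');
  }
  webphone_api.call(number); // start the outbound call
}

// Wiring it to a button in the page (hypothetical element id):
// document.getElementById('callbtn').onclick = function () {
//   clickToCall('+441234567890');
// };
```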
Developers can use the webphone as a JavaScript library to create any custom VoIP solution integrated in any webpage or web application.
Just include the "webphone_api.js" in your project or HTML and start using the webphone API.
See the development section for the details.
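For example, a minimal sketch of such an integration is shown below. The webphone_api object name comes from webphone_api.js; the setparameter-style setter and the start() function are assumptions for illustration (check the "JavaScript API" section for the authoritative names):

```javascript
// Minimal sketch: configure and start the webphone from JavaScript after
// including webphone_api.js on the page. The setter/start names are assumed.
function initPhone(api, server, user, password) {
  api.setparameter('serveraddress', server); // documented setting
  api.setparameter('username', user);        // assumed parameter name
  api.setparameter('password', password);    // assumed parameter name
  api.start(); // initialize the engines and register to the SIP server
}

// In a page you would call it with the real library object:
// initPhone(webphone_api, 'sip.example.com', 'myuser', 'mypassword');
```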
If you are a designer, you can modify all the included HTML/CSS/images or create your own design from scratch using any technology that can bind to JS, such as HTML5/CSS, Flash or others.
For simple design changes you don’t need to be a designer: colors, branding, logo and more can be set via the settings.
See the “User Interface Skin/Design” section for more details.
The WebPhone is a cross-platform SIP client running entirely in the client's browser, compatible with all major browsers and with any SIP server, IP-PBX or softswitch. The webphone is completely self-hosted without any cloud dependencies, completely owned and controlled by you (just copy the files to your web server).
· Standard SIP voice calls (in/out), video, chat, conference and others (Session Initiation Protocol)
· Maximum browser compatibility. Runs in all browsers with Java, WebRTC or native plugin support (Chrome, Firefox, IE, Edge, Safari, Opera)
· Includes several different technologies to make phone calls (engines): Java applet, WebRTC, NS (Native Service or Plugin), Flash, App, Native and server assisted conference rooms, P2P and callback
· SIP and RTP stack compatible with any standard VoIP servers and devices like Cisco, Voipswitch, Asterisk, softphones, ATA and others
· Transport protocols: UDP, TCP, HTTP, RTMP, websocket (uses UDP for media whenever possible)
· Encryption: SIPS, TLS, DTLS, SRTP, end to end encryption for webphone to webphone calls
· NAT/Firewall support: auto detect transport method (UDP/TCP/HTTP), stable SIP and RTP ports, keep-alive, rport support, proxy traversal, auto tunneling when necessary, ICE/STUN/TURN protocols and auto configuration, firewall traversal for corporate networks, VoIP over HTTP/TCP when firewalls blocks the UDP ports with full support for ICE TCP candidates
· Works over the internet and also on local LANs (perfectly fine to use with your own internal company PBX)
· RFC’s: 2543, 3261, 7118, 2976, 3892, 2778, 2779, 3428, 3265, 3515, 3311, 3911, 3581, 3842, 1889, 2327, 3550, 3960, 4028, 3824, 3966, 2663, 6544, 5245 and others
· Supported methods: REGISTER, INVITE, re-INVITE, ACK, PRACK, BYE, CANCEL, UPDATE, MESSAGE, INFO, OPTIONS, SUBSCRIBE, NOTIFY, REFER
· Audio codecs: PCMU, PCMA, G.729, GSM, iLBC, SPEEX, OPUS (including wide-band HD audio)
· Video codecs: H.263, H.264 and VP8 (VP8 for WebRTC only)
· SIP compatible codec auto negotiation and adjustment (for example G.729 - wideband or WebRTC G.711 to G.729 transcoding if needed)
· Call divert: rewrite, redial, mute, hold, transfer, forward, conference
· Call park and pickup, barge-in (with NS)
· Voice call recording
· IM/Chat (RFC 3428), SMS, file transfer, DTMF, voicemail (MWI)
· Multi-line support
· Contact management: flags, synchronization, favorites, block, presence (DND/online/offline/others)
· Balance display, call timer, inbound/outbound calls, caller-id display
· High level JavaScript API: web developers can easily build any custom VoIP functionality using the webphone as a JS library
· Stable API: new releases are always backward compatible so you can upgrade with no changes in your code
· Integration with any website or application including simple static pages, apps with client side code only (like a simple static page) or any server side stack such as PHP, .NET, java servlet, J2EE, Node.js and others (sign-up, CRM, callcenter, payments and others)
· Phone API accessible from any JavaScript framework (such as AngularJS, React, jQuery and others) or from plain/vanilla JS or not use the JS API at all
· Branding and customization: customizable user interface with your own brand, skins and languages (with ready to use, modifiable themes)
· Flexibility: all parameters/behavior can be changed/controlled by URL parameters, preconfigured parameters, from javascript or from server side
Server side:
· Any web hosting for the webphone files (any webserver is fine: IIS, nginx, Apache, NodeJS, Java, others; any OS: Windows, Linux, others).
Chrome and Opera require a secure connection (HTTPS) for the WebRTC engine to work (otherwise this engine will be skipped automatically). We can also host the webphone for free on secure HTTP if you wish. Note that the webphone itself doesn’t require any framework; just host it as static files (no PHP, .NET, JEE, NodeJS or similar server side scripting support is required from your webserver)
· At least one SIP account at any VoIP service provider or your own SIP server or IP-PBX (such as Asterisk, Voipswitch, 3CX, FreePBX, Trixbox, Elastix, SER, Cisco and others)
· Optional: WebRTC capable SIP server or SIP to WebRTC gateway (the free Mizutech WebRTC to SIP service is enabled by default). The webphone can also be used and works fine without WebRTC; however, if you prefer this technology, free software is available, Mizutech also offers a WebRTC to SIP gateway (free with the Advanced license) and a free service tier, and common VoIP servers also have built-in WebRTC support nowadays
Client side:
· Any browser supporting WebRTC OR Java OR native plugins with JavaScript enabled (most browsers are supported)
· Audio device: headset or microphone/speakers
Compatibility:
· OS: Windows (XP, Vista, 7, 8, 10), Linux, Mac OS X, Android, iOS (app only), BlackBerry, Chromium OS and others
· Browsers: Firefox, Chrome, IE (6+), Edge, Safari, Opera and others
· Different OS/browser combinations might have different compatibility levels depending on the usable engines. For example, the rarely used Flash engine doesn’t implement all the functionality of the WebRTC/Java/NS engines (these differences are handled automatically by the webphone API)
If you don't have an IP-PBX or VoIP account yet, you can use or test with our SIP VoIP service.
· Server address: voip.mizu-voip.com
· Account: create free VoIP account from here or use the following username/passwords: webphonetest1/webphonetest1, webphonetest2/webphonetest2 (other people might also use these public accounts so calls might be misrouted)
The goal of this project is to implement a VoIP client compatible with all SIP servers, running in all browsers under all operating systems with the same user interface and API. At this moment no single technology exists to implement a VoIP engine fulfilling these requirements, due to browser/OS fragmentation. Different technologies also have benefits over one another. We have achieved this goal by implementing different “VoIP engines” targeting each OS/browser segment. This also has the advantage of not just barely running a VoIP call, but offering the best possible quality in every environment (client OS/browser). All these engines are covered by a single, easy to use unified API accessible from JavaScript. To ease usage, we also created a few different user interfaces in HTML/JS/CSS addressing the most common needs, such as a VoIP dialer and a click to call user interface.
More details about how it works can be found here.
Each engine has its advantages and disadvantages. The webphone will automatically choose the “best” engine based on your preferences, OS/browser/server side support and end user preferences (this can be overridden by the settings if you have special requirements). See also: VoIP availability in browsers.
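Conceptually, the automatic selection walks an ordered preference list and picks the first engine the current environment supports. The sketch below is illustrative only; the real logic is internal to the webphone and also weighs server side support and user preferences (the priority order and capability flags shown are examples):

```javascript
// Illustrative engine picker: choose the first preferred engine allowed by
// the detected capabilities. Engine names follow this document; the order
// and capability flags are examples, not the webphone's exact logic.
function pickEngine(priority, capabilities) {
  for (var i = 0; i < priority.length; i++) {
    if (capabilities[priority[i]]) {
      return priority[i];
    }
  }
  return 'app'; // nothing usable: offer a native app install instead
}

// Example: a browser with WebRTC support but no Java or NS plugin:
var engine = pickEngine(
  ['ns', 'java', 'webrtc', 'flash'],
  { webrtc: true, java: false, ns: false, flash: false }
);
// engine: "webrtc"
```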
Our native VoIP engine, implemented as a native service or browser plugin. The engine works like a usual SIP client, connecting directly from the client PC to your SIP server, but it is fully controlled from the web (the client browser communicates in the background with the native engine installed on the client PC/mobile device, thus using this natively installed SIP/media stack for VoIP).
Pros:
· All features are supported, with native performance
Cons:
· Requires install (one click installer)
A new technology for media streaming in modern browsers supporting common VoIP features. WebRTC is a built-in module in modern browsers which can be used to implement VoIP. Signaling goes via websocket (unencrypted or TLS) and media via encrypted UDP (DTLS-SRTP). These are then converted to normal SIP/RTP by the VoIP server or by a gateway.
Pros:
· Convenient usage in browsers with WebRTC support because it has no dependency on plugins
Cons:
· It is a black-box in the browser with a restrictive API
· Lack of popular VoIP codecs such as G.729 (this can be solved by CPU intensive server side transcoding)
· A WebRTC to SIP gateway may be required if your VoIP server doesn’t have built-in support for WebRTC (free software exists for this, and we also provide a free service tier, included by default)
A browser plugin technology developed by Adobe with its proprietary streaming protocol called RTMP.
Pros:
· In some old/special browsers Flash is the only available option to implement VoIP
Cons:
· Requires server side Flash to SIP gateway to convert between flash RTMP and SIP/RTP (we provide free service tier)
· Basic feature set
Based on our powerful JVoIP SDK, compatible with all JRE enabled browsers. Using the Java Applet technology you can make SIP calls from browsers the very same way as a native dialer, connecting directly from client browser to SIP server without any intermediary service (SIP over UDP/TCP and RTP over UDP).
Pros:
· All SIP/media features are supported, all codecs including G.729, wideband and custom extra modules such as call recording
· Works exactly as a native softphone or ip phone connecting directly from the user browser to your SIP capable VoIP server or PBX (but with your user interface)
Cons:
· Java is not supported by some browsers, most notably on mobile devices and in Chrome, which has dropped the NPAPI support required for the Java plugin (in this case the webphone will use the WebRTC, Flash or Native engine instead of Java)
· Some browsers may ask for user permission to activate the Java plugin
Some platforms don’t have any suitable technology to enable VoIP in browsers (a minor percentage, most notably iOS/Safari). In these cases the webphone can offer the user to install a native softphone application. The apps are capable of fully auto-provisioning themselves based on the settings you provide in your web application, so once the user installs the app from the app store, on first launch it will automatically configure itself with most of your settings/branding/customization/user interface as defined for the webphone.
Pros:
· Covering platforms with lack of VoIP support in browsers (most notable: iOS Safari)
Cons:
· No API support. Not exactly the same HTML user interface (although it is highly customized based on the settings you provided for the webphone)
These are just “virtual” engines with no real client VoIP stack.
· P2P means a server initiated phone to phone call, triggered by an API call into your VoIP server. The server will first call you (on your regular mobile/landline phone) and once you pick up it will dial the other number and you can start talking (just set the “p2p” setting to point to your VoIP server API for this to work)
· Callback is a method to make cheap international calls by triggering a callback from the VoIP server by dialing its callback access number. It might be used only as a secondary engine if you set a callback access number (just set the “callback” setting to point to your VoIP server API for this to work)
These are treated as secondary (fallback) engines and used only if no other engines are available, just to be able to cover uncommon/ancient devices lacking support for all the above engines, which is very rare. However, these might fit into your business offer, in which case you can increase their priority so they are used more frequently.
This means native calls from mobile using your mobile carrier network. This is a secondary “engine” to fall back to if no VoIP capabilities were found on the target platform or there is no network connection. In these circumstances the webphone can simply trigger a phone call from the user's smartphone if this option is enabled in the settings. Rarely used, if ever.
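Triggering such a native dial from a webpage is normally done with a tel: URI; a small hedged sketch (the number is an example, and the webphone's actual implementation may differ):

```javascript
// Build a tel: URI for the device's native dialer, used as a last resort
// when no VoIP engine is available. Keeps '+', digits, '*' and '#' only.
function nativeDialUri(number) {
  return 'tel:' + String(number).replace(/[^0-9+*#]/g, '');
}

// In a browser you would then navigate to it:
// window.location.href = nativeDialUri('+36 1 123 4567');
```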
The most important engines are: Java, WebRTC, NS and Flash. The other engines provide support for exotic and old browsers, maximizing coverage across all OS/browser combinations and ensuring that the end user has call capabilities regardless of the circumstances.
All the above engines are covered by an easy to use unified JavaScript API, hiding all the differences between the engines, as described below in the “JavaScript API” section.
The webphone can be used with or without a user interface.
The user interface can be built using any technology with JS binding. The most convenient is HTML/CSS, but you can also use any others such as Flash.
The webphone package comes with a few prebuilt, feature rich, responsive user interfaces covering the most common usage, such as a full featured softphone user interface and a click to call implementation. You are free to use these as is, modify them to your needs or create your own from scratch. For more details check the “User interface Skin/Design” section.
Major changes by release are listed here:
-internal beta version with basic skin and basic call functionality
Version 0.6 (July 3, 2015)
-engines: WebRTC beta and Java Applet v.1.0
-more SIP settings
-early beta version with basic SIP call functionality
-call-divert functionalities (voicemail, transfer, others)
-conference
-NS (Native Service or Plugin)
-examples and documentation
-better skinning
-better OS/browser handling
-automatic engine selection
-WebRTC stable incoming and outgoing calls
-upgrade to latest Java Applet and WebRTC engine
-more JavaScript SIP API
-new: flash engine
-WebRTC improvements
-stable API, modules and file structure
-improved auto engine selection
-chat
-app engine
-secondary engines (p2p, native dial, callback)
-custom builds based on customer settings
-callback API for simplified API (use simple call back instead of notification string parsing)
-server API for the webphone state machine (so you can easily catch all important events from server code)
-WebRTC engine upgrade to latest version
-presence (not fully standard compliant yet but working)
-added file transfer
-new missed call/chat notifications
-http vs https bug fixes
-NS engine availability from https
-reset setting parameter and API
-last call detailed statistics
-called number normalization
-one-way audio fix on WebRTC
-WebRTC fix for Android
-many other improvements and bug fixes
-new: audio device selection
-new: favorite or block contact
-new: setsipheader/getsipheader
-improved: capability to call special URLs on events (server API integration)
-improved: number rewrite rules
-improved: feedback for file transfer
-fix: ns engine unregister on webpage close
-fix: increase cseq for re-invite
-other improvements and bug fixes
-new: WebRTC to SIP gateway (free as both software and service for all our web sip library customers)
-new: TURN (WebRTC works now even if all UDP is blocked and only port TCP 80 is allowed)
-new: auto codec convert when necessary (for example to G.729 from WebRTC)
-new: App engines for iOS and Android
-new: WebRTC on Android
-new: HTTP to HTTPS gateway (used automatically if hosting website is not secure which is required by Chrome for WebRTC)
-new: WebRTC caller-id
-improved: WebRTC NAT handling
-improved: STUN
-improved: end to end encryption
-improved: softphone skin
-fix: java freezing improvements
-fix: WebRTC caller-id
-new: call recording (voicerecupload)
-new: 8 new call-divert related settings and API’s
-new: callcenter integration
-improved: engine selection
-improved: VoIP over TCP using TURN only when necessary
-improved: usage on local LANs
-improved: WebRTC (various fixes)
-fix: auto engine select related bugs, unnecessary java popups
-fix: NS engine discover issues
-fix: mute/unmute, hold/unhold
-fix: CTRL+C, CTRL+V in the softphone skin
-more than 20 other bug fixes and small improvements especially engine detect/choose related
-new: video
-new: conference rooms (server assisted)
-new: audio device list, get, set functions
-new: web call-me
-new: peer to peer media auto discover
-new: call forward
-new: auto WebRTC server discovery (for example it can detect automatically if webrtc is enabled in Asterisk and other servers)
-new: softphone skin now can be inserted also in a DIV (previously it was working only in iframe)
-new: callback (you can specify a callbacknumber parameter if your server has a callback access number)
-new: sip outbound proxy setting
-new: call transfer options
-new: CDR records after calls (can be easily posted to server API)
-new: group chat
-improved: webrtc engine
-improved: click to call
-improved: presence
-improved: voicemail
-improved: conference
-improved: voice recording
-improved: android native dialer auto-configuration
-improved: themes (color theme/skinning)
-improved: TURN and STUN handling and auto-discovery
-improved: user interface integration (div, popup, flying, others)
-improved: chat (reliability, smiles, file-transfer, groups)
-improved: NS engine versioning and auto upgrade
-fix: voip engine auto select related issues, settings save/restore
-fix: init delay
-fix: ns engine localhost certificate, https/wss issues
-fix: more than 44 bug-fixes mostly based on customer feedback and additional tests
-the old documentation for this version can be found here
-new: MAC OS WebRTC plugin
-new: multi-line (manage multiple simultaneous calls)
-new: ICE TCP candidate (RFC 6544)
-new: UPNP NAT for NS and Java engines (better NAT handling behind UPNP capable routers)
-new: redial or re-INVITE on fast call failure or on no media with changed stun and codec
-new: auto call forward on no answer (“callforwardonnoanswer” setting)
-new: more settings such as language, disablesamecall, checkvolumelevel, inbounddtmf, outbounddtmf, usecommdevice, etc
-new: stop() and getworkdir() api
-new: auto NS service upgrade in background (only if NS is actually used and only to known good new versions)
-improved: DTMF between SIP and WebRTC (both INFO and RFC 2833 are supported)
-improved: STUN and TURN (earlier public IP discovery, not used on local LANs)
-improved: more native audio features on Windows
-improved: http/https view/api/download/upload/autoprovisioning (autodetect, https-http proxy and ssl bypass options)
-improved: fast init with no more delays when coming from settings and engine init speedups
-improved: fast cleanup and exit for the java engine
-improved: conference for WebRTC
-improved: NS engine auto upgrade
-improved: iOS app engine via SIP softphone
-improved: various transfer related improvements including WebRTC to SIP call transfer
-improved: usage from behind NAT or firewalls (now capable to use both TURN and TCP candidates if UDP is blocked)
-improved: various other WebRTC-SIP related improvements
-improved: documentation
-fix: embed in webpage related issues and webphonebasedir
-fix: onRegistered to catch all SIP register events
-fix: settings save/load, keep last good VoIP method
-fix: chat between SIP and WebRTC
-fix: file transfer related bugs
-fix: offer the Flash VoIP engine only when really necessary (no other options)
-fix: DNS SRV record timeout handling
-fix: fixed problem with AEC for wideband speex and opus with NS and Java
-fix: audio device list on Windows
-fix: ptime settings for G.729
-fix: reject double outbound calls to same destination
-fix: various GUI related bugs on the softphone skin
-fix: various auto engine detect, prioritization and usage related bugs
-fix: more than 110 other minor fixes and improvements
-new: no need to explicitly set the webphonebasedir anymore as it is now always guessed correctly
-new: call forward and transfer now works between SIP and WebRTC endpoints
-new: NS self-upgrade in background capability (with no user interaction required)
-new: checkmicrophone setting
-new: global instance (ability to use the same webphone instance on multiple webpages opened in different tabs/windows in the client browser)
-new: call recover with redial on no or bad response
-new: MAC OS webrtc plugin as pkg (webrtcplugin.pkg)
-new: capability to initiate calls even if not registered, registrar disabled or register failed
-new: more call details in onCdr such as displayname and call disconnect reason
-improved: better audio/camera recording permission handling
-improved: TURN and TCP candidates (now it works in all circumstances with WebRTC)
-improved: WebRTC-SIP protocol conversion
-improved: WebRTC-SIP codec conversion (for example Opus to G.729 and inverse)
-improved: Android app engine
-improved: better aec and denoise
-improved: update to latest OpenSSL for the WebRTC DTLS and websocket TLS
-improved: settings management (now the server side settings from webphone_api.js are applied immediately and exactly as-is)
-improved: more flexible parameter handling (handle when pass number as string or bool as number and others)
-improved: auto hide disconnected call page after some time
-improved: click-to-call related improvements
-improved: Asterisk WebRTC auto discover
-improved: registerless usage (register=0)
-improved: permissible demo license limitations
-improved: WebRTC trickle ICE
-improved: chat reliability
-improved: number rewrite rules
-improved: call divert now propagated also to server side (will safely handle servers with no such support)
-improved: usage without sipserver (call to sip uri should work; serverless peer to peer functionality)
-fix: click-to-call related bugs
-fix: autostart: 0, start and register only when clicked
-fix: mute() should mute also the speaker if called with parameter 0 or 1; also webrtc mute is fixed now
-fix: on hold fail, call is disconnected, but the disconnect is not discovered by the GUI
-fix: send dtmf while in webrtc call doesn't display any feedback
-fix: keypress events
-fix: loading cached old settings problem
-fix: ERROR, catch on common: ParamAsBool ReferenceError: isNumber is not defined (isNumber is not defined)
-fix: ERROR, catch on notifications: ProcessNotifications, not: STATUS,1,Finished ReferenceError: GetParameter is not defined
-fix: WRTC, ERROR, InvalidAccessError: RTCPeerConnection constructor passed invalid RTCConfiguration
-fix: flash engine offered only if there is no other better choice
-fix: java no applethandle after going to settings and back (maybe go to settings and select java engine even if selected)
-fix: settings not always read correctly if webphone is used as SDK and this causes discrepancies in engine selection
-fix: detailed loglevel fixed
-fix: call start while webphone is initializing
-fix: video chat one side only now fixed
-fix: call recording with WebRTC engine
-new: getsipmessage API
-new: allowcallredirect parameter
-new: playdtmfsound parameter
-new: onDisplay callback
-new: earlymedia parameter
-new: server/user-agent based licensing for the gold version
-new: option to disable all toasts/popups
-new: muteholdalllines parameter
-improvement: multi-line; a lot of improvements regarding line management
-improved: setline() now accepts also peer phonenumber or sip call-id
-improved: conference API add parameter and other conference related
-improved: call transfer and forward between SIP and WebRTC (and inverse)
-improved: more robust un-register
-improved: cookie and indexDB localforage
-improved: call setup without recording device (no microphone)
-improved: NS engine one click installer improvements and auto-configuration
-improved: Safari compatibility
-improved: handle Firefox 52+ no Java/NPAPI support
-improved: playsound API
-improved: get the call disconnect reason on hangup (from SIP disc. code but also from Reason and Warning headers)
-fix: WebRTC-SIP converter blind accept any username/password in registrations (now properly forward as SIP REGISTER with digest authentication)
-fix: ice timeout
-fix: IsRegistered
-fix: don't touch the NS engine if not needed
-fix: globalline defaults to -1 if not multiline
-fix: webphone_api.voicerecord
-fix: getsipheader mixed up bug
-fix: call timer display
-fix: accidental call disconnects
-fix: IE 7 and IE 8 compatibility
-fix: password sometime encoded incorrectly
-fix: username vs sipusername
-fix: SendDtmf ReferenceError: message is not defined
-fix: settings management
-fix: garbage characters in balance display (credit/currency)
-fix: NS engine XP and vista compatibility
-fix: NS engine compatibility with x32 (32 bit) OS versions
-fix: NS engine compatibility with non-english Windows versions
-fix: autologin not working if server/user/password is set
-fix: if username or password is preset then don't display user/pwd input for the softphone skin
-numerous other improvements and minor bug fixes
The webphone is sold with an unlimited client license (Advanced and Gold) or a restricted number of licenses (Basic and Standard). You can use it with any VoIP server(s) on your own and you can deploy it on any webpage(s) which belongs to you or your company. Your VoIP server(s) address (IP or domain name) and optionally your website(s) address will be hardcoded into the software to protect the licensing. You can find the licensing possibilities on the pricing page. After successful tests please ask for your final version at [email protected]. Mizutech will deliver your webphone build within one workday after your payment.
Release versions don’t have any limitations (mentioned below in the “Demo version” section) and are customized for your domain(s). All “mizu” and “mizutech” words and links are removed so you can brand it to your needs (with your company name, brand name or domain name), customize and skin it (we also provide a few skins which can be freely used or modified).
Your final build must be used only for your company's needs (including your direct SIP endusers or service customers).
Title, ownership rights, and intellectual property rights in the Software shall remain with MizuTech.
The agreement and the license granted hereunder will terminate automatically if you fail to comply with the limitations described herein. Upon termination, you must destroy all copies of the Software. The software is provided "as is" without any warranty of any kind. You must accept the software SLA before using the webphone.
You may:
· Use the webphone on any number of computers, depending on your license
· Give access to the webphone to your customers or use it within your company
· Offer your VoIP services via the webphone
· Integrate the webphone to your website or web application
· Use the webphone on multiple webpages and with multiple VoIP servers (after agreement with Mizutech). All the VoIP servers must be owned by you or your company; otherwise please contact our support to check the possibilities
You may not:
· Resell the webphone as is
· Sell “webphone” services to third party VoIP providers or other companies for use with third-party VoIP servers (except if coupled with your own VoIP servers)
· Resell the webphone or any derivative work which might compete with the Mizutech “webphone” software offer
· Reverse engineer, decompile or disassemble or modify the software in any way except modifying the settings and the HTML/CSS skins or the included JavaScript examples
Note: It is perfectly fine to sell or promote it as a “webphone service” if it is tied to your own SIP servers. But if you sell it as webphone software which can be used with any server, then you are actually selling the same thing as Mizutech, which is not allowed by the license.
There are the following legal ways to use the webphone:
-you have your own SIP server(s) and the webphone will be used with these server(s) (your customers can integrate the webphone into any application or website but they will use the webphone via your VoIP server)
-you are building some application or website (such as a CRM), so the webphone will be tightly integrated with your solution
-both of the above (webphone used via your own SIP server from within your own website or application)
Let us know if you wish to use the webphone in some other way, not covered by this license.
Demo version
The downloadable demo version can be used to try and test before purchase. The demo version has all features enabled but with some restrictions to prevent commercial usage. The limitations are the following:
· maximum 10 simultaneous webphone instances at the same time
· will expire after several months of usage (usually 2 or 3 months)
· maximum ~100 sec call duration restriction
· maximum 10 calls / session limitation. (After ~10 calls you will have to restart your browser)
· will work for a maximum of ~20 minutes; after that you have to restart it or restart the browser
· can be blocked from Mizutech license service
In short: the demo version can be used for all kinds of tests or development, but it can’t be used in production.
Note: for the first few calls and in some circumstances the limitations might be weaker than described above, with fewer restrictions.
On request we can also provide test builds with only trial period limitation (will expire after ~3 weeks of usage) and without the above demo limitations.
See the pricing and order your licensed copy from here.
The webphone is a flexible VoIP web client which can be used for various purposes such as a dialer on your website, a click to call button for contacts or integrated with your web application (contact center, CRM, social media or any other application which requires VoIP calls).
The webphone can be customized by its numerous settings, webphone API’s and by changing its HTML/CSS.
Deploy:
The webphone can be deployed as a static page (just copy the webphone files to your website), as a dynamic page (with dynamically generated settings) or used as a JavaScript VoIP library by web developers. You can embed the webphone in your website in a div or an iframe, load it on demand, use it as a module or as a separate page. The webphone settings can also be set by URL parameters, so you can launch it from a link with all the required settings specified.
VoIP platform:
All you need to use the webphone is a SIP account at any VoIP service provider or your own softswitch/IP-PBX.
Free SIP accounts can be obtained from numerous VoIP service providers or you can use our service. (Note that free accounts are free only for VoIP to VoIP calls. For outbound PSTN/mobile calls you will need to top-up your account.)
If you wish to host it yourself, you can use any SIP server software, for example FreePBX for Linux or the advanced/free VoIP server for Windows by Mizutech. We can also provide our WebRTC to SIP gateway (free with the Advanced or Gold license) if your softswitch doesn’t have support for WebRTC and you need a self-hosted solution.
Technical settings:
The most important parameter that you will need to set is the “serveraddress”, which has to be set to the domain or IP:port of your SIP server.
If you wish, you can also change other SIP account, call divert or VoIP engine related settings to your needs.
Integration:
You can integrate the webphone with your website or web application:
-using your own web server API
-and/or using the webphone client side JavaScript API to insert any business logic or AJAX call to your server API
The webphone library doesn’t depend on any framework (as it is a pure client side library) but you can integrate it with any server side framework if you wish (PHP, .NET, NodeJS, J2EE or any server side scripting language) or work with it only from client side (from your JavaScript).
On the client side you can use the webphone API from any JavaScript framework (such as AngularJS, React, jQuery and others) or from plain/vanilla JS or not use the JS API at all.
Design
You can completely change any of the included skins (click to call button, softphone), or change the softphone colors or create your user interface from scratch with your favorite tool and call the webphone API from there.
Custom application:
For deep changes or to create your unique VoIP client or custom application you will need to use the JavaScript API.
See the development section for more details.
Branding:
Since the webphone is usually used within your website context, your website is already your brand and no additional branding is required inside the webphone application itself. However the softphone skin (if you are using this turn-key GUI) has its own branding options which can be set to match your requirements.
Additionally you can change the webphone HTML/CSS design if more modifications are required.
On request, we can send your webphone build already preconfigured with your preferences.
For this just answer the points from the VoIP client customization page (as many as possible) and send them to us by email. We will then generate and send your webphone build within one work day. All the preconfigured parameters can be further changed by you via the webphone settings.
Of course, this is relevant only if you are using a skin shipped with the webphone, such as the softphone.html. Otherwise you can create your custom solution using the webphone library with your unique user interface or integrate into your existing website.
You can use the webphone with or without a user interface.
The webphone is shipped with a few ready to use open source user interfaces such as a softphone and click to call skins. Both of these can be fully customized, or you can modify their source to match your needs. You can also create any custom user interface using any technique such as HTML/CSS and bind it to the webphone JavaScript API.
The default user interface for the softphone and other included apps can be easily changed by modifying parameters or changing the HTML/CSS. For simple design changes you don’t need to be a designer: colors, branding, logo and others can be set by the settings parameters.
You can also easily create your own app user interface from scratch with any tool (HTML/CSS or others) and call the webphone JavaScript API from your code.
In short, there are two ways to achieve your own (any kind of) custom user interface:
A. Use one of the skins provided by the webphone
Here you also have two possibilities:
o Quick customization by changing the webphone built-in user interface related settings (you can change the colors, behaviors and others)
o If you are a web developer, have a look at the HTML and JavaScript source code and modify it to your needs (we provide all the source code for these; it can also be found in the downloadable demo)
B. Create your own web VoIP user interface and use the webphone as a JavaScript library from there.
The webphone has an easy to use API which can be integrated with any user interface. For example from your “Call” button, just call the webphone_api.call(number) function. Have a look at the “minimal_example.html”, “basic_example.html” or “techdemo_example.html”. (You can also use the provided samples as a template to start with and modify/extend them to your needs.)
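For instance, a Call button could be wired to the API like this. This is only a minimal sketch: the element ids "callbtn" and "numberinput" are hypothetical (use your own markup) and the cleanNumber() helper is just illustrative.

```javascript
// Assumed markup: <input id="numberinput"> and <button id="callbtn">

// Small helper to strip separators users often type into phone numbers
function cleanNumber(raw) {
  return raw.replace(/[\s\-().]/g, '');
}

function wireCallButton() {
  document.getElementById('callbtn').addEventListener('click', function () {
    var number = cleanNumber(document.getElementById('numberinput').value);
    webphone_api.call(number); // start the call via the webphone JavaScript API
  });
}
```

Call wireCallButton() once the page (and the webphone) has loaded, for example from the webphone_api.onLoaded() callback.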
Just use the “colortheme” parameter to make quick and wide changes.
Then have a look at the “User interface related” parameters (described in the “Parameters” section) and change them to your needs (set logo, branding, translation and others).
We can also send you a web softphone with your preferred skin. For this just set your customization on the online designer form and send us the parameters.
We can also send you a fully customized and branded web softphone with your preferences. For this just send us the customization details.
Web developers/designers can easily modify the existing skins or create their own.
For the softphone application all the HTML source code can be found in "softphone.html" file as a single-page application model.
There are a few possibilities to change the skins:
· If you need only minor/color changes, then just change the color theme
· You might also change the jQuery theme:
The jQuery mobile Theme Roller generated style sheet can be found in this file: "css\themes\wphone_1.0.css".
Current jQuery mobile version is 1.4.2. Using the Theme roller, you can create new styles:
The style sheet which overrides the "generated" one (in which all the customizations are defined) is "css/mainlayout.css".
· You can also manually edit the HTML and CSS files with your favorite editor to change them to your needs
· Or just create your design with your favorite tools and call the web sip phone API from there
Note: If you are using the webphone as a javascript library then you can customize the “choose engine” popup in "css\pmodal.css".
If you have different needs or don’t like the default skins, just create your own from scratch and call the webphone JavaScript API from your code. Using the API you can easily add VoIP call capabilities to an existing website or project with a few function calls, as described in the “JavaScript API” section below.
You can use the webphone library to implement your custom click-to-call solution or use one of the skin templates for click to call.
There are multiple ways to achieve click to call functionality with the sip webphone:
Use the Click2Call template
The webphone package contains a ready to use click to call solution: Just copy the whole webphone folder to your website, set the parameters in the webphone_api.js file and use the click2call_example.html.
You can completely customize the click2call example for your needs (change any settings, change the html/css/javascript, use your custom button image).
Launch from URL
You can pass any setting as URL parameter and the webphone (and the included templates) can be easily parametrized to act as a click to call solution:
A working example with the click to call skin:
This will launch the click to call page and will initiate the call automatically. You can find more examples here.
You can also use any other skins for click to call. For example here is with the softphone skin:
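To illustrate the URL parameter pattern (all keys prefixed with “wp_” as described in the Parameters section), a launch link could look like the following; the domain and account values are placeholders to replace with your own:

```
https://www.example.com/webphone/click2call_example.html?wp_serveraddress=voip.mydomain.com&wp_username=user1&wp_password=pass1&wp_callto=1234&wp_autoaction=1
```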
Custom click to call solution
You can easily create your custom click to call solution from scratch by using the webphone as a library/SDK.
A simple click to call example can be found in the webphone package: "click2call_example.html"
The following parameters must be configured in order to make a call: serveraddress, username, password, callto.
A Step by step guide to add click to call button to your web page:
Put the below code in your web page's <head> section:
<link rel="stylesheet" href="css/click2call/click2call.css" />
<script src="webphone_api.js"></script>
<script src="js/click2call/click2call.js"></script>
<script>
/**Configuration parameters*/
webphone_api.parameters['serveraddress'] = '';
webphone_api.parameters['username'] = '';
webphone_api.parameters['password'] = '';
webphone_api.parameters['md5'] = '';
webphone_api.parameters['realm'] = '';
webphone_api.parameters['callto'] = '';
webphone_api.parameters['autoaction'] = 1;
</script>
Copy this html element into your page, where you want the click to call button to show up:
<div id="c2k_container_0" title=""><a href="tel://CALLTO" id="c2k_alternative_url">CALLTO</a></div>
Customize the button
Customization options can be found in click2call.js file located in js/click2call/ folder.
The following customizations are available:
- button color (for call and hang up states)
- text displayed on the button (for call and hang up states)
- button width, height and corner radius
- chat window default state: open or collapsed
The styling can be further customized from click2call.css located in css/click2call/ folder.
Use as a chat window
The click to call button can also be used as a chat window. This is controlled by the "autoaction" parameter: 1=call, 2=chat.
The chat window can also be opened by accessing the menu and selecting the Chat item.
The menu can be accessed by right clicking or by long clicking on the button.
Floating button
The click to call can also be used as a floating button on your page. The floating related configurations can be found in click2call.js file located in js/click2call/ folder.
To enable floating, set the "float_button" config to true and specify two direction coordinates for the floating. For example, to have a floating button in the top right corner of your page, 100 pixels from the top and 10 pixels from the right:
var float_button = true;
var float_distance_from_top = 100;
var float_distance_from_right = 10;
Floating webphone skin
To float the webphone skin over your web page, just set the following CSS attributes for the container HTML element of the webphone (which can be a DIV or an iframe):
// this aligns the webphone to the bottom-right corner of your page
z-index: 1000; position: fixed; bottom: 0px; right: 0px;
If you wanted for instance to set it in the top-left corner, then the CSS attributes would be:
z-index: 1000; position: fixed; top: 0px; left: 0px;
Multiple instances
To add more than one click to call button to a page, include the script part in the <head> section once, and copy the container <div> increasing the id index number for every instance.
ex:
<div id="c2k_container_0" title="55555"></div>
<div id="c2k_container_1" title="66666"></div>
<div id="c2k_container_2" title="77777"></div>
<div id="c2k_container_3" title="88888"></div>
These id indexes must be unique and increasing.
The callto parameter can be set as the title attribute of the <div> element.
Load on demand
You can also load the sip web phone on demand as explained here.
Auto-call
If you wish to make a call automatically, then just initialize the webphone as described above and
-either set also the “autoaction” parameter to 1
-or make the call with the webphone_api.call(number) API from the onLoaded() or from the onRegistered() callback.
Note:
o Even if you initiate a call from onLoaded and the webphone is not yet registered (and it needs to register), it will handle the registration first and then initiate the call automatically.
o If your IP-PBX doesn’t require registrations, then just set the “register” setting to 0.
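The flow above can be sketched as follows. This is a non-authoritative sketch: the api argument stands for the webphone_api object, and the server/user/pass/number values are placeholders.

```javascript
// Initialize the webphone and dial automatically once the SIP registration succeeds
function setupAutoCall(api, server, user, pass, number) {
  api.onLoaded(function () {
    // set the account parameters at runtime (they could also be preset in webphone_api.js)
    api.setparameter('serveraddress', server);
    api.setparameter('username', user);
    api.setparameter('password', pass);
    api.start();
  });
  api.onRegistered(function () {
    api.call(number); // make the call only after we are registered
  });
}
```

For example: setupAutoCall(webphone_api, 'sip.mydomain.com', 'user1', 'pass1', '1234'); If your IP-PBX doesn’t require registration, you can instead set the “register” parameter to 0 and call from onLoaded().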
This section is mostly for server side developers. If you have more JavaScript skills then we recommend just using the JavaScript API directly as described in the development section.
First of all it is important to mention that the webphone doesn’t have any server side framework dependencies. You can host it on any webserver without any framework (PHP, .NET, Node.js or others) installed.
The webphone is running entirely on the client side (in the user browser as a browser sip plugin) and can be easily manipulated via its JavaScript SIP API, however you can easily integrate the webphone with any server side application or script be it .NET, PHP, Node.Js, J2EE or any other language or framework even if you don’t have JavaScript experience. Just create a HTTP API to catch events such as login/call start/disconnect and drive your app logic accordingly.
The most basic things you can do is to dynamically generate the webphone parameters per session depending on your needs. For example if the user is already logged-in, then you can pass its SIP username/password for the webphone (possibly encoded).
For this, just generate the webphone_api.js dynamically or pass the parameters in the URI.
For a tighter integration you will just have to call into your server from the webphone.
This can be done with simple XMLHttp/AJAX or websocket requests against your server HTTP API, then you process the events in your server code according to your needs. The requests can be generated using the built-in HTTP API events or you can just post them yourself from your custom JavaScript code using websocket or AJAX requests. Usually these requests will be made from callback events which are triggered on webphone state machine changes, but you are free to place AJAX requests anywhere in your code, such as on a button click.
Example:
For example if you need to save each call details (date, caller, called, duration, others) into a server side database, then just define a “oncalldetails” or similarly named API in your server side application which can be called via simple HTTP request in one of the following ways:
1. Using the built-in HTTP API integration capabilities:
Just set the scurl_onincalldisconnected setting to your HTTP API entry (wherever your API can be called). This method is very convenient if you are a server side developer with no JavaScript knowledge, as you don’t need to touch any JavaScript to implement this.
2. Using custom AJAX requests:
Use the onCdr() API to setup a callback which will be triggered after each call.
Send an AJAX request (XMLHttpRequest or jQuery get or post) to your application server with the CDR details.
(You can pass the details in HTTP GET URL parameters or in HTTP POST body in your preferred format such as clear text, json, xml or other).
Then you will receive request to this API entry on your app server and you can process them accordingly (load the URL parameters and store in your database).
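A sketch of this second approach is below. The /api/oncalldetails endpoint is hypothetical, and the exact fields delivered to the onCdr() callback may differ in your webphone build, so check the API reference before relying on this signature.

```javascript
// Turn the CDR fields into a URL query string
function buildCdrQuery(cdr) {
  return Object.keys(cdr).map(function (k) {
    return encodeURIComponent(k) + '=' + encodeURIComponent(cdr[k]);
  }).join('&');
}

// Hook the webphone CDR callback and forward the details to your server API
function hookCdrUpload(api) {
  api.onCdr(function (caller, called, duration) {
    var xhr = new XMLHttpRequest(); // a simple HTTP GET; POST or fetch() would work as well
    xhr.open('GET', '/api/oncalldetails?' +
      buildCdrQuery({ caller: caller, called: called, duration: duration }), true);
    xhr.send();
  });
}
```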
For auto-provisioning from a server side application, you can create an API to return all the webphone parameters (settings) and set the “scurl_setparameters” setting to this API URL.
You can integrate the webphone with your server code using your custom HTTP (AJAX) API URI’s.
Just set one or more of the following settings to point to your server application HTTP API entries which will be called automatically as the webphone state machine changes:
· scurl_onstart: will be called when the webphone is starting
· scurl_onoutcallsetup: will be called on outgoing call init
· scurl_onoutcallringing: will be called on outgoing call ring
· scurl_onoutcallconnected: will be called on outgoing call connect
· scurl_onoutcalldisconnected: will be called on outgoing call disconnect with call details (CDR)
· scurl_onincallsetup: will be called on incoming call
· scurl_onincallringing: will be called on incoming call ring
· scurl_onincallconnected: will be called on incoming call connect
· scurl_onincalldisconnected: will be called on incoming call disconnect with call details (CDR)
· scurl_oninchat: will be called on incoming instant message
· scurl_onoutchat: will be called on outgoing instant message
· scurl_setparameters: will be called after the "onStart" event and can be used to provision the webphone from a server API. The answer should contain parameters as key/value pairs, ex: username=xxx,password=yyy
· scurl_displaypeerdetails: will be called at the beginning of incoming and outgoing calls to return details about the peer from your server API (like full name, address or other details from your CRM). It will be displayed at the location specified by the “displaypeerdetails” parameter. You can return any string as clear text or html which can be displayed as-is.
For example: scurl_onoutcallsetup:
(Your API will be called each time the webphone user makes an outgoing call and the parameters in uppercase will be replaced at runtime in the same way as described for the links setting)
For API requests, the webphone will try to fetch the result using the following techniques (first available): AJAX/XHTTP, CORS, JSONP and websocket (if available).
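For example, using the built-in HTTP API integration, the disconnect notifications could be configured like this (the URL is a placeholder for your own HTTP API entry):

```javascript
// In webphone_api.js: notify your server API when incoming/outgoing calls disconnect
var parameters = {
    serveraddress: 'voip.mydomain.com',                                       // your SIP server
    scurl_onincalldisconnected: 'https://www.example.com/api/oncalldetails',  // CDR for incoming calls
    scurl_onoutcalldisconnected: 'https://www.example.com/api/oncalldetails'  // CDR for outgoing calls
};
```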
You can use the webphone library to implement your custom click-to-call solution or use one of the skin templates for click to call.
The VoIP web SIP phone browser plugin doesn’t depend on any framework and can be integrated with any system or CRM with JavaScript binding support. Usually you just need to include webphone_api.js, set the basic parameters in webphone_api.js (such as the serveraddress/username/password to be used, though these can also be passed via the API) and just use the call() function to make outgoing calls. Incoming calls are handled automatically, and with a few more API calls you can easily implement features such as call transfer, conference, chat, DTMF or video calls.
For example here is a tutorial for Salesforce webphone integration in case you are interested in this platform, or some details about VoIP callcenter integration.
Consult your CRM documentation to find the details about integrating third-party general modules (even better if it has an interface specific to third-party phone integrations). Contact us if you need help with any integration.
This section is for JavaScript developers. You can use this webphone also without any JavaScript skills:
· If you don’t have any programming skills: customize and use the included turn-key templates (for example the “softphone.html”) on your website.
· If you are a server-side developer not comfortable with JS: take advantage of the server integration capabilities
Developers can use the webphone as an SDK (JavaScript library) to create any custom VoIP solution, standalone or integrated in any webpage or web application.
First of all you should deploy the webphone on your webserver (Copy the webphone folder to your webserver and adjust any settings you might need according to your SIP server). You can also launch it from local file system on your dev environment (works with some limitations), but better if you use it from a webserver.
The library parameters can be preconfigured in webphone_api.js, changed runtime from JavaScript, passed by URL parameters or set dynamically by any server side script such as PHP, .NET, java servlet, J2EE or Node.js.
The webphone doesn’t require any extra client or server side framework (it is a client side VoIP implementation which can be used from simple JavaScript) however you are free to use your own favorite framework or libraries to interact with the web phone (for example use with jQuery on the client side or integrate into your PHP/.ASP/.NET/J2EE/NodeJS or other server side framework or use it straight without any frameworks involved).
The downloadable demo version has some limitations to disable commercial usage, however if your development process is affected by these then you can request a trial from mizutech with all demo limitation removed.
The public JavaScript API can be found in "webphone_api.js" file, under global javascript namespace "webphone_api".
Just include the "webphone_api.js" to your project or html and start using the webphone API.
The API reference can be found here.
A minimal implementation can be achieved with less than 5 lines of code
on your website. See the minimal_example.html (found in the webphone package)
Example:
<head>
<!-- Include the webphone_api.js to your webpage -->
<script src="webphone_api.js"></script>
</head>
<body>
<script>
//Wait until the webphone is loaded, before calling any API functions
webphone_api.onLoaded(function () {
//Set parameters (replace the uppercase words with your settings)
//Alternatively these can also be preset in your webphone_api.js file or passed as URL parameters
webphone_api.setparameter('serveraddress', SERVERADDRESS);
webphone_api.setparameter('username', USERNAME);
webphone_api.setparameter('password', PASSWORD);
//See the “Parameters” section below for more options
//Start the webphone (optional but recommended)
webphone_api.start();
//These API calls below should actually be placed behind separate functions (button clicks):
webphone_api.call('1234'); //make a call to the number 1234
});
</script>
</body>
See the html files in the webphone folder for more examples.
o a very simple but functional basic example can be found in the webphone package: basic_example.html
o as a better example, see the tech demo page (techdemo_example.html / techdemo_example.js).
o a basic example for incoming calls is implemented in the incoming_example.html
o click2call.html is a ready to use click to call implementation
o softphone.html implements a fully featured browser softphone
You can also try the same examples from our online demo.
You are free to use/modify any of these file and adjust it after your needs or create your own solution from scratch.
For a general implementation we would recommend starting with the “techdemo_example” and modifying/improving it to your needs.
Most of the traditional VoIP functionalities (in/out calls, chat, call divert) can be handled very easily with the webphone, however some advanced features might require special care if you wish to interact with them.
Lots of things can be achieved by the webphone parameters, without the need of any programming effort.
Here are some examples for advanced usage:
o settings/auto-provisioning: it can be done easily with the setparameter API, but you might have special needs which would require passing the parameters in a special way. See the beginning of the parameters section for the possibilities and some more in the FAQ.
o multiple lines: handled automatically but you might need to handle it explicitly if required for your project
o low-level engine messages: this is only for advanced users and rarely needed to intercept these messages. You might use the getEvents callback for this but it is recommended to use the others such as the onCallStateChange to handle the web VoIP phone events.
o low-level interaction with the native engines: if you have some extra requirements which are not covered by this high-level API then you might use the low-level jvoip API with the NS and Java engines
o dtmf, call transfer, hold, forward: we took special care to make these as simple to use as possible so all of these can be handled by a single API call
o conference: handled automatically by default via a single API call but optionally you might implement some specific user interface to display all parties
o parameter encryption/obfuscation: usually not required since you are working with them in the otherwise secure user session, but if you wish to use it then it is described here
o video: you must provide an HTML element where the video has to be displayed and manage this GUI accordingly: <div id="video_container"></div>
o chat, sms: these also require some extra user interface to send the messages and display the message history
o manipulating SIP messages: requires some VoIP/SIP skills if you need to interact this way with your VoIP server; use the setsipheader/getsipheader APIs
Note: all of these are implemented in the “softphone” skin which is included with the webphone, so you might use/modify this skin if you need a complete softphone-like solution instead of developing your own from scratch (if you don’t have specific requirements which can’t be handled by customizing the softphone skin).
For more details, see the “JavaScript API” section below in this documentation.
The parameters can be used to customize the user interface or control the settings like the SIP server domain, authentication, called party number, autodial and many others.
Most of the settings are optional except the "serveraddress" (but also this can be provided at runtime via the API).
The other important parameters are the SIP user credentials (username, password) and the called number (callto) which you can also preset (for example if you wish to implement click to call) however these are usually entered by user (and optionally can be saved in local cookie for later reuse).
The webphone parameters
can be set in multiple ways (statically and dynamically) to allow maximum flexibility and
ease the usage for any work-flow.
Use one (or more) of the following methods for the webphone configuration:
· Preset the settings in the "webphone_api.js" file, under the "parameters" variable (a JavaScript object at the beginning of the file)
· Use the setparameter() API call from JavaScript (Other function calls might also change settings parameters)
· Webpage URL query string (The webphone will look at the embedding document URL at startup. Prefix all keys with “wp_”. For example &wp_username=x or any other parameter specified in this documentation)
· Via the scurl_setparameters settings which can load the parameters from your server side application (This will be called after "onStart" event and can be used to provision the webphone from server API. The answer should contain parameters as key/value pairs, ex: username=xxx,password=yyy)
· Cookies (prefix all keys with “wp_”. For example wp_username)
· SIP signaling (sent from server) with the x-mparam header (or x-mparamp if need to persist). Example: x-mparam=loglevel=5;aec=0
· Auto-provisioning: the browser phone is also capable of downloading its settings from a config file based on a user entered OP CODE (although this way of configuration is a bit redundant for a web app, since you can easily create different versions of the app already preconfigured for your customers, for example by deploying it in different folders, and provide a direct link to the desired version instead of asking the users to enter an additional OPCODE)
· User input: you can let the user modify the settings, for example to enter the username/password for SIP authentication. (Using the softphone skin, most of the settings can be specified by the users, which might overwrite server side settings loaded from the webphone_api.js file.)
Any of these methods can be used or they can be even mixed.
The quickest and easiest way to start is to just set all the required parameters in the webphone_api.js file. For example:
var parameters = {
serveraddress: 'voip.mizu-voip.com', //your SIP server URI (or IP:port)
username: 'webphonetest1', //the username is usually specified by the end user and does not need to be set here
password: 'webphonetest1', //the password is usually specified by the end user and does not need to be set here
displayname: 'John Smith', //optional display name
brandname: 'BestPhone', //your brand name
rejectonbusy: true, //will reject incoming call if user already in call
ringtimeout: 50, //disconnect the call after 50 sec on no answer
loglevel: 5, //enable detailed logs
};
Usually you set some parameters in the webphone_api.js file above (the common parameters applicable for all users), then use one of the other methods to specify instance specific parameters (for example user credentials for auto login).
Note:
· For a basic usage you will have to set only your VoIP server ip or domain name (“serveraddress” parameter).
The SIP username/password are asked from the user with the default skins if not preconfigured.
The rest of the parameters are optional and should be changed only if you have a good reason for it.
· Some parameters (username/password, displayname) are usually set by the user via some user interface (using the setparameter() API), however in some situations you might hardcode them in the server side webphone_api.js file, for example if you have some static IVR service and the caller user identity doesn’t matter.
· All parameters can be passed as strings and will be converted to the proper type internally by the webphone browser plugin.
· Don’t remove or comment out already set parameters, because the old value might already be cached by the browser webphone. Instead just set the parameter to “NULL”/”DEF” or its default value. Details here.
· Prefix parameter name with “ucfg_” if it should prefer client side settings (otherwise server side settings defined in the webphone_api.js will overwrite the client settings). Example: ucfg_aec: 2
· Parameters can be also encrypted or obfuscated. See the “Parameter security” section for the details.
Credentials and other SIP parameters:
serveraddress (string)
The address of your SIP server (domain or IP + port).
It can be specified as IP address or as A or SRV domain name.
Specify also the port if your server is not using the default 5060; in this case append the port after the address, separated by a colon.
Examples:
mydomain.com (this will use the default SIP port: 5060)
sip.mydomain.com:5062
10.20.30.40:5065
This is the single most important parameter (along with the username/password but those can be also entered by the user).
Default value is empty.
username (string)
This is the SIP username (used for authentication and as A number/Caller-ID for the outgoing calls).
Default value is empty.
Note:
o The username/password parameters are usually supplied by the user (via some user interface, then calling the setparameter() API), however in some cases you might just set them statically in the webphone_api.js file (when the caller user credentials don’t matter). See more here.
o Even if you don’t need a username and/or your server accepts all calls without authentication, you must still set the username to some value: the “anonymous” username might be used in this case
o If you set the username setting to “Anonymous” then the username input box will be hidden on the “softphone” skin settings and login screens
o If you wish to set a separate caller-ID you can use this parameter to specify it, and then use the ”sipusername” parameter to specify the username used for authentication, as specified in the SIP standards. However please note that most SIP servers can also treat the below mentioned “displayname” parameter as the caller-ID, so the usage of separate username/sipusername is usually unnecessary and confusing. See more details here.
password (string)
SIP authentication password.
Default value is empty.
Note:
o Make sure to never hardcode the password in html or set it via insecure http. See more details here about security.
o You can use the webphone also without a password (if you are not using a server or your server doesn’t authenticate the users). In this case you can set the password to any value, since it will not be required for calls or registrations
o If your IP-PBX accepts blind registrations and/or calls then the value of the password doesn’t matter (it will not be used anyway)
o If you set the password setting to “nopassword” then the password input box will be hidden on the “softphone” skin settings and login screens
o If your IP-PBX doesn’t require registrations or you are not using any server then you should set the “register” setting to 0
displayname (string)
Optional SIP display name.
Specify default display name used in “from” or “contact” SIP headers.
Default value is empty (the “username” field will be displayed for the peers).
realm (string)
Optional parameter to set the SIP realm if not the same with the serveraddress or domain.
Rarely required (only if your VoIP server has a different realm setting than its domain and it strictly enforces that realm).
Default value is empty. (By default the serveraddress will be used without the port number)
(string)
Outbound SIP proxy address (Examples: mydomain.com, proxy.mydomain.com:5065, 10.20.30.40:5065)
Leave it empty if you don’t have a stateless proxy. (Use only the serveraddress parameter)
Default value is empty.
register (number)
With this parameter you can set whether the softphone should register (connect) to the sip server.
0: no (the webphone will not send REGISTER requests)
1: auto guess (yes if username/password are preset, otherwise no)
2: yes (and must be registered before to make calls)
Default value is 1.
(number)
Registration interval in seconds (used by the re-registration expires timer).
Default value is 120 or 300 depending on the circumstances.
This is important so that SIP servers can detect unexpected termination of the webphone application or webpage (such as killing the browser, power loss or others): if this time expires without a new re-registration from the client, the server will know that the client is no longer alive.
Note: we don’t recommend setting the re-register interval below 30 seconds (it just causes unnecessary server load and some servers don’t accept such short re-registration periods). Also you should not set it longer than 3600 seconds (one hour).
(string)
Specify the voicemail number (which the user can call to hear its own voicemails) if any.
Most PBX servers will automatically send the voicemail access number so usually this is detected automatically.
Default value is empty (auto-detect).
(string)
The webphone can initiate a call on startup if this is set. It can be used to implement click to call or similar functionality.
Can be any phone number, username or SIP URI acceptable by your VoIP server.
Default value is empty.
(number)
Useful for click-to-call to specify what to do if you pass the “callto” parameter
0: nothing (do nothing, just preset the destination number; the user will have to initiate the call/chat)
1: call (default. Will auto start the call to “callto”)
2: chat (will show the chat user interface presenting a chat session with “callto”)
3: video call (will auto start a video call)
Note: the other SIP related settings can be found in the “Engine related settings” below (such as dtmfmode or codec).
Library and engine related settings:
By default the webphone will choose the “best” suitable engines automatically based on OS/browser/server support. This algorithm is optimized for all OSes and browsers, so you can be sure that your users will have the best experience with the default settings. However, if you wish, you can influence this engine selection algorithm by setting one or more of the following parameters:
· enginepriority_java
· enginepriority_webrtc
· enginepriority_ns
· enginepriority_flash
· enginepriority_app
· enginepriority_p2p
· enginepriority_accessnum
· enginepriority_nativedial
Possible values:
0: Disabled (never use this engine)
1: Lower (decrease the engine priority)
2: Normal (default)
3: Higher (will boost engine priority)
4: Highest (will use this engine whenever possible)
5: Force (only this engine will be used)
For example if you wish to prioritize the NS engine, just set: enginepriority_ns=3
The engines also have a built-in default priority number assigned, which can range from 0 to 100. You can also change these values with the enginedefpriority_ENGINENAME settings.
Default values:
enginedefpriority_java: 32
enginedefpriority_webrtc: 20
enginedefpriority_flash: 13
enginedefpriority_ns: 30
enginedefpriority_app: 10
enginedefpriority_p2p: 5
enginedefpriority_callback: 5
enginedefpriority_nativedial: 3
Even if you have a favorite engine, you should not disable the others. Just set your favorite engine’s priority to 3 or 4. This way even end users who cannot run your favorite engine might be able to make calls with another engine.
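For example, boosting the NS engine while keeping the others available could look like this sketch (the enginepriority_* names are taken from the list above; the validity check is only illustrative):

```javascript
// Engine priority values range from 0 (disabled) to 5 (force).
// Prefer the NS engine without disabling the others, as recommended above.
const engineConfig = {
  enginepriority_ns: 3,      // boost NS
  enginepriority_webrtc: 2,  // leave the rest at normal priority
  enginepriority_java: 2,
  enginepriority_flash: 2
};

// Reject out-of-range values so a typo cannot silently disable an engine.
function isValidEnginePriority(v) {
  return Number.isInteger(v) && v >= 0 && v <= 5;
}
```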
(string)
Optional setting to indicate the domain name or IP address of your websocket service used for WebRTC if any (your server address and websocket listen port).
Examples:
ws://mydomain.com
ws://10.20.30.40:5065
wss://asterisk.mydomain.com:8088/ws
wss://sip.mydomain.com:8080
Default value is empty (which means auto service discovery; if no WebRTC service is found, then the mizu WebRTC service can be used if accessible).
Note: latest Chrome and Opera require secure websocket (wss). For this you will need to install an SSL certificate for your WebRTC server and set this parameter with the domain name (not the IP address). This is needed only if your VoIP server is WebRTC capable or you have your own WebRTC to SIP gateway. Otherwise no changes are required.
More details about webrtc can be found in the FAQ.
(string)
Optional setting to indicate the address (domain name or IP address + port number) of your flash service if any (flash media + RTMP). If not set, then the mizu flash to sip service might be used (rarely used in normal circumstances). Format: yourdomain.com:rtmpport
Example: 10.20.30.40:5678
Default value is empty.
(string)
STUN server address in address:port format (RFC 5389)
You can set to “null” to completely disable STUN.
Examples:
11.22.33.44:3478
mystunserver.com:3478
null
By default (if you leave this setting unchanged) the webphone will use the Mizutech STUN servers (unlimited free service for all webphone customers). You can change this to your own STUN server or use any public server if you wish.
Note: if you set an incorrect STUN server, then the symptoms are extra delays at call setup (up to “icetimeout”).
(string)
TURN server address in address:port format (RFC 5766)
You can set to “null” to completely disable TURN.
Examples:
11.22.33.44:80
mystunserver.com:80
null
TURN is required only if the webphone cannot send the media directly to the peer (which is usually your VoIP server) and your server doesn’t support TCP candidates, for example if all UDP is blocked or only TCP 80 is allowed, or if you need peer-to-peer media via a TURN relay.
By default (if you leave this setting unchanged) the webphone can use the Mizutech TURN servers. If you wish, you can deploy your own TURN server using the popular open source coturn server. The MizuTech WebRTC to SIP gateway also has its own built-in TURN server.
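Both the STUN and TURN settings above take an "address:port" string or the literal "null". A sketch of a validator for that format (hostname checking is intentionally minimal and only illustrative):

```javascript
// Accepts "address:port" (e.g. "11.22.33.44:3478", "mystunserver.com:3478")
// or the literal "null" which disables the service, as documented above.
function isValidIceServerSetting(value) {
  if (value === 'null') return true; // explicit opt-out
  const m = /^([A-Za-z0-9.-]+):(\d{1,5})$/.exec(value);
  if (!m) return false;
  const port = Number(m[2]);
  return port >= 1 && port <= 65535; // valid TCP/UDP port range
}
```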
(string)
Any TURN URI parameter.
Example: transport=tcp
(string)
Username for turn authentication.
(string)
Password for turn authentication.
(number)
Timeout for ICE address gathering (STUN/TURN/others) in milliseconds.
Default is 3000 (3 seconds).
You might increase it in special circumstances if you are using a slow STUN/TURN server, or decrease it if the peer address is public (for example if your SIP or WebRTC server is on a public IP and always routes the media, so calls will work without STUN).
(number)
Try to auto-detect webrtc address if not set (if the active SIP server has built-in WebRTC capabilities)
0: no
1: yes
Default is 1.
(boolean)
Offer a native softphone to install if no suitable engine is found.
This is useful for browsers which don’t have any built-in capability for VoIP and don’t allow external plugins, such as Safari on iOS.
Download links can be configured with "android_nativedialerurl" and "ios_nativedialerurl" listed below, otherwise the default will be used (auto provisioned apps from Mizutech with your branding, customization and settings as you define them for the webphone).
Default is true.
(string)
Android native softphone download URL if any. (Optional setting to allow alternative softphone offer on Google Play).
Note: Android browsers also support WebRTC, so this might be selected only on old phones or if you disable WebRTC.
Default is empty (which means the default app).
(string)
iOS native softphone download URL if any. (Optional setting to allow alternative softphone offer on Apple App Store).
Safari on iOS doesn’t offer any plugin for VoIP, so the webphone can use its native softphone (which will be auto provisioned from your webphone settings).
Default is empty (which means the default app).
(string)
Set this if your IP-PBX has an access number which users can call from the PSTN and which forwards their call over VoIP (an IVR asking for the target number).
This can be used when no other engine is working (no suitable environment, no internet connection).
Default is empty.
(string)
Set this if your server has a callback access number which users can ring to receive a call back from your server (possibly with an IVR which might offer the possibility to specify the destination number via DTMF).
This can be used when no other engine is working (no suitable environment, no internet connection) and it is very useful in situations where a call from the server is cheaper than a call to the server.
Default is empty.
(string)
This will overwrite the default User-Agent setting.
Do not set this when used with mizu VoIP servers because the server detects extra capabilities by reading this header.
Default is empty.
(string)
Set a custom sip header (a line in the SIP signaling) that will be sent with all messages. Can be used for various integration purposes and usually has a key:val format. For example: myheader:myvalue.
Custom SIP headers should begin with “X-“ so they can pass through servers, gateways and proxies (For example: X-MyHeader: 47).
You can add more than one header, separated by semicolon (For example: customsipheader: 'x-key1: val1;x-key2: val2',).
Default is empty.
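The multi-header format above ('x-key1: val1;x-key2: val2') can be split into individual SIP header lines with a helper like this sketch:

```javascript
// Splits a customsipheader value of the form 'x-key1: val1;x-key2: val2'
// into individual "key: value" header lines for the SIP message.
function splitCustomSipHeaders(setting) {
  return setting
    .split(';')                                    // one header per segment
    .map(h => h.trim())
    .filter(h => h.length > 0 && h.includes(':')); // drop empty/malformed parts
}
```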
(number)
Specify whether calls should fail or succeed also without a microphone device.
0: no (calls will be allowed even if the client doesn’t have any microphone audio device)
1: with warning (call will be allowed but a warning message will be displayed for the user)
2: yes (calls will fail if user doesn’t have a microphone device)
Default is 1.
(number)
Transport protocol for native SIP.
0: UDP (User Datagram Protocol. The most commonly used transport for SIP)
1: TCP (signaling via TCP. RTP will remain on UDP)
2: TLS (encrypted signaling)
3: HTTP tunneling (both signaling and media. Supported only by mizu server or mizu tunnel)
4: HTTP proxy connect (requires tunnel server)
5: Auto (automatic failover from UDP to HTTP if needed)
Default is 0.
Note: this will not affect WebRTC since webrtc transport is controlled by the browser: http/https, websocket/secure websocket (ws/wss) and DTLS/SRTP for the media.
(String)
Specify local IP address to be used.
This should be used only on devices with multiple ethernet interfaces to force the specified IP.
Default is empty (autodetect)
Note: This setting is not applicable for WebRTC (In case of WebRTC this is handled entirely by the browser internal WebRTC stack)
(number)
Specify local SIP signaling port to use.
Default is 0 (a stable port which is selected randomly at the first usage)
Note: This is not the port of your server where the messages should be sent. This is the local port of the signaling socket.
Note: This setting is not applicable for WebRTC (In case of WebRTC this is handled entirely by the browser internal WebRTC stack)
(number)
Specify local RTP port base.
Default is 0 (which means signalingport + 2)
Note: If not specified, then VoIP engine will choose signalingport + 2 which is then remembered at the first successful call and reused next time (stable rtp port). If there are multiple simultaneous calls then it will choose the next even number.
Note: This setting is not applicable for WebRTC (In case of WebRTC this is handled entirely by the browser internal WebRTC stack)
(boolean)
Send rtp even if muted (zeroed packets)
Set to true only if your server is malfunctioning when no RTP is received.
Default value is false.
(number)
Media encryption method
0: not encrypted (default)
1: auto (will encrypt if initiated by other party)
2: SRTP
Default is 0.
Note: this will not affect WebRTC since WebRTC always uses DTLS/SRTP for the media.
(number)
DTMF send method
· 0: disabled
· 1: sip INFO method
· 2: auto detect (RFC2833 in the RTP if RTP stream is working and peer announced telephone-event payload), otherwise it will send (also) SIP INFO.
· 3: both INFO and RFC2833
· 4: RFC2833 (will not send SIP INFO even if there is no RTP stream negotiated)
Default is 2.
Note:
Received DTMF is recognized by default in both INFO and RFC2833 formats (no in-band DTMF processing)
You can also use the “inbounddtmf” and “outbounddtmf” settings to suggest server-side DTMF types. These can be set only to 1 or 2.
(number)
Specify whether the webphone should generate local DTMF tone when DTMF is sent.
0=no
1=if one digit
2=always (also when multiple digits are sent at once)
Default is 1.
(number)
Start to send media when session progress is received.
0: no
1: reserved
2: auto (will early open audio if wideband is enabled to check if supported)
3: just early open the audio
4: null packets only when sdp received (NS only)
5: yes when sdp received
6: always forced yes
Default is 2.
(string)
Set your preferred audio codec. Will accept one of the following: pcmu, pcma, g.711 (for both PCMU and PCMA), g.729, gsm, ilbc, speex, speexwb, speexuwb, opus, opuswb, opusuwb, opusswb
Default is empty which means the built-in optimal prioritization.
By default the engine will present the codec list optimized for the circumstances (a combination of the following):
· available client codec set (not all engines support all codecs)
· server codec list (depending on your server, peer device or carrier)
· internal/external call: for IP to IP calls wideband codecs will be prioritized if possible, while for outbound calls usually G.729 will be selected if available
· network quality (bandwidth, delay, packet loss, jitter): for example iLBC is more tolerant to network problems if supported
· device CPU: some old mobile devices might not be able to handle high-complexity codecs such as Opus or G.729; G.711 and GSM have low computational costs
You can also fine-tune the codec settings with the use_xxx settings where xxx is the codec name as described in JVoIP documentation.
(string)
List of allowed audio codecs separated by comma.
By default the webphone will automatically choose the best codec depending on the available codecs, the circumstances (network/device) and peer capabilities.
Set this parameter only if you have some special requirement, such as forcing a specific codec regardless of the circumstances.
Example: Opus,G.729,PCMU (This will disable Speex, GSM, iLBC and PCMA).
Default: empty (which means auto detection and negotiation)
Recommended value: leave it empty
Under normal circumstances, the following is the built-in codec priority:
I. Wideband Speex and Opus (These are set with top priority as they have the best quality. Likely used for VoIP to VoIP calls if the peer also has support for wideband)
II. G.729 (Usually the preferred codec for VoIP trunks used for mobile/landline calls because of its excellent compression/quality ratio for narrowband)
III. iLBC, GSM (If G.729 is not supported then these are good alternatives. iLBC has better characteristics and GSM is better supported by legacy hardware)
IV. G.711: PCMU and PCMA (Requires more bandwidth, but has the best narrowband quality. Preferred for WebRTC if Opus is not supported, as these are present in almost all WebRTC and SIP endpoints and servers)
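The priority list above can be sketched as a filter: given an allowed-codec string such as 'Opus,G.729,PCMU', keep only those codecs, in the built-in order. The lowercase codec names below are illustrative, not the engine's internal identifiers:

```javascript
// Built-in priority as described above: wideband first, then G.729,
// then iLBC/GSM, finally G.711 (PCMU/PCMA).
const DEFAULT_CODEC_ORDER = [
  'speexwb', 'opus', 'g.729', 'ilbc', 'gsm', 'pcmu', 'pcma'
];

// Filters the default order by an allowed-codec list (comma separated).
// An empty setting means "auto": the full default order is used.
function orderAllowedCodecs(allowedCsv) {
  if (!allowedCsv) return DEFAULT_CODEC_ORDER.slice();
  const allowed = allowedCsv.split(',').map(c => c.trim().toLowerCase());
  return DEFAULT_CODEC_ORDER.filter(c => allowed.includes(c));
}
```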
(string)
List of allowed video codecs separated by comma.
You might use this parameter to exclude some codecs from the offer list.
For example if you don’t wish to use VP8, then set this to: “H264, H263”
Default: empty (which means auto detection and negotiation)
Note: common browsers support only H.264 and VP8 for WebRTC, so you should not disable these codecs.
(number)
Enable/disable video.
0: disable
1: enable
2: force always
Note: if you are using the webphone with a custom skin, the video will be displayed in a div with id set to “video_container”, so your html must have this element: <div id="video_container"></div>
(number)
Max bandwidth for video in kbits.
It will be sent also with SDP “b:AS” attribute.
Default is 0 which means auto negotiated via RTCP and congestion control.
(number)
You can suggest the size of the video (in pixels) with the following parameters:
· video_width
· video_height
· video_min_width
· video_min_height
· video_max_width
· video_max_height
(number)
Number of payloads in one UDP packet.
By default it is set to 0 which means 2 frames for G.729 and 1 frame for all other codecs.
(number)
Enable/disable acoustic echo cancellation
0=no
1=yes except if headset is guessed
2=yes if supported
3=forced yes even if not supported (might result in unexpected errors)
Default is 1.
(number)
Automatic gain control.
0=Disabled
1=For recording only
2=Both for playback and recording
3=Guess
Default value is 3
(number)
Although the jitter size is calculated dynamically, you can modify its behavior with this setting.
0=no jitter,1=extra small,2=small,3=normal,4=big,5=extra big,6=max
Default is 3
(number)
Enable/disable presence.
Possible values:
0: disable
1: auto (if presence capabilities detected)
2: always enable / force
(boolean)
Specify whether the webphone stack should be started automatically on page load.
If set to false then the start() method needs to be called manually in order for the webphone to start. Also the webphone will be started automatically on some other method calls such as register() or call().
Default is true.
Note: you can set this to false to prevent the auto initialization of the webphone, delaying it until the user actually wishes to interact with your phone UI (such as pushing your click to call button)
(number)
Tracing level. Values from 1 to 5.
Log level 5 means a full log including SIP signaling.
Log levels above 5 are meant only for Mizutech developers and might slow down the webphone.
Do not set it to 0 because that would also disable the important notifications presented to the users.
More details about logs can be found here.
(boolean)
Specify whether to send logs to console.
true: will output all logs to console (default)
false: will output only level 1 (important events also displayed for the user)
The amount of logs depends on the “loglevel” parameter.
Default is: true
With the NS and Java engines you can also use any parameters supported by the Mizu JVoIP SDK as listed in the JVoIP documentation.
(Unrecognized parameters will be skipped if the WebRTC engine is used)
These parameters are used for call auto-answer, forward, transfer, number rewrite and similar tasks:
(number)
Normalize called phone numbers.
If the dialed number looks like a phone number (at least 5 digits, no a-z, A-Z or @ characters, and length between 5 and 20) then it will drop all special characters, leaving only valid digits (numbers, *, # and + at the beginning).
Possible values:
0: no, don’t normalize
1: yes, normalize (default)
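The normalization rule above can be sketched as follows (an approximation of the documented behavior, not the engine's exact code):

```javascript
// Normalizes a dialed number per the documented rule: only inputs that
// "look like" phone numbers (5..20 chars, at least 5 digits, no letters
// or '@') are stripped down to digits, '*', '#' and a leading '+'.
function normalizeCalledNumber(input) {
  const looksLikeNumber =
    input.length >= 5 && input.length <= 20 &&
    !/[a-zA-Z@]/.test(input) &&
    (input.match(/\d/g) || []).length >= 5;
  if (!looksLikeNumber) return input; // SIP URIs / usernames pass through
  const leadingPlus = input.startsWith('+') ? '+' : '';
  return leadingPlus + input.replace(/[^\d*#]/g, '');
}
```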
(string)
Add any prefix for the called numbers.
Default is empty.
If you need to rewrite numbers on the client side after your dial plan, you can use the numpxrewrite parameter (although this kind of number rewrite is usually done by the server-side dial plan):
You can set multiple rules separated by semicolon.
Each rule has 4 parameters, separated by comma: prefix to rewrite, rewrite to, min length, max length
For example:
‘74,004074,8,10;+,001,7,14;',
This will rewrite the 74 prefix in all numbers to 004074 if the number length is between 8 and 10.
Also it will rewrite the + prefix in all numbers to 001 if the number length is between 7 and 14.
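The example above can be reproduced with a small helper (a sketch of the documented rule format, assuming first-match-wins):

```javascript
// Applies numpxrewrite rules of the form
// "prefix,rewriteTo,minLen,maxLen;prefix,rewriteTo,minLen,maxLen;".
// With '74,004074,8,10;+,001,7,14;' the number 74123456 becomes 004074123456.
function applyNumPrefixRewrite(rules, number) {
  for (const rule of rules.split(';')) {
    if (!rule.trim()) continue; // skip the empty trailing segment
    const [prefix, rewriteTo, minLen, maxLen] = rule.split(',');
    if (
      number.startsWith(prefix) &&
      number.length >= Number(minLen) &&
      number.length <= Number(maxLen)
    ) {
      return rewriteTo + number.slice(prefix.length);
    }
  }
  return number; // no rule matched: leave the number unchanged
}
```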
(string)
Block incoming communication (call, chat and others) from these users. (username/numbers/extensions separated by comma).
Default value is empty.
(string)
Specify a number where incoming calls should be forwarded when the user is already in a call. (Otherwise the new call alert will be displayed for the user or a message will be sent on the JS API)
Default is empty.
(string)
Forward incoming calls to this number if not accepted or rejected within 15 seconds.
Default is empty.
(string)
Specify a number where ALL incoming calls should be forwarded.
Default is empty.
(string)
Specify a number where ALL incoming calls should be transferred to.
This might be used if your server doesn’t support call forwarding (302 answers); otherwise it is better to set this on the server side, because the call will not reach the webphone while it is offline/closed, so it has no chance to forward the call.
Default is empty.
(number)
Set to ignore all incoming calls.
0: don’t ignore
1: silently ignore
2: reject
Default value is 0.
(boolean)
Set to true to automatically accept all incoming calls (auto answer).
Default value is false.
(boolean)
Specify whether to auto accept incoming calls when the user clicks to enable device sharing for WebRTC (the audio device permission browser popup)
-true: accept webrtc call on browser share device click (default)
-false: do nothing (user will have to click the “Accept” button to accept the incoming call or you must call the accept() API)
(number)
Will play a short sound when calls are connected
0: Disabled
1: For auto accepted incoming calls
2: For incoming calls
3: For outgoing calls
4: For all calls
Default value is 0
(number)
Retry the call on failure or no response.
0: no
1: yes
Default value is 1.
(boolean)
Set to true to automatically reject (disconnect) incoming call if a call is already in progress.
Default value is false.
(number)
Specify how to handle a (possibly accidental) second outgoing call to a number where a call is already in progress.
This might happen as a result of API misuse or a user double-clicking the call button.
Set to 1 to reject such second calls.
Set to 0 to disable this verification and allow all calls.
Default is 1.
(number)
Set to 1 to auto-redial on 301/302 call forward.
Set to 0 to disable auto call forward.
Default value is 1.
(number)
Auto Mute/Hold all call legs on conference calls.
0=no
1=yes
Default is 0.
(number)
Specify if other lines will be muted on new call
0=no (default)
1=on incoming call
2=on outgoing call
3=on incoming and outgoing calls
4=on other line button click
Default is 0
(number)
Specify transfer mode for native SIP.
-1=default transfer type (same as 6)
0=call transfer is disabled
1=transfer immediately and disconnect with the A user when the Transf button is pressed and the number entered (unattended/blind transfer)
2=transfer the call only when the second party is disconnected (attended transfer)
3=transfer the call when the VoIP Applet is disconnected from the second party (attended transfer)
4=transfer the call when any party is disconnected, except when the original caller initiated the disconnect (attended transfer)
5=transfer the call when the VoIP Applet is disconnected from the second party. Put the caller on hold during the call transfer (standard attended transfer)
6=transfer the call immediately with hold and watch for notifications (unattended transfer)
Default is -1 (which is the same as 6)
If you have any incompatibility issue, then set this to 1 (unattended is the simplest way to transfer a call and all SIP servers and devices should support it correctly)
Note: only unattended/blind transfer is supported between SIP and WebRTC (if one endpoint is using native SIP while the other is on WebRTC)
(number)
Specify whether replace should be used with transfer, so the old call (dialog) is not disconnected but just replaced.
This way the A party is never disconnected, only the called party is changed. The A party must be able to handle the Replace header for this.
-1=auto
0=no (will create a separate call)
1=yes (smooth transfer, but not supported by some servers)
Default is -1
(number)
Whether to treat session progress (183) responses as ringing (180). This is useful because some servers never send the ringing message, only a session progress, and might not start to send in-band ringing (or an announcement). In these circumstances the webphone can generate a local ringback.
The following values are defined:
0: do nothing (no ringback on session progress message)
Will not call startRingbackTone() on 183 (only for 180)
1: change status to ring
2: start local ring if needed and be ready to accept media (which is usually a ringtone or announcement and will stop the locally generated ringback once media received)
Will call startRingbackTone() on 180 and 183 but stop on early media receive.
3: start media receive and playback (and media recording if the “earlymedia” applet parameter is set)
4: change status to ringing and start media receive and playback (and media recording if the “earlymedia” applet parameter is set to true)
5: play early ringback and don’t stop even if incoming early media starts
Will call startRingbackTone() on 180 and 183 and do NOT stop on early media receive.
Default value is 2.
*Note: in ringing status the webphone is able to generate a local ringback tone. However, with the default settings this locally generated ringtone playback is stopped immediately when media starts to be received from the server (allowing the user to hear the server’s ringback tone or announcements)
(number)
Maximum ring time allowed in millisecond.
Default is 90000 (90 seconds)
You can also set separate ring timeout for incoming and outgoing calls with the “ringtimeoutin” and “ringtimeoutout” settings.
(number)
Maximum speech time allowed in millisecond.
Default is 10800000 (3 hours)
(number)
RTP timeout in seconds to protect against dead sessions.
Calls will be disconnected if no media packet is sent and received for this interval.
You might increase the value if you expect long call hold or one way audio periods.
Set to 0 to disable call cut off on no media.
Default value is 300 (5 minutes)
(string)
You can barge-in or spy on the calls by sending a specific SIP header specified by the “bargeinheader” parameter available for NS and Java engines.
For example if you specify the value as “X-barge: yes”, then when your server sends this in the INVITE, the call will be auto-accepted and hidden joining a conference with all calls made by the user/agent.
Default is empty (disabled).
(string)
Voice record upload URL.
With this setting you can setup VoIP call recording (voice recording).
If set then calls will be recorded and uploaded to the specified ftp or http address in pcm/wave, gsm, mp3 or ogg format.
The files can be uploaded to your FTP server (any FTP server with specified user login credentials) or HTTP server (in this case you need a server side script to save the uploaded data to file using http PUT or multipart/form-data POST)
Default value is empty (no voice call recording).
Example: ftp://user:[email protected]/voice_DATETIME_CALLER_CALLED
You can also suggest a particular file format by appending its extension to the file name (for example .wav or .mp3).
For example: ftp://user:[email protected]/voice_DATETIME_CALLER_CALLED.wav
You can use the following keywords in the file name as these will be replaced automatically at runtime to their respective values:
· DATETIME: will be replaced to current date-time
· DATE: will be replaced to current date (year/month/day)
· TIME: will be replaced to current time (hour/min/sec)
· CALLID: will be replaced to sip call-id
· USER: will be replaced to local user name
· CALLER: will be replaced to caller party name (caller id)
· CALLED: will be replaced to callee party name
· SERVER: the domain or IP of the SIP server
If you set a HTTP URI, then the following headers will be also set in the HTTP PUT or POST: X-type, X-filename, X-user, X-caller, X-called, X-callid and X-server.
Note:
1. You can also use the voicerecord API to turn on/off the voice recording at runtime (if not all calls have to be recorded)
2. Voice call recording usually can be performed also on the server side. Check your PBX/softswitch documentation for this.
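The keyword substitution above might be implemented like this sketch (the exact date/time formatting used by the webphone is not specified here, so the format below is an assumption):

```javascript
// Expands the documented keywords in a recording file name template.
// DATETIME must be replaced before DATE/TIME since it contains both words.
function expandRecordingName(template, ctx) {
  const now = ctx.now || new Date();
  const pad = n => String(n).padStart(2, '0');
  const date = `${now.getFullYear()}-${pad(now.getMonth() + 1)}-${pad(now.getDate())}`;
  const time = `${pad(now.getHours())}-${pad(now.getMinutes())}-${pad(now.getSeconds())}`;
  return template
    .replace(/DATETIME/g, `${date}_${time}`)
    .replace(/DATE/g, date)
    .replace(/TIME/g, time)
    .replace(/CALLID/g, ctx.callid || '')
    .replace(/USER/g, ctx.user || '')
    .replace(/CALLER/g, ctx.caller || '')
    .replace(/CALLED/g, ctx.called || '')
    .replace(/SERVER/g, ctx.server || '');
}
```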
Most of these apply only to the Softphone user interface which is shipped with the webphone to further customize the web softphone user interface and behavior (Softphone.html)
(string)
Brand name of the softphone to be displayed as the title and at various other places such as SIP headers.
Default is empty.
(string)
Your company name to be displayed in the about box and various other places.
Default is empty.
(string)
Displayed on login page.
Can be text or an image name, ex: "logo.png" (image must be stored in images/folder)
Default is empty.
(number)
You can easily change the skin of the supplied user interfaces with this setting (softphone, click to call).
Possible values:
1. Default
2. Light Blue
3. Light Green
4. Light Orange
5. Light Purple
6. Dark Red
7. Yellow
8. Blue
9. Purple
10. Turquoise
11. Light Skin
12. Green Orange
Default is 0.
More details about design changes.
Set the language for the user interface.
Two character language code (for example en for English or it for Italian).
More details about localization can be found in the FAQ.
(number)
User interface complexity level.
0=minimal
5=reduced
10=full (default)
15=more (for tech user)
You might set to 5 for novice users or if only basic call features have to be used.
Default is 10.
(number)
This can be used to hide the server address setting from the user if you already preconfigured the server address in webphone_api.js (the “serveraddress” config option), so end users only have to type their username/password to use the softphone.
Possible values:
0: no (will hide the server input setting from the end users)
1: auto (default)
2: yes (will show the server input setting for the end users)
(number)
Whether to use a simplified login page with username/password in the middle (instead of list style settings; old haveloginpage).
Possible values:
-1: auto (will auto set to 1 if featureset is Minimal, otherwise 0)
0: no
1: only at first login
2: always
(number)
0: Auto guess or Ask
1: SMS only
2: Chat only
Default is 0.
(number)
Define how to handle incoming chat messages.
0: open/show chat window if not in call
1: just set a notification
Default is 0.
(number)
Enable/disable conference room feature.
0: disabled
1: enabled (if supported by the server)
Default is 1.
(string)
Can be used to add call park and call pickup (the code will be sent as DTMF for call park, and the user needs to call the pickup number later to reload the call from the same or another device).
If set, then it will be displayed on the call page as an extra option.
(boolean)
Enable/disable the time counter during ring-time.
(boolean)
Set to true to enable file transfer.
(string)
HTTP URI used for file transfer. By default Mizutech service is used which is provided for free with the web softphone.
(number)
Show notifications in phone notification bar (usually on the top corner of your phone).
0:Never
1:On event
2:Always
Default is 1.
(boolean)
Always display volume controls when in call.
Default is false.
(boolean)
Always display audio device when in call.
Default is false.
(string)
Specify where to display the information returned by scurl_displaypeerdetails.
It can be used to display details about the peers from your CRM, such as full name, address or other details.
(Useful in call-centers and for similar usage)
Possible values:
0: show on call page (instead of contact picture)
1: on new page
div id: display on the specified DIV element
(number)
Whether to (automatically) add new unknown called numbers to your contact list.
0:No
1:Ask
2:Yes (will not ask for a contact name)
Default is 1.
(boolean)
Whether to display a popup about incoming calls in certain engines.
Set to false to disable (in this case make sure that you handle the incoming call alert from your HTML/JS if required).
Default is true.
(string)
Header text displayed for users on top of softphone windows.
Default is empty.
(string)
Footer text displayed for users on the bottom of softphone windows.
Default is empty.
(string)
Version number displayed for users.
Default is empty (will load the built-in version number)
(string)
Display custom popup for user once.
Default is empty.
(number)
This is to allow contact synchronization between mobile and desktop.
-1=don't show
0=show Sync option in menu and Contacts page (if no contacts available)
1=show in menu only
Default is 1
(string)
Set one or more contacts to be displayed by default in the contact list.
Name and number separated by comma and contacts separated by semicolon:
Example: defcontacts: 'John Doe,12121;Jill Doe,231231'
(string)
List of settings options and features to be disabled or hidden.
To disable entire features, use the upper case keywords such as CHAT,VIDEO,VOICEMAIL,CONFERENCE.
To disable settings, use the setting label or name such as Audio device, Call forward.
Example: disabledsett: 'theme,email,Call forward,callforwardonbusy,callforwardonnoanswer,callforwardalways,VIDEO'
(string)
List of settings options to be disabled or hidden when using the softphone skin.
Example: hidesettings: 'theme,email,callforwardonbusy,callforwardonnoanswer,callforwardalways,autoaccept,autoanswer_forward,forward,autoignore'
(string)
Custom parameters can be set as a key-value pair list, separated by semicolons. Example: displayname=John;
Default is empty.
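A sketch of parsing such a key-value list (the displayname key comes from the example above; the logic is only illustrative):

```javascript
// Parses a key-value pair list such as 'displayname=John;ringtone=classic'
// into a plain object. Empty or malformed segments are skipped.
function parseCustomParameters(setting) {
  const out = {};
  for (const pair of setting.split(';')) {
    const eq = pair.indexOf('=');
    if (eq > 0) out[pair.slice(0, eq).trim()] = pair.slice(eq + 1).trim();
  }
  return out;
}
```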
(number)
Specify allowed actions on the logs page.
0: no options (users will still be able to copy-paste the logs)
1: upload (default)
2: email launch (the email address set by the “supportmail” parameter or [email protected] if not set)
(strings)
The webphone GUI can load additional information from your web server application or display some content from your website internally in a WebView or frame. You can integrate the included softphone user interface with your website and/or VoIP server HTTP API (if any) by using the following parameters:
· advertisement: Advertisement URL, displayed on bottom of the softphone windows.
· supportmail: Company support email address.
· supporturl: Company support URL.
· newuser: New user registration http request OR link (if API then suffix with star *)
· homepage: Company home page link.
· accounturi: Company user account page link.
· recharge: Recharge http request (pin code must be sent) or link.
· p2p: Phone to phone http request or link.
· callback: Callback http request or link.
· sms: SMS http request.
· creditrequest: Balance http request, result displayed to user.
· ratingrequest: Rating http request, result displayed for user on call page.
· helpurl: Company help link.
· licenseurl: License agreement link.
· extramenuurl: Link specifying custom menu entry. Will be added to main page (dialpad) menu.
· extramenutxt: Title of custom menu entry. Will be added to main page (dialpad) menu.
Parameters can be treated as API requests (specially interpreted) or links (to be opened in built-in webview). For http API request the value must begin with asterisk character: "*...." For example if the "newuser" is a link, then it will be opened in a browser page; if it's an API http request (begins with *), then a form will be opened in the softphone with fields to be completed.
o The followings are always treated as API request: creditrequest, ratingrequest
o The followings can be links OR API http requests: newuser, recharge, p2p, callback, sms
o The rest will be treated always as links (opened in built-in webview or separate browser tab)
You can also use keywords in these settings strings which will be replaced automatically by the web softphone. The following keywords are recognized:
o DEVICEID: unique identifier for the client device or browser
o SESSIONID: session identifier
o USERNAME: SIP account username (preconfigured or entered by the user)
o PASSWORD: sip account password
o CALLEDNUMBER: dialed number
o PEERNUM: other party phone number or SIP uri
o PEERDETAILS: other party display name and other available details
o DIRECTION: 1=outgoing call, 2=incoming call
o CALLBACKNR,PHONE1, PHONE2: reserved
o PINCODE: reserved. will be used in some kind of requests such as recharge
o TEXT: such as chat message
o STATUS: status messages: onLoad, onStart, callSetup, callRinging, callConnected, callDisconnected, inChat, outChat
o MD5SIMPLE: md5 (pUser + ":" + pPassword)
o MD5NORMAL: md5 (pUser + ":" + pPassword+":"+randomSalt)
o MD5SALT: random salt
Example credit http request: USERNAME
(Where “USERNAME” will be dynamically replaced with the currently logged in username)
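The keyword replacement behaves like a simple template substitution over the configured request URL. A sketch of the same logic (expandKeywords is a hypothetical helper and example.com is a placeholder address, not a real endpoint):

```javascript
// Sketch of the keyword substitution the webphone performs on the
// configured request URLs (expandKeywords is a hypothetical helper).
function expandKeywords(template, values) {
    var result = template;
    Object.keys(values).forEach(function (key) {
        // Replace every occurrence of the keyword with its value
        result = result.split(key).join(values[key]);
    });
    return result;
}

var url = expandKeywords('https://example.com/credit?user=USERNAME&dir=DIRECTION',
                         { USERNAME: 'john', DIRECTION: '1' });
// url -> 'https://example.com/credit?user=john&dir=1'
```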
Parameters are safe by default since they are used only in the user http session. This means that the enduser can discover its own settings including the password, but other users (including other users of the same browser, or middle-men such as the ISP) will not be able to see the sensitive parameters if you are using secure http (HTTPS).
The only sensitive parameter is the SIP account “password”! (This is sent only as a digest hash in the signaling, but make sure to never display or log it from your code.)
Make sure to never hardcode it into your website (it should not be found if you check the source of your webpage in the browser; the only exception would be if you offer some free to call service which is not routed to outside paid trunks/carriers). If the password has to be preconfigured then load it via an ajax call or similar method; just make sure to use HTTPS in this case, because otherwise all the communication between the browser and your server is in clear text if the page is running on unsecure HTTP. Otherwise just let the endusers enter their password on a login/settings form and pass it to the webphone with the setparameter() API call.
There is not much reason to try to obfuscate or hide the other parameters.
For example the “serveraddress” can be discovered anyway by analyzing the low level network traffic and this is perfectly normal. Most of the other parameters are completely irrelevant. Some sensitive information (such as the user contact list) is also managed by the webphone, however it is stored only locally in the browser secure web storage or secure cookie by default (on HTTPS) and further encrypted or obfuscated by the webphone.
The following methods can be used to further secure the webphone usage:
-set the loglevel to 1 (with loglevel 5 the password might be written in the logs)
-don’t hardcode the password if possible (let the users enter it) or if you must hardcode it then use encryption and/or obfuscation
-restrict the account on the VoIP server (for example if the webphone is used as a support access, then allow to call only your support numbers)
-instead of the password, use the MD5 and the realm parameters if possible (these can also be passed in encrypted format to be more secure)
-instead of preconfigured parameters you can use the javascript VoIP api (setparameter)
-use https (secure http / TLS)
-for parameter encoding (encryption/obfuscation) you can use XOR + base64 with your built-in key (ask Mizutech), prefixed with the “encrypted__3__” string (you can verify your encryption with this tool by selecting XOR Base64 Encrypt)
-secure your VoIP server (account limits, rate-limits, balance limits, fraud detection) and follow the VoIP security best practices. For example here you can find some details about mizu VoIP server security.
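A minimal sketch of the XOR + Base64 scheme with the “encrypted__3__” prefix mentioned above. The real webphone expects your built-in key received from Mizutech; 'DEMOKEY' is only a placeholder, so treat this as an illustration of the technique, not as a drop-in implementation:

```javascript
// Illustration only: XOR + Base64 obfuscation with the "encrypted__3__"
// prefix. 'DEMOKEY' is a placeholder; the webphone uses your built-in
// key from Mizutech.
function obfuscateParam(value, key) {
    var xored = '';
    for (var i = 0; i < value.length; i++) {
        xored += String.fromCharCode(value.charCodeAt(i) ^ key.charCodeAt(i % key.length));
    }
    // Buffer is the Node.js Base64 API; in a browser use btoa() instead
    return 'encrypted__3__' + Buffer.from(xored, 'binary').toString('base64');
}

function deobfuscateParam(encoded, key) {
    var b64 = encoded.replace('encrypted__3__', '');
    var xored = Buffer.from(b64, 'base64').toString('binary');
    var plain = '';
    for (var i = 0; i < xored.length; i++) {
        plain += String.fromCharCode(xored.charCodeAt(i) ^ key.charCodeAt(i % key.length));
    }
    return plain;
}

var enc = obfuscateParam('mypassword', 'DEMOKEY');
var dec = deobfuscateParam(enc, 'DEMOKEY'); // round-trips to 'mypassword'
```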
You can use the webphone javascript library in multiple ways for many purposes:
· create your own web dialer
· add click to call functionality to your webpage
· add VoIP capability to your existing web project or website
· integrate with any CRM, callcenter client or other projects
· modify one of the existing projects to achieve your goal (see the included softphone and click to call examples) or create yours from scratch
· and many others
The public JavaScript API can be found in "webphone_api.js" file, under global javascript namespace "webphone_api".
To be able to use the webphone as a javascript VoIP library, just copy the webphone folder to your web project and add the webphone_api.js to your page.
<head>
<!-- Include the webphone_api.js to your webpage -->
<script src="webphone_api.js"></script>
</head>
<body>
<script>
//Wait until the webphone is loaded, before calling any API functions
webphone_api.onLoaded(function () {
//Set parameters (Replace upper case words with your settings)
webphone_api.setparameter('serveraddress', SERVERADDRESS);
webphone_api.setparameter('username', USERNAME);
webphone_api.setparameter('password', PASSWORD);
webphone_api.setparameter('other', MYCUSTOMSETTING);
//See the "Parameters" section below for more options
//Start the webphone (optional but recommended)
webphone_api.start();
});
</script>
</body>
See the examples included in the webphone package for more examples. You should check especially the tech demo (techdemo_example.html / techdemo_example.js).
Note: If you don’t have JavaScript/web development experience, you can still fully control and customize the webphone:
· by its numerous configuration options which can be passed also as URL parameters
· from server side as described here
· we can also send ready to use fully customized web softphone with preconfigured settings, branding and integration with your web and VoIP server
More details can be found here.
Use the following API calls to control the webphone:
Any additional parameters must be set before start/register/call is called.
Return type: string
Will return value of a parameter if exists, otherwise will return empty string.
Optionally you can "start" the phone, before making any other action.
In some circumstances the initialization procedure might take a few seconds (depending on the usable engines), so you can prepare the webphone with this method to avoid any delay when the user really needs to use it (for example by pressing the call button).
Set the “autostart” parameter to “false” if you wish to use this function. Otherwise the webphone will start automatically on your page load.
If the serveraddress/username/password is already set and auto register is not disabled (not 0), then the webphone will also register (connect) to the SIP server upon start.
If start() is not called, then the webphone will initialize itself the first time when you call some other function such as register() or call().
The webphone parameter should be set before you call this method (preset in the js file or by using the setparameter() function). See the “Parameters” section for details.
Optionally you can "register" if your SIP server has also registrar roles (most of them have this). This will "connect" to the SIP server by sending a REGISTER request and will authenticate if requested by the server (by sending a second REGISTER with the digest authorization details).
Note:
o If the serveraddress/username/password is already set and auto register is not disabled (not 0), then the webphone will register (connect) to the SIP server upon start, so no need to use this function in these circumstances.
o There is no need to call the register() multiple times as the webphone will automatically manage the re-registrations (based on the registerinterval parameter)
Un-register from your SIP server (will send a REGISTER with Expire header set to 0, which means de-registration).
Unregister is called also automatically at browser close so usually there is no need to call this explicitly.
Initiate call to a number, sip username or SIP URI.
Perhaps this is the most important function in the whole webphone API.
It will automatically handle all the details required for call setup (network discover, ICE/STUN/TURN when needed, audio device open and call setup signaling).
Initiate a video call to a number, sip username or SIP URI.
(Will fall back to a simple voice call if video is not supported by the peer, the server or the gateway. It should always work between WebRTC endpoints if the peers have a camera device)
Disconnect current call.
Notes about line-management (in case if you are implementing a multi-line user interface, otherwise you don’t need to deal with line numbers):
o If the line is set to -2 it will disconnect all active calls.
o If line is set to -1, then it will disconnect the call on the current line (default behavior).
o Otherwise it will disconnect the call on the specified line.
Connect incoming call.
Disconnect incoming call.
(You can also use the hangup() function for this)
Silently ignore incoming call.
Forward incoming call to the specified number (phone number, username or extension)
Mute current call.
Pass true for the state to mute or false to un-mute.
The direction can have the following values:
0: mute in and out
1: mute out (speakers)
2: mute in (microphone)
Hold current call. This will issue an UPDATE or a reinvite with the hold state flag in the SDP (sendrecv, sendonly, recvonly and inactive).
Set state to true to put the call on hold or false to un-hold.
Transfer current call to number, which is usually a phone number or a SIP username. (Will use the REFER method as per the SIP standard).
If the number parameter is empty and there are 2 calls in progress, then it will transfer line A to line B.
You can set the mode of the transfer with the “transfertype” parameter.
Add/remove people to conference.
Parameters:
-number: the peer username/number or line number
-add: true if to add, false to remove
If number is empty then it will mix the currently running calls (interconnect existing calls if there is more than one call in progress).
If number is a number between 1 and 9 then it will mean the line number.
Otherwise it will call the new number (usually a phone number or a SIP user name) and once connected will join with the current session.
Example:
call('999'); //normal call to 999
conference('1234'); //will call 1234 and add to conference (conference between local user + 999 + 1234)
conference('2',false); //remove line 2 from conference
conference(''); //add all current calls to conference
conference('',false); //destroy conference (but keep the calls on individual lines)
setline(3); //select the third line
hangup(); //will disconnect the third line
setline(-2); //select all lines
hangup(); //will disconnect all lines
Note:
-if number is empty and there are less than 2 active calls, then the conference function can’t be used (you can’t put one single active call into a conference)
-you can also use the webphone with your server conference rooms/conference bridge. In this way, there is no need to call this function (just make a normal call to your server conference bridge/room access number)
Send DTMF message by SIP INFO or RFC2833 method (depending on the "dtmfmode" parameter).
Please note that the msg parameter is a string. This means that multiple DTMF characters can be passed at once and the webphone will send them in proper sequence.
The dtmf messages are sent with the protocol specified with the “dtmfmode” parameter.
Use the space character to insert delays between the digits.
Example:
API_Dtmf(-2,"1");
API_Dtmf(-2," 12 345 #");
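The space-as-delay behavior can be illustrated with a small scheduling sketch. Both the helper name and the timing values below are assumptions for illustration, not the webphone's internal timings:

```javascript
// Hypothetical sketch: turn a DTMF string (spaces = pauses) into a
// send schedule. digitMs/pauseMs are illustrative values only.
function dtmfSchedule(digits, digitMs, pauseMs) {
    var schedule = [];
    var t = 0;
    for (var i = 0; i < digits.length; i++) {
        var ch = digits.charAt(i);
        if (ch === ' ') {
            t += pauseMs;                    // a space only adds delay
        } else {
            schedule.push({ digit: ch, at: t });
            t += digitMs;
        }
    }
    return schedule;
}

var plan = dtmfSchedule(' 12 345 #', 200, 400);
// plan[0] -> { digit: '1', at: 400 }  (first digit sent after the leading pause)
```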
Send a chat message. (SIP MESSAGE method as specified in RFC 3428)
Number can be a phone number or SIP username/extension number (or whatever is accepted by your server).
The message can be clear ASCI or UTF-8 text or html encoded.
Send a SMS message if your provider/server has support for SMS.
The number parameter can be any mobile number.
The msg is the SMS text.
The from is the local user phone number and it is optional.
SMS can be handled on your server by:
-converting normal chat message to SMS automatically if the destination is a mobile number
-or via an HTTP API (you can specify this to the webphone as the “sms” parameter)
Start/stop voice recording.
Set the start parameter to true for start or false to stop.
The url is the address where the recorded voice file will be uploaded as described by the voicerecupload setting.
Note: you can also just set the “voicerecupload” parameter to have all calls recorded.
Open audio device selector dialog (built-in user interface).
Call this function and pass a callback, to receive a list of all available audio devices.
For the dev parameter pass 0 for recording device names list or 1 for the playback or ringer devices.
The callback will be called with a string parameter which will contain the audio device names in separate lines (separated by CRLF).
Note: with the Java or NS engine it might be possible that you receive only the first 31 characters from the device name. This is a limitation coming from the OS audio API but it should not cause any problem, as you can pass it as-is for the other audio device related functions and it will be accepted and recognized as-is.
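Since the callback receives the device names as CRLF-separated lines, you will usually want to convert that string into an array first. A sketch (parseDeviceList is a hypothetical helper):

```javascript
// Sketch: convert the CRLF-separated device name string received by
// the getaudiodevicelist() callback into a clean array of names.
function parseDeviceList(deviceString) {
    return deviceString.split('\r\n')
        .map(function (name) { return name.trim(); })
        .filter(function (name) { return name !== ''; });
}

var devices = parseDeviceList('Default device\r\nUSB Headset\r\n');
// devices -> ['Default device', 'USB Headset']
```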
Call this function and pass a callback, to receive the currently set audio device.
For the “dev” parameter one of the followings are expected:
0: for recording device
1: for the playback device
2: for ringer device
The callback will be called with a string parameter which will contain the currently selected audio device.
Note: WebRTC doesn’t support a separate ringer device at this moment (This is a browser limitation)
Select an audio device. The devicename should be a valid audio device name (you can list them with the getaudiodevicelist() call)
For the “dev” parameter pass:
0: for recording device
1: for the playback device
2: for ringer device (Will be skipped if the engine is WebRTC and will use the playback device also for ring)
The "immediate" parameter can have the following values:
0: default
1: next call only
2: immediately for active calls
Call this function, passing a callback and will return the volume (percent) for the selected device.
The dev parameter can have the following values:
0 for the recording (microphone) audio device
1 for the playback (speaker) audio device
2 for the ringback (speaker) audio device
The callback will be called with the volume parameter which will be 0 (muted), 50 (default volume) or other positive number.
Note: the reason why this needs a callback (instead of just returning the volume as the function return value) is that for some engines the volume is requested in an asynchronous way, so it might take some time to complete.
Set volume (percent) for the selected device. Default value is 50%, which means no change.
The dev parameter can have the following values:
0 for the recording (microphone) audio device
1 for the playback (speaker) audio device
2 for the ringback (speaker) audio device
Set a custom sip header (a line in the SIP signaling) that will be sent with all messages.
Can be used for various integration purposes (for example for sending the http session id or any custom data).
For example: setsipheader(‘X-MyExtra: whatever’);
You can also set this with the customsipheader parameter.
Note:
· It is recommended to prefix customer headers with X- so it will bypass SIP proxies.
· Multiple lines can be separated by semicolon ; Example: setsipheader(‘X-MyExtra1: aaa; X-MyExtra2: bbb’);
· Multiple lines can be also set by calling this function multiple times with different keys.
· There are two kinds of headers that you can set:
o per line: if the current line is set and there is an active call on that line
o global (set for all lines including the registrar endpoint): if the line is -2 or there is no current call on the selected line (for example if you set it at startup, before any calls or with line set to -2)
· You can remove all the previously passed headers (per line or global) by calling this function with an empty string. Example: setsipheader(‘’);
· You can remove a previously set header by calling this function with an empty key for that header. Example: setsipheader(‘X-MyExtra:’);
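A hypothetical helper for building the semicolon-separated string accepted by setsipheader() from a plain object:

```javascript
// Hypothetical helper: build the 'Name: value; Name: value' string
// accepted by setsipheader() from a plain object.
function buildSipHeaders(headers) {
    return Object.keys(headers).map(function (name) {
        return name + ': ' + headers[name];
    }).join('; ');
}

var hdr = buildSipHeaders({ 'X-MyExtra1': 'aaa', 'X-MyExtra2': 'bbb' });
// hdr -> 'X-MyExtra1: aaa; X-MyExtra2: bbb'
// then pass it to the webphone: webphone_api.setsipheader(hdr);
```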
Call this function passing a callback.
The passed callback function will be called with one parameter, which will be the string value of the requested sip header from the received SIP messages (received from your server or from the other peer). If no such header is found or some other error occurs, then the returned string begins with “ERROR” (for example: “ERROR: no such header”) so you might ignore these.
Note:
-The reason why this needs a callback (instead of just returning the last seen header values) is that for some engines the signaling messages have to be requested in an asynchronous way, so it might take a little time (usually only a few milliseconds) to complete the request.
-The getsipheader() will send you the headers from the incoming SIP messages (not the headers previously set by the setsipheader() function call)
Will return the last SIP signaling message as specified by the current line and the dir/type parameters.
Call this function passing a callback.
The passed callback function will be called with one parameter, which will be the string value of the requested sip message as raw text.
If no such message is found or some other error occurs, then the returned string begins with “ERROR” (for example: “ERROR: SIP message not found”) so you might ignore these.
The following parameters are defined:
dir:
0: in (incoming/received message)
1: out (outgoing/sent message)
type:
0: any
1: SIP request (such as INVITE, REGISTER, BYE)
2: SIP answer (such as 200 OK, 401 Unauthorized and other response codes)
3: INVITE (the last INVITE received or sent)
4: the last 200 OK (call connect, ok for register or other)
callback:
The callback function
You can use this function if you have good SIP knowledge and wish to parse the SIP messages yourself from JavaScript for some reason (for example to extract some part of it to be processed for other purposes).
Example to return the last received INVITE message about an incoming call: getsipmessage(0,3,mysipmsgrecvcallback)
Note: just as other functions, this will take in consideration the active line (set by setline() or auto set on in/out call setup). You can set the active line to “all” [with setline(-2)] to get the last message regardless of the line.
Call this function passing a callback with a string parameter where you will receive additional information about the previously disconnected calls.
This function can be used for explicit line/channel management and it will set the current active channel.
For the line parameter you can pass one of the followings:
-line number: -2 (all), -1 (current/best), 0 (invalid), 1 (first channel), 2 (second channel) …. 100
-sip call id (so the active line will be set to the line number of the endpoint with this sip call id)
-peer username (so the active line will be set to the line number of the endpoint where the peer is this user)
Use this function only if you present line selection for the users. Otherwise you don’t have to take care of the lines as they are managed automatically (each call on the first “free” line).
Note: You can set the line to -2 and -1 only for a short period. After some time the getline() will report the real active line or “best” line.
More details about multi-line can be found in the FAQ.
Return type: number
Will return the currently active line (channel) number.
More details about multi-line can be found in the FAQ.
Return type: boolean
Return true if the webphone is registered ("connected") to the SIP server.
Note: you can track the phone state machine also with the events callbacks or check this FAQ.
Return type: boolean
Return true if the webphone is in call, otherwise false.
Note: you can track the phone state machine also with the events callbacks.
Return type: boolean
Return true if the call is muted, otherwise will return false.
Return type: boolean
Return true if the call is on hold, otherwise will return false.
Check if communication channel is encrypted: -1=unknown, 0=no, 1=partially, 2=yes, 3=always
Will receive presence information as events: PRESENCE, status,username,displayname,email (displayname and email can be empty)
Userlist: list of SIP account usernames separated by comma.
Function call to change the user online status with one of the followings strings: Online, Away, DND, Invisible, Offline (case sensitive)
Returns the currently used engine name as string: "java", "webrtc", "ns", "app", "flash", "p2p", "nativedial".
Can return empty string if engine selection is in progress.
Might be used to detect the capabilities at runtime (for example whether you can use the below jvoip function or not)
Delete stored data (from cookie, config file and local-storage).
For the level parameters the following are defined:
1: just settings file
2: delete everything: settings, contacts, call history, messages
You should call this on logout (not at start) if for some reason you wish to delete the stored phone settings.
If engine is Java or the NS Service plugin, then you can access the full java API as described in the JVoIP SDK documentation.
Parameters:
Name: name of the function
Jargs: array of arguments passed to the called function. Must be an array, if API function has parameters. If API function has no parameters, then it can be an empty array, null, or omitted altogether.
For example the API function: API_Call(number) can be called like this: webphone_api.jvoip('API_Call', [number]);
Returns a string containing all the accumulated logs by the webphone (the logs are limited on size, so old logs will be lost after long run).
More details about logs can be found here.
Returns the webphone global status. The possible returned texts are the same as for the getEvents() notifications.
You might use the events described below instead of polling this function.
The following callback functions can be used to receive event from the webphone such as the phone state machine status (registered/call init/call connected/disconnected) and other important events and notifications:
The passed callback function will be called when the webphone was loaded.
You can start working with the webphone library from here.
The passed callback function will be called when the VoIP engine was started.
Webphone is ready to make call here.
Note: you can already initiate calls on the onLoaded callback as those will be queued and executed after onStart.
The passed callback function will be called on registered (connected) to VoIP server (if the webphone has to register).
The passed callback function will be called on unregistered (disconnected) from VoIP server.
Note: If user closes the webpage, then you might not have enough time to catch this event.
The passed callback function will be called on every call state change.
Parameters:
· status: can have following values: callSetup, callRinging, callConnected, callDisconnected
· direction: 1 (outgoing), 2 (incoming)
· peername: is the other party username (or phone number or extension)
· peerdisplayname: is the other party display name if any
A simple usage example can be found here.
The passed callback function will be called when chat message is received.
Parameters:
· from: username, phone number or SIP URI of the sender
· msg: the content of the text message
The passed callback function will be called at each call disconnect. You will receive a CDR (call detail record).
Parameters:
· caller: the caller party username (or number or sip uri)
· called: called party username (or number or sip uri)
· connect time: milliseconds elapsed between call initiation and call connect (includes the call setup time + the ring time)
· duration: milliseconds elapsed between call connect and hangup (0 for not connected calls. Divide by 1000 to obtain seconds)
· direction: 1 (outgoing call), 2 (incoming call)
· peerdisplayname: is the other party display name if any
· reason: disconnect reason as string
Note: you can get some more details about the call by using the getlastcalldetails() function.
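Since the connect time and duration arrive in milliseconds and the direction as a number, an onCdr handler will typically normalize the record first. A sketch (formatCdr is a hypothetical name, not part of the webphone API):

```javascript
// Sketch: normalize the raw CDR values (milliseconds, numeric
// direction) into a readable record. formatCdr is a hypothetical name.
function formatCdr(caller, called, connecttime, duration, direction, peerdisplayname, reason) {
    return {
        caller: caller,
        called: called,
        setupSec: Math.round(connecttime / 1000), // call setup + ring time
        talkSec: Math.round(duration / 1000),     // 0 for not connected calls
        direction: direction === 1 ? 'outgoing' : 'incoming',
        peer: peerdisplayname || called,          // fall back to the number
        reason: reason
    };
}

var cdr = formatCdr('1111', '2222', 4000, 65000, 1, '', 'BYE');
// cdr.setupSec -> 4, cdr.talkSec -> 65, cdr.direction -> 'outgoing'
```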
Here you can receive important events and notifications (as strings) that should be displayed to the user.
The passed callback function will be called with two string parameters:
-message: a text message intended to be displayed for the user
-title: the title of the "popup/alert". This can be null/empty for some messages
For example:
o "Invalid phone number or SIP URI or username" (displayed if user is trying to call an invalid peer)
o "Waiting for permission. Please push the Allow/Share button in your browser..." (when waiting for WebRTC browser permission)
o “Check your microphone! No audio record detected.” (which is displayed after 6 seconds in calls if the VAD doesn’t report any activity).
If you call this function, then the webphone will not display these messages anymore (You can silently ignore them, handle somehow or just display to the user).
If you don’t setup a callback for this, then the notifications will be displayed as auto-hiding popups.
Note:
-The text of the message is language dependent, meaning if the language of the webphone is changed, the message/title language is also changed.
-Engine selection related popups are always handled by the webphone (However these are presented only when really necessary and can be suppressed by forcing the webphone to a single engine)
The passed callback function will receive all the logs in real time. It can be used for debugging or for log redirection if the other possibilities don’t fit your needs.
This function returns ALL events from the webphone including sip stack state, notifications, events and logs.
This is a low level function and you should prefer the onXXX callback instead of using string typed notifications.
Call this function once and pass a callback, to receive important events (as strings), which should be displayed for the user and/or parsed to perform other actions per your software's custom logic. For the included softphone and click to call these are already handled, so no need to change anything, except if you need some extra custom actions or functionality.
See the “Notifications” section below for the details.
Example:
webphone_api.getEvents( function (event)
{
// For example the following status means that there is an incoming call ringing from 2222 on the first line:
// STATUS,1,Ringing,2222,1111,2,Katie,[callid]
// parameters are separated by comma(,)
// the sixth parameter (2) means it is for incoming call. For outgoing call this parameter is 1.
// example for detecting incoming and outgoing calls:
var evtarray = event.split(',');
if (evtarray[0] === 'STATUS' && evtarray[2] === 'Ringing')
{
if (evtarray[5] === '1')
{
// means it is an outgoing call
// ...
}
else if (evtarray[5] === '2')
{
// means it is incoming call
// ...
}
}
});
You might also check the basic_example.html included in the package.
If you will use this function, then most probably you will catch everything here and don’t need to use the other events functions described below.
If you don’t wish to deal with notification strings parsing, then you can use the functions below to catch the important events from the webphone in which you are interested in. Call them once, passing a callback:
“Notifications” means simple string messages received from the webphone which you can parse with the getEvents(callback) to receive notifications and events from the sip web phone about its state machine, call statuses and important events.
Skip this section if you are not using the getEvents() function. (You can use the functions such as onRegistered/onCallStateChange/others to catch the important events in which you are interested in and completely skip this section about the low-level notification strings handling).
If you are using the getEvents() function then you will have to parse the received notification strings from your java script code. Each notification is received in a separate line (separated by CRLF). Parameters are separated by comma ‘,’. For the included softphone and click to call these are already handled, so no need to change, except if you need some extra custom actions or functionality.
The following messages are defined:
Where line can be -1 for general status or a positive value for the different lines.
General status means the status for the “best” endpoint.
This means that you will usually see the same status twice (or more). Once for general phone status and once for line status.
For example you can receive the following two messages consecutively:
STATUS,1,Connected,peername,localname,endpointtype,peerdisplayname,[callid]
STATUS,-1,Connected
You might decide to parse only general status messages (where the line is -1).
The following statustext values are defined for general status (line set to -1):
o Initializing
o Ready
o Register…
o Registering…
o Register Failed
o Registered
o Accept
o Starting Call
o Call
o Call Initiated
o Calling…
o Ringing…
o Incoming…
o In Call (xxx sec)
o Hangup
o Call Finished
o Chat
Note: general status means the “best” status among all lines. For example if one line is speaking, then the general status will be “In Call”.
The following statustext values are defined for individual lines (line set to a positive value representing the channel number starting with 1):
o Unknown (you should not receive this)
o Init (started)
o Ready (sip stack started)
o Outband (notify/options/etc. you should skip this)
o Register (from register endpoints)
o Subscribe (presence)
o Chat (IM)
o CallSetup (one time event: call begin)
o Setup (call init)
o InProgress (call init)
o Routed (call init)
o Ringing (SIP 180 received or similar)
o CallConnect (one time event: call was just connected)
o InCall (call is connected)
o Muted (connected call in muted status)
o Hold (connected call in hold status)
o Speaking (call is connected)
o Midcall (might be received for transfer, conference, etc. you should treat it like the Speaking status)
o CallDisconnect (one time event: call was just disconnected)
o Finishing (call is about to be finished. Disconnect message sent: BYE, CANCEL or 400-600 code)
o Finished (call is finished. ACK or 200 OK was received or timeout)
o Deletable (endpoint is about to be destroyed. You should skip this)
o Error (you should not receive this)
You will usually have to display the call status for the user, and when a call arrives you might have to display an accept/reject button.
For simplified call management, you can just check for the one-time events (CallSetup, CallConnect, CallDisconnect)
Peername is the other party username (if any)
Localname is the local user name (or username).
Endpointtype is 1 from client endpoints and 2 from server endpoints.
Peerdisplayname is the other party display name if any
CallID: SIP session id
For example the following status means that there is an incoming call ringing from 2222 on the first line:
STATUS,1,Ringing,2222,1111,2,Katie,[callid]
The following status means an outgoing call in progress to 2222 on the second line:
STATUS,2,Speaking,2222,1111,1,[callid]
To display the “global” phone status, you will have to do the following:
1. Parse the received string (parameters separated by comma)
2. If the first parameter is “STATUS” then continue
3. Check the second parameter. If it is “-1”, continue; otherwise there is nothing to do
4. Display the third parameter (Set the caption of a custom html control)
5. Depending on the status, you might need to take some other action. For example display your “Hangup” button if the status is between “Setup” and “Finishing”, or pop up a new window on “Ringing” status if the endpointtype is “2” (for incoming calls only; not for outgoing)
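The steps above can be sketched as follows (handleStatus and the two callbacks are our own names; wire them to your UI code):

```javascript
// Dispatch a single notification line following the steps above.
function handleStatus(notification, onStatusText, onIncomingCall) {
  var p = notification.split(",");           // 1. split by comma
  if (p[0] !== "STATUS") return;             // 2. only STATUS messages
  if (p[1] === "-1") {
    onStatusText(p[2]);                      // 3-4. display the general status
    return;
  }
  // 5. per-line status: popup for incoming calls (endpointtype "2") on Ringing
  if (p[2] === "Ringing" && p[5] === "2") onIncomingCall(p[3]); // p[3] = peername
}
```

For example, `handleStatus("STATUS,1,Ringing,2222,1111,2,Katie,[callid]", …)` would trigger the incoming-call callback with the peername "2222".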
If the “jsscripstats” is on (set to a value higher than 0) then you will receive extended status messages containing also media parameters at the end of each call:
STATUS,1,Connected,peername,localname,endpointtype,peerdisplayname,rtpsent,rtprec,rtploss,rtplosspercet,serverstats_if_received,[callid]
This notification is received when the presence status of a peer changes.
Line: used phone line
Peername: username of the peer
Presence: presence status string; one of the followings: CallMe,Available,Pending,Other,CallForward,Speaking,Busy,Idle,DoNotDisturb,Unknown,Away,Offline,Exists,NotExists,Unknown
This notification is received for incoming chat messages.
Line: used phone line
Peername: username of the sender
Text: the chat message body
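Since the message body may itself contain commas, a parser should rejoin the tail after the fixed fields. A sketch, assuming the notification arrives as "CHAT,line,peername,text" (check the message list above for the exact header string):

```javascript
// Parse a chat notification. The message text may contain commas,
// so everything after the fixed fields is rejoined.
function parseChat(notification) {
  var p = notification.split(",");
  if (p[0] !== "CHAT") return null;
  return { line: p[1], peername: p[2], text: p.slice(3).join(",") };
}
```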
This notification might be received when the other peer starts/stops typing (RFC 3994):
Line: used phone line
Peername: username of the sender
Composing: 0=idle, 1=typing
This notification is received for the last outgoing chat message to report success/fail:
Line: used phone line
Peername: username of the sender
Status: 0=unknown,1=sending,2=successfully sent,3=failed to send
Text: failure reason (if Status is 3)
After each call, you will receive a CDR (call detail record) with the following parameters:
Line: used phone line
Peername: other party username, phone number or SIP URI
Caller: the calling party name (our username when we initiated the call, otherwise the remote username, display name, phone number or URI)
Called: the called party name (our username when we are receiving the call, otherwise the remote username, phone number or URI)
Peeraddress: other endpoint address (usually the VoIP server IP or domain name)
Connecttime: milliseconds elapsed between call initiation and call connect
Duration: milliseconds elapsed between call connect and hangup (0 for not connected calls. Divide by 1000 to obtain seconds.)
Discparty: the party which initiated the disconnect: 0=not set, 1=local, 2=peer, 3=undefined
Disconnect reason: a text about the reason of the call disconnect (SIP disconnect code, CANCEL, BYE or some other error text)
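A sketch of turning such a record into an object, assuming the header string is "CDR" and the field order listed above:

```javascript
// Parse a CDR notification into a record.
function parseCdr(notification) {
  var p = notification.split(",");
  if (p[0] !== "CDR") return null;
  return {
    line: p[1],
    peername: p[2],
    caller: p[3],
    called: p[4],
    peeraddress: p[5],
    connectMs: parseInt(p[6], 10),           // call initiation -> connect
    durationSec: parseInt(p[7], 10) / 1000,  // duration is in milliseconds
    discparty: parseInt(p[8], 10),           // 0=not set,1=local,2=peer,3=undefined
    reason: p.slice(9).join(",")             // reason text may contain commas
  };
}
```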
This message is sent immediately after startup (so from here you can also know that the SIP engine was started successfully).
The what parameter can have the following values:
“api” – the API is ready to use
“sip” – the SIP stack was started
Important events which should be displayed for the user.
The following TYPE values are defined: EVENT, WARNING, ERROR
This means that you might receive messages like this:
WPNOTIFICATION,EVENT,EVENT,any text NEOL \r\n
Should be displayed for the users in some way.
Various custom messages. Ignore.
Detailed logs (may include SIP signaling).
The following TYPE values are defined: EVENT, WARNING, ERROR
Voice activity.
This is sent around every 2000 milliseconds (2 seconds) by default by the Java and NS engines (configurable with the vadstat_ival parameter, in milliseconds) if you set the “vadstat” parameter to 3, or it can be requested via API_VAD. Also make sure that the “vad” parameter is set to at least “2”.
This notification can be used to detect speaking/silence or to display a visual voice activity indicator.
Format:
VAD,local_vad: ON local_avg: 0 local_max: 0 local_speaking: no remote_vad: ON remote_avg: 0 remote_max: 0 remote_speaking: no
Parameters:
local_vad: whether VAD is measured for microphone: ON or OFF
local_avg: average signal level from microphone
local_max: maximum signal level from microphone
local_speaking: local user speak detected: yes or no
remote_vad: whether VAD is measured from peer to speaker out: ON or OFF
remote_avg: average signal level from peer to speaker out
remote_max: maximum signal level from peer to speaker out
remote_speaking: peer user speak detected: yes or no
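A sketch of parsing this format into a key/value object (the function name is our own):

```javascript
// Parse a VAD notification of the form shown above into an object,
// e.g. { local_vad: "ON", local_avg: "0", ..., remote_speaking: "no" }
function parseVad(notification) {
  var body = notification.slice(notification.indexOf(",") + 1); // drop "VAD,"
  var result = {};
  var re = /(\w+):\s*(\S+)/g;
  var m;
  while ((m = re.exec(body)) !== null) result[m[1]] = m[2];
  return result;
}
```

The `local_speaking`/`remote_speaking` fields can then drive a speaking indicator, and the avg/max levels a volume meter.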
Format: messageheader, messagetext. The following are defined:
“CREDIT” messages are received with the user balance status if the server is sending such messages.
“RATING” messages are received on call setup with the current call cost (tariff) or maximum call duration if the server is sending such messages.
“MWI” messages are received on new voicemail notifications if you have enabled voicemail and there are pending new messages
“PRESENCE” peer online status
“SERVERCONTACTS” contact found at local VoIP server
“NEWUSER” new user request
“ANSWER” answer for previous request (usually http requests)
1. Try from your desktop or webserver by downloading the webphone package or try the online demo.
2. If you like it, we can send your own licensed build within one workday upon payment.
The pricing can be found here. For the payment we can accept PayPal, credit card or wire transfer.
Contact Mizutech at [email protected] with the following details:
-your VoIP and/or web server(s) address (ip or domain name or URL)
-your company details for the invoice (if you are representing a company)
For the old “websipphone” (Java Applet based webphone) users:
Please note that this is a separate product and a purchase or upgrade cost might be required. See the upgrade guide for details. The old Java applet based websipphone has been renamed to “VoIP Applet” and we will continue to fully support it as a separate product:
You can easily upgrade your old Java applet websipphone to this universal webphone by following the steps described below in the “How to upgrade from the old java applet websipphone” FAQ.
We offer support and maintenance upgrades to all our customers. Guaranteed support hours depend on the purchased license plan and are included in the price.
Please include the following with your message:
· exact issue description
· screenshot if applicable
· optionally a description about how we can reproduce the problem with valid sip test account(s)
If the support period included with your license has expired, it can be extended by 2 years for around $600 (Note: this is completely optional; no support plan is needed to operate your webphone). For gold partners we also offer priority, phone and 24/7 emergency support.
Direct support is provided for the common features (voice calls, chat, DTMF, hold, forward and others) and common OS/browsers (Windows/Linux/Android/MAC OS, IE/Firefox/Chrome/Safari/Opera), but not for extra features (such as presence, fax) or exotic OS/browsers (such as FreeBSD, Chromium, Konqueror). The webphone should also work with other OS/browsers, but we do not test every release against exotic platforms.
You will receive the following:
· the web phone software itself (the webphone files including the engines, JavaScript API, html5/css skins and examples)
· the ready-to-use/turn-key softphone skin and click to call button
· the latest documentation and code examples
· invoice (on request, or if you haven’t received it before the payment)
· support on your request according to the license plan
Yes.
You can fully customize the webphone yourself via its numerous configuration options. However, if you have some specific requirement which you can’t handle yourself, please contact us at [email protected]. Contact us only with webphone/VoIP specific requirements (not with general web development/design requests, as these can be handled by any web developer, while we are specialized in VoIP).
No. The webphone can be deployed by anybody. If you already have a website, then you should be able to copy-paste and adapt the example HTML code. Some basic JavaScript knowledge is required only if you plan to use the JavaScript API (although there are copy-paste examples for API usage as well).
· A webserver (rented, hosted) to host the webphone files
· A SIP account by one of the followings:
o Your existing IP-PBX or softswitch, OR
· Optional: some server side scripts, if more customization/changes are required than are possible with the webphone API and parameters
· Optional, if you need better control over WebRTC: a WebRTC capable SIP server or a WebRTC-SIP gateway (both options are freely available)
The webphone is self-hosted client-side software running completely in the client browser, without any “cloud” dependencies.
It has the following server side dependencies (all of them controllable by you, so you can also run the web VoIP browser plugin on your own, even on a private local LAN, without using any third-party service):
· A webserver where the webphone files are hosted (we send all the required files so it can be hosted on any web-server including servers behind NAT)
Note: for WebRTC to work (if you need this engine) the webphone has to be hosted on HTTPS. (This means that if you run the webphone from a local LAN then at least browser CA verification must be enabled towards the internet, or you have to set up a valid local certificate.)
· Optional: connection to a custom web application (if you have some server side business logic such as a .NET or PHP application, or if you are making API calls or using resources from a custom web application. All this is up to you and has nothing to do with the webphone itself)
· A SIP compatible VoIP server where the webphone will connect: any SIP server which can be otherwise reached by any sip softphone, including local LAN PBX services
· Optional: helper connectivity services such as WebRTC gateway and STUN/TURN server. All of these can be disabled and/or the webphone works also if these are not reachable.
In short:
Use any web server to host the webphone files. Just copy the webphone folder to your webhost and you are ready to go.
Some more details:
All the functionality of the web SIP phone is implemented on the client side (JavaScript running in the user’s browser), so there are no application-specific requirements for the webserver. You can use any web server software (IIS, nginx, Apache, NodeJS, Java, others) on any OS (Linux, Windows, others). You can integrate the webphone with any server side framework if you wish (.NET, PHP, Java servlet, J2EE, NodeJS and others). Integration tasks are up to you, and can be done in multiple ways, such as dynamic webphone configuration per request, dynamic URL rewrite (since the webphone also accepts parameters in URLs), or adding more server side app logic via your custom HTTP API which can be called from the webphone (for example on call, on call disconnect or other events; the VoIP webphone has callbacks for these to ease this kind of integration). All these are optional, since you can also implement any kind of app logic on the client side in JavaScript if you need to.
We recommend deploying the webphone to a secure site (HTTPS), otherwise the latest Chrome and Opera don’t allow WebRTC.
If you can’t enable https on your webhost for some reason, then we can host your webphone if you wish on a secure white-label domain for free.
Depending on the client browser and the selected engine, the webphone might have to download some platform specific binaries. (These are found in the “native” folder). Make sure that your web server allows the download of these resource types by allowing/adding the following mime types to your webserver configuration if not already added/allowed:
· extension: .mxml MIME type: application/octet-stream (or application/xv+xml)
· extension: .exe MIME type: application/octet-stream (or application/x-msdownload)
· extension: .dll MIME type: application/x-msdownload (or application/x-msdownload)
· extension: .jar MIME type: application/java-archive
· extension: .jnilib MIME type: application/java-archive
· extension: .so MIME type: application/octet-stream
· extension: .dylib MIME type: application/octet-stream
· extension: .pkg MIME type: application/x-newton-compatible-pkg (or application/octet-stream)
· extension: .dmg MIME type: application/x-apple-diskimage
· extension: .swf MIME type: application/x-shockwave-flash
You can easily test whether this works by trying to download these files, typing their exact URI in the browser, such as:
(The browser should begin to download the file, otherwise the jar mime type is still not allowed on your webserver or you entered an incorrect path or webserver doesn’t serve files from the specified folder)
The webphone works with any SIP capable voip server/softswitch/PBX including Asterisk, FreePBX, Huawei, Cisco, Mizu, 3CX, Voipswitch, Brekeke and many others. You don’t necessarily need to have your own SIP server to use the webphone as you can use any SIP account(s) from any VoIP provider.
The web phone uses the SIP protocol standard to communicate with VoIP servers and softswitches. Since most VoIP servers are based on the SIP protocol today, the webphone should work without any issue. Some modules (WebRTC and Flash) might require specific support by your server or a gateway to do the translation to SIP; however these modules are optional, gateway software is available for free, and mizutech also includes its own free tier service (usable by default with the webphone).
If you have any incompatibility problem, please contact [email protected] with a problem description and a detailed log (loglevel set to 5). For more tests please send us your VoIP server address with 3 test accounts.
If you don’t have your own VoIP server, you can use any third-party solution or service:
There are many SIP servers over the internet where you can create free SIP accounts.
We also provide such a service here: voip service (you can create multiple sip accounts for free and make calls between them)
Using the Mizu webphone you can have a single solution for all platforms with the same user interface and API. No individual apps have to be maintained anymore for different platforms such as a Windows Installer, a Web application, Google Play app for Android and other binaries.
· Unlike traditional softphones, the webphone can be embedded in webpages while providing the same functionality as a traditional native solution
· Single unified JavaScript API and custom web user interface
· Easy and flexible customization for all kind of use-case (by the numerous parameters and optionally by using the API)
· Compatible with all browsers (IE, Firefox, Safari, Opera, Chrome, etc) and all OS (Windows, Linux, MAC, Android, etc)
· Compatible with your existing IP-PBX, VoIP server or any SIP service
· Works also behind corporate firewalls (auto tunnel over TCP/HTTP 80 if needed)
· Combines modern browser technologies (WebRTC, opus) with VoIP industry standards (G.729, conference, transfer, chat, voice recording, etc)
· Easy to use and easy to deploy (copy-paste HTML code)
· Easy integration with your existing infrastructure since it is using the open SIP/RTP standards
· Easy integration with your existing website design
· Proprietary SIP/RTP stack guarantees our strong long term and continuous support
· Support for all the common VoIP features
· Unlike NPAPI based solutions, the webphone works in all browsers (NPAPI is no longer supported in Chrome, and Firefox also plans to drop it)
· Unlike pure WebRTC solutions, the webphone works in all browsers (WebRTC doesn’t work in IE, Edge or Safari, except with extra plugin downloads)
· Unlike pure WebRTC solutions, the webphone is optimized for SIP with fine-tuned settings (TURN, STUN and others)
· As a browser phone
· Integration with other web or desktop based software to add VoIP capabilities
· A convenient dialer that can be offered for VoIP endusers since it runs directly from your website
· Callcenter VoIP client for agents/operators (easy integration with your existing software)
· Ready to use web VoIP client without the need of any further development
· SIP API for your favorite JS framework such as React, jQuery, Angular, Ember, Backbone or any others or just plain/vanilla JS
· Embedded in VoIP devices such as PBX or gateways
· Click to call functionality on any webpage
· VoIP conferencing in online games
· Buy/sell portals
· WebRTC SIP client or WebRTC softphone
· Salesforce help button
· Social networking websites, facebook phone
· Integrate SIP client with jQuery, Drupal, joomla, WordPress, angularjs, phpBB, vBulletin and others as a web plugin, module or API
· As an efficient and portable communication tool between company employees
· VoIP service providers can deploy the webphone on their web pages allowing customers to initiate SIP calls without the need of any other equipment directly from their web browsers
· Customer support calls (VoIP enabled support pages where people can call your support people from your website)
· VoIP enabled blogs and forums where members can call each other
· VoIP enabled sales when customers can call agents (In-bound sales calls from web)
· Java Script phone or WebRTC SIP client
· Web dialer for Asterisk and FreePBX
· Turn all phone numbers into clickable links on your website
· Integrate it with any Java applications (add the webphone.jar as a lib to your project)
· HTTP Call Me buttons
· Remote meetings
· HTML5 VoIP
· Web VoIP phone for callcenter agents integrated with your callcenter frontend
· Asterisk integration (or with any other IP-PBX)
· Convert any SIP link (sip: URI) on web to clickable (click to call) links and replace native/softphone solutions with a pure web solution
· "css" folder: style sheets used in the skin (GUI). The style of the skin can be changed by editing the "mainlayout.css" file
· "css/themes" folder: jQuery mobile specific cascading style sheets and images used by the softphone and click to call skin templates
· "images" folder: images used by the included skins (GUI)
· “js” folder: JavaScript files
· "js/softphone" folder: GUI files. For every jQuery mobile "page" there is an equivalent JavaScript file, which handles the behavior of the page. Also there is a string resource file (stringres.js) which contains all the text displayed to the user.
· "js/lib" folder: the webphone core library files
· "oldieskin" folder: old webphone skin, which is used only in old browsers, ex: IE 6
· "sound" folder: contains sound files (for example ringtone and keypad dtmf sounds)
· “native” folder: platform specific native binaries (the webphone might load whichever needed if any, depending on the engine used)
· the root folder contains the following files:
o "favicon.ico": web page favicon
o "index.html": a start page for the examples
o "oldapi_support.js": backward compatibility with old skin. Useful for cases where the webphone was integrated using the "old" JavaScript VoIP API.
o “iframe_helper.js”: can be used if you wish to access the webphone in a separate iframe
o “minimal_example.html”: shortest implementation to make a call
o "basic_example.html": simple usage example of softphone SDK
o “incoming_example.html”: simple example to handle incoming call
o "softphone.html": GUI html file for a full featured web phone (customize this to your needs by just changing the settings)
o “click2call.html”: a ready to use click to call implementation (customize this to your needs by just changing the settings)
o "webphone_api.js": the public Javascript API of the web phone
It is possible to delete unneeded files (for example you can delete the softphone and oldieskin folders if you are using the webphone as an API), however you should not worry too much about these; just leave all the files on your server. This can’t have any security implications: the webphone will use only the files required for your use-case.
No. The webphone can be used on its own as a fully self-hosted solution, connecting to your VoIP server directly (Java, NS and App engines), via WebRTC or via Flash, so you will have a solution fully owned/controlled/hosted by you, without any dependency on our services.
In other words: if all our servers were switched off tomorrow, you would still be able to continue using our webphone softphone.
However, please note that by default the webphone might use some of the services provided by mizutech to ease usage and make it a turn-key solution without requiring any extra settings from your side. Most of these are used only under special circumstances and none of them are critical for functionality; all of them can be turned off or changed. The following services might be used:
· Mizutech license service: demo, trial or free versions are verified against the license service to prevent unauthorized usage. This can be turned off by purchasing a license and your final build will not have any DRM and will continue to work even if the entire mizutech network is down.
Note: this is not used at all in paid versions
· WebRTC to SIP gateway: if your server doesn’t have WebRTC capabilities but you enable the WebRTC engine in the webphone then it might use the Mizu WebRTC to SIP gateway service. Other possibilities are listed here.
Note: this might be used only if you are using the webphone WebRTC engine but your server doesn’t support WebRTC and you don’t have a WebRTC-SIP gateway.
· Flash to SIP gateway: rarely used (only when there is no better engine than Flash). Just turn it off (by setting the “enginepriority_flash” parameter to 0) or install your own RTMP server and specify its address.
Note: usually Flash is not used at all as there are better built-in engines which are supported by more than 99.9% of the browsers.
· STUN server: by default the webphone might use the Mizutech STUN service. You can change this by changing the “stunserveraddress” to your server of choice (there are a lot of free public STUN services or you can run your own: stable open source software exists for this and it requires minimal processing power and network bandwidth as STUN is basically just a simple ping-pong protocol sending only a few short UDP packets and it is not a critical service).
Note: you can use the webphone without any STUN service if your SIP server has basic NAT handling capabilities and it is capable to route the RTP if/when needed.
· TURN server: by default the webphone might use the Mizutech TURN service which can help firewall/NAT traversal in some circumstances (rarely required). You can specify your own turn server by setting the “turnserveraddress” parameter (if TURN is required at all).
Note: you can use the webphone without any TURN service if your SIP server has basic NAT handling capabilities and it is capable to route the RTP if/when needed.
· JSONP: if you set some external API to be used by the softphone skin (such as for user balance or call rating requests) and your server can’t be contacted directly with AJAX requests due to CORS, then the API calls might be relayed by the Mizutech JSONP or websocket relay. To disable this, make sure that the domain where you are hosting the web phone plugin can access your domain where your API is hosted.
Note: this might be used only in very specific circumstances (when you integrate the webphone with your own API, but your own API can’t be accessed by the webphone via normal AJAX GET/POST requests)
· HTTPS proxy: with the WebRTC engine if you are using the webphone from Chrome and your website is not secured (not https) then the webphone might reload itself via the Mizu HTTPS proxy. To disable this, host your webphone on HTTPS if you wish to use WebRTC from Chrome. API requests can be also routed via this service (such as credit or rating requests) if you are running the webphone on HTTPS but defined your SIP server API as HTTP (otherwise browser blocks requests from secure page to insecure resources)
Note: this might be used only if your website is not on HTTPS (no SSL certificate) and you are using the webphone with the WebRTC engine in Chrome.
· Tunneling/encryption/obfuscation: In some conditions the webphone might use the Mizu tunneling service to bypass VoIP blockage and firewalls. This is usually required in countries where VoIP is blocked (such as UAE or China) or behind heavy firewalls with DPI and you can turn it off by setting the “usetunneling” parameter to 0.
Note: this is a special feature which needs to be turned on by mizutech support, otherwise it is not enabled by default.
· Auto upgrade: the native components can auto-upgrade themselves from the Mizutech download service. This is enforced only from known old versions to known good versions (only if the new version has already been used by other customers for a few weeks). You can disable this by setting “autoupgrade” to 6. (You can also set “autoupgrade” to 5, which will also disable the upgrade of the built-in SSL certificates, but this should be avoided, as upgrading the certificates can’t do any harm to your webphone. This is just to avoid expiring SSL certificates.)
Note: this might be used only if you use the webphone NS engine and can be turned off.
Note: if you are using the webphone on a local LAN then these services are not required and are turned off automatically (so the webphone will not try to use these if your VoIP and/or Web server are located on local LAN / private IP).
If you need to white-list (or block for some reason) our servers, here is the address list associated with the above services: mnt.mizu-voip.com, rtc.mizu-voip.com, usrtcx.webvoipphone.com, usrtc.webvoipphone.com,
88.150.148.180, 88.150.148.182, 88.150.183.87, 204.12.197.100, 204.12.197.98, 88.150.194.53
The webphone can be configured by its parameters or dynamically via the setparameter API.
There are many ways to set its parameters. You can statically hardcode them in the webphone_api.js file, pass as URL parameters or load from a server API by setting the scurl_setparameters to point to your API (HTTP AJAX URL).
For more details see the beginning of the Parameters chapter.
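For illustration, passing parameters in the page URL could be sketched like this (stunserveraddress and enginepriority_flash are parameters mentioned elsewhere in this guide; see the Parameters chapter for the full list and exact names):

```javascript
// Build a webphone page URL with parameters passed in the query string.
function buildWebphoneUrl(base, params) {
  var qs = Object.keys(params).map(function (k) {
    return encodeURIComponent(k) + "=" + encodeURIComponent(params[k]);
  }).join("&");
  return base + "?" + qs;
}

var url = buildWebphoneUrl("https://example.com/webphone/softphone.html",
  { stunserveraddress: "stun.example.com", enginepriority_flash: 0 });
```

The same parameter names can alternatively be hardcoded in webphone_api.js or served from your own API via scurl_setparameters, as described above.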
WebRTC is just one of the important engines built into the webphone. The webphone also works without this engine if it is not available in your environment. Otherwise the webphone will automatically detect WebRTC and use it if available.
If you need more control, you have several options to deal with WebRTC:
1. Don’t use WebRTC at all. There are other engines built into the web SIP phone which can be used most of the time. There are only a few circumstances when the only available engine would be WebRTC. (Although WebRTC is convenient for endusers, since it doesn’t need any browser plugin in browsers where it is supported.) To completely disable WebRTC, set the enginepriority_webrtc setting to 0.
2. Check if your VoIP server already has WebRTC support. Most modern VoIP servers have already implemented WebRTC (including the mizu VoIP server, Asterisk and others), or you might just need to add/enable a module on your server for this, so chances are high that your VoIP server can handle WebRTC natively. Just set the webrtcserveraddress setting to point to your server’s websocket address.
3. Use the free Mizutech WebRTC to SIP service tier. This is enabled by default and it might be suitable for your needs if you don’t have too much traffic over WebRTC (the webphone will automatically start to boost the priority of the other engines when you are over the free quota)
4. Use the mizutech WebRTC to SIP gateway software. We are providing this software for free for our webphone customers. (You just have to setup this near your SIP server)
5. Use any third party WebRTC to SIP gateway: there are a few free software packages capable of doing this task for you, including Asterisk and Doubango. (However, if you don’t have any of these installed yet, then we recommend our own gateway as mentioned above.)
6. Use the Mizutech WebRTC to SIP paid service. We provide dedicated WebRTC to SIP conversion services for a monthly fee if required.
The webphone can be used as a WebRTC softphone by increasing the enginepriority_webrtc to 3 or 4 (in this case it will use the other engines only when WebRTC is not supported by the browser).
Note: the latest Chrome and Opera browsers require a secure connection to allow WebRTC, for both your website (HTTPS) and the websocket (WSS).
First of all it is important to mention that the browser web phone works just fine without Flash.
Chances are high that you don’t need Flash at all, even if it is available. In the rare circumstances when the only usable engine would be Flash, the webphone can automatically use the free Mizutech Flash to SIP service. If you wish to drive all your traffic over Flash, then you can install a Red5 server (open source free software) to handle the translation between RTMP and SIP/RTP, then set the rtmpserveraddress to point to your flash media server and increase the value of the enginepriority_flash setting.
These engines don’t need any special server side support and they work with legacy SIP (all SIP servers) without any extra settings or software. When the browser VoIP plugin uses one of these engines, there is a direct connection between the engine (running in the user’s browser) and your VoIP server, without involving any intermediary relay (RTP can also flow directly between the endusers, bypassing your server; this is up to your server settings and its NAT handling features). If you wish to force the usage of Java (which can offer top quality VoIP), then make sure to install the JRE from here (if not already installed on your system) and use Firefox or IE, as Chrome doesn’t have Java applet support.
The webphone is fine-tuned for Asterisk out of the box and no changes are needed to work. However if you have some special requirement, such as using the built-in WebRTC module in Asterisk, check these articles:
· Setup Web SIP client for Asterisk
WebRTC is becoming a trendy technology but it has a lot of disadvantages and problems:
· It is a moving target. The standards are not completed yet. Lots of changes are planned also for 2016. Edge has just started to add a different “ORTC” implementation
· Incompatibility. WebRTC has known incompatibility issues with SIP, and there are many incompatibilities even between two WebRTC endpoints, as browsers have different implementations and different codec support
· Not supported by all browsers. No support in Edge, IE and Safari. No support on iOS and MAC (except with extra plugin downloads). No support on older Android phones.
· Lack of popular VoIP codecs such as G.729, which can be solved only by expensive server side transcoding
· It is a black-box in the browser, with browser specific bugs and a restrictive API. You have little control over what is going on in the background
· A WebRTC to SIP gateway is required if your VoIP server doesn’t have built-in support for WebRTC
· Adds unneeded extra complexity. The server has to convert from the websocket protocol to clear SIP and from DTLS to RTP
Luckily the Mizu webphone has some more robust engines that can be used without these limitations and by default will prioritize these over WebRTC whenever possible, depending on available browser capabilities and user willingness. (Small non-obtrusive notification might be displayed for the enduser when a better engine is available or if a user can upgrade with one-click install).
One of the main advantages of the Mizu webphone is that it can offer alternatives for WebRTC, so you can be sure that all your VoIP users are served with the best available technology, regardless of their OS and browser.
However we do understand that WebRTC is comfortable for the endusers as it doesn’t require any extra plugin if supported by the user browser. The mizu browser phone takes full advantage of this technology and we provide full support for WebRTC by closely following the evolution of the standards.
With a WebRTC-only client you would miss all the benefits offered by a standard SIP/RTP client connecting directly to your VoIP server with native performance, full SIP support with all the popular VoIP codecs and without the need for any protocol conversion, directly from the enduser browser.
-Not all the listed features are available from all engines (the webphone automatically handles these differences internally)
-Some platforms currently have very limited VoIP support available from browsers. The most notable is iOS, where the default browser (Safari) lacks any VoIP support. The webphone does its best to work around these by using its secondary engines, offering call capabilities also for users on these platforms
-Android chrome uses the speaker (speakerphone) for audio output (this is hardcoded in their WebRTC engine and hopefully they will change this behavior in upcoming new versions). This affects only the WebRTC engine and you will have normal audio output if using the App engine on Android.
-Some features might not work correctly between WebRTC and SIP. This is not a webphone limitation; it depends completely on the server side (your softswitch or gateway responsible for the WebRTC-SIP protocol conversion). Presence doesn’t work between WebRTC and SIP using the Mizu public WebRTC gateway
-For chat/IM to work your server has to support SIP MESSAGE as described in RFC 3428 (supported by most SIP servers; see also the Asterisk patch or FreePBX settings)
-Video is implemented only with the WebRTC engine (the webphone will auto-switch or auto-offer WebRTC whenever possible on video request)
-Some features also require proper server side support to work correctly, for example call hold, call transfer and call forward. See your VoIP server documentation for the proper setup
-The webphone doesn’t work when Private Browsing is enabled (because no outbound WebSocket connections are allowed when private browsing is enabled)
There are many browser and OS related bugs either in the browser itself or in the plugins used for VoIP (native/webrtc/java/flash). Most of the issues are handled automatically by the webphone by implementing workarounds for a list of well-known problems. Rarely, there is no way to circumvent such issues from the webphone itself and some adjustment is needed on the server or client side.
Some Chrome versions use only the default input device for audio. If you have multiple audio devices and a device other than the default has to be used, changing it in Chrome under Advanced config, Privacy, Content and media section will fix the problem.
Some Linux audio drivers allow only one audio stream to be opened, which might cause audio issues in some circumstances. Workaround: change the audio driver from oss to alsa or vice versa. Other workarounds: change the JVM (OpenJDK); change the browser.
Incoming calls might not have audio in some circumstances when the webphone is running in Firefox with the WebRTC engine using the mizu WebRTC to SIP gateway (fixed in v.1.8).
If the java (JVM or the browser) is crashing under MAC at the start or end of the calls, please set the “cancloseaudioline” parameter to 3. You might also set the "singleaudiostream” to 5. If the webphone doesn’t load at all on MAC, then you should check this link.
One way audio problem on OSX 10.9 Maverick / Safari when using the Java engine: Safari 7.0 allows users to place specific websites in an "Unsafe Mode" which grants access to audio recording. Navigate to Safari -> Preferences -> Security (tab) and tick the "Allow Plug-ins" checkbox. Then depending on the Safari version:
-from "Internet plug-ins (Manage Website Settings)" find the site in question and set the dropdown to "Run in Unsafe Mode".
-or go to Plug-in Settings and for the option "When visiting other websites" select "Run in Unsafe Mode".
You will be asked to accept the site's certificates or a popup will ask again; click "Trust". Alternatively, simply use the latest version of the Firefox browser.
Note that this Java related issue is not a real problem since the webphone uses the WebRTC plugin by default on MAC (Java might be used only if you explicitly configured the webphone browser plugin to prefer Java over WebRTC)
Java in latest Chrome is not supported anymore (the webphone will select WebRTC by default).
If for some reason you still wish to force Java, then in versions prior September 1, 2015 it can still be re-enabled:
Go to this URL in Chrome: chrome://flags/#enable-npapi (then mark activate)
Or via registry: reg add HKLM\software\policies\google\chrome\EnabledPlugins /v 1 /t REG_SZ /d java
(By default the webphone will handle this automatically by choosing some other engine such as WebRTC unless you forced java by the engine priority settings)
Symptoms:
· If your html can’t find the webphone library files you might see the following errors in your browser console:
o Failed to load resource: …/js/lib/api_helper.js
o ReferenceError: webphone_api is not defined
· If the engine is not supported by the browser or your webserver doesn’t allow the required mime types, then the page hosting the webphone might load, but you will not be able to make calls (the VoIP engine will not start)
Fixes:
· Missing library: Make sure that you have copied all files from the webphone folder (including the js and other sub-folders)
· Browser support: Make sure that your browser has support for any of the implemented VoIP engines: either Java or WebRTC is available in your browser or you can use the NS engine (on Windows, MAC and Android) or the app engine (on Android and iOS)
· Web server mime settings: Make sure that the .jar and .exe mime types are allowed on your webserver so the browsers are able to download platform specific native binaries
· HTTPS: Set an SSL certificate for your website for secure http, otherwise WebRTC will not work in Chrome
· Lib not found: If your webphone files are near your html (in the same folder) then you might have to set the webphonebasedir parameter to point to the javascript directory
webphonebasedir
This setting is deprecated after 1.9 as the webphone automatically detects its library path.
If the html page, where you are including the webphone, is not in the same directory as the webphone, then you must set the "webphonebasedir" as the relative path to the webphone base directory in relation to your html page.
The base directory is the "webphone" directory as you download it from Mizutech (which contains the css,js,native,... directories).
For example if your page is located at and the webphone files are located at then the webphonebasedir has to be set to '../modules/webphone/'
The webphonebasedir parameter must be set in the webphone_api.js file directly (not at runtime by webphone_api.webphonebasedir).
Default is empty (assumes that your html is in the webphone folder).
· NS engine download not found: you might have to set the nativepluginurl parameter to point to the ns installer file.
nativepluginurl
(string)
This setting is deprecated after 1.9 as the webphone automatically detects its library path.
The absolute location of the Native Service/Plugin installer. In most of the cases this is automatically guessed by the webphone, but if for some reason (for example: you are using URL rewrite) the guessed location is incorrect, then it can be specified with this parameter.
The Service/Plugin installer is located in webphone package "native" directory.
Example:
“”
Default value is empty.
Make sure that:
-you have set your SIP server address:port correctly (from the user interface or “serveraddress” parameter in the webphone_api.js file)
-make sure that you are using a SIP username/password valid on your SIP server
-if you are using the WebRTC engine with the Mizu WebRTC SIP gateway service, make sure that your firewall or fail2ban doesn’t block the gateways. You should white-list rtc.mizu-voip.com and usrtc.webvoipphone.com
-make a test from a regular SIP client such as Mizu softphone or X-Lite from the same device (if these also don’t work, then there is some fundamental problem on your server not related to our webphone, or your device firewall or network connection is too restrictive)
-send us a detailed client side log if it still doesn’t work, with loglevel set to 5 (from the browser console or from the softphone skin help menu)
Make a test call first from a simple SIP client such as Mizu softphone or X-Lite.
By default only the PCMU, PCMA, G.729 and the speex ultra-wideband codecs are offered on call setup, which might not be enabled on your server or peer UA.
You can enable all other codecs (PCMA, GSM, speex narrowband, iLBC and G.729) with the use_xxx parameters set to 2 or 3 (where xxx is the name of the codec: use_pcma=2, use_gsm=2, use_speex=2, use_g729=2, use_ilbc=2). Some servers have problems with codec negotiation (requiring a re-invite, which is not supported by some devices). In these situations you might disable all codecs and enable only one codec which is supported by your server (try to use G.729 if possible; otherwise PCMU or PCMA should be supported by all servers).
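As a sketch, the use_xxx codec settings above can be applied from JavaScript before the engine starts, assuming the setparameter(name, value) API mentioned later in this document; the helper function itself is illustrative:

```javascript
// Illustrative sketch: enable the extra codecs listed above before the engine
// starts. Parameter names (use_pcma, use_gsm, ...) are taken from this FAQ;
// the helper and the passed-in api object are assumptions for the example.
function enableAllCodecs(api) {
  // 2 = enabled and offered on call setup, per the use_xxx convention above
  var codecParams = ['use_pcma', 'use_gsm', 'use_speex', 'use_g729', 'use_ilbc'];
  codecParams.forEach(function (name) {
    api.setparameter(name, 2);
  });
  return codecParams.length; // number of codecs enabled
}
```

In a real page you would call enableAllCodecs(webphone_api) before webphone_api.start().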
If you receive the “ERROR, Already in call with xxx” error on outbound call attempts and you wish to enable multiple calls to/from the same number, set the disablesamecall parameter to 0.
If it still doesn’t work, send us a detailed client side log with loglevel set to 5 (from the browser console or from the softphone skin help menu)
-Call disconnection immediately upon setup can have many reasons such as codec incompatibility issues, NAT issues, DTLS/SRTP setup problems or audio problems. If you are not sure, send a detailed log to [email protected]
-If the calls are disconnecting after a few second, then try to set the “invrecordroute” parameter to “true” and the “setfinalcodec” to 0.
-If the calls are disconnecting at around 100 seconds, then most probably you are using the demo version which has a 100 second call limit.
In short:
Yes, the webphone works fine on private networks by default, without the need of any configuration changes.
Note: It is completely normal to use the webphone on LAN’s (browser client with private IP). This FAQ refers to the case when the SIP server (set by the “serveraddress” parameter where the webphone will register to) or Web server (from where you load your webpage embedding the webphone) is located on private IP.
Details:
The webphone can also be used on local LAN’s (when your VoIP server and/or Web server are on your private network).
-The NS and Java engines will connect directly to your server as a normal SIP softphone does.
-For WebRTC to work you will need a WebRTC to SIP gateway on your LAN or your PBX has to support WebRTC, otherwise this engine will not be picked up (this is handled automatically). You should also host your webpage on https (SSL certificate installed on your Web server) for WebRTC to work in Chrome.
-The webphone could use the Mizutech STUN, TURN, JSONP and HTTPS gateway services by default, however these are not required on local LAN’s (the webphone will detect this automatically and will not try to use these services while on local LAN).
In other words, if you wish to work with the webphone on a local LAN and your VoIP server doesn’t have WebRTC support or your webserver doesn’t have SSL installed for the domain you are using (HTTPS), we recommend to:
-use the NS engine on Windows (this should be preferred and auto-selected anyway for these circumstances)
-use Firefox with Java on other platforms (because the built-in Java applet engine will provide top quality VoIP for you)
The webphone can also be used without internet connection, with some limitations. An easy workaround for all of this would be to allow at least CA verifications (SSL certificate verifications) towards the internet; however if this is not possible then the following applies:
-WebRTC in Chrome needs https (secure http), which will work only with a local policy, otherwise the browser will not be able to verify the SSL certificate against public CA. If you can’t setup a local CA or browser rule for this, just disable WebRTC (or use Firefox instead of Chrome if you need WebRTC without certificate).
-Java applets need to be signed and on startup the JVM will have to pass the code verification signature. Workaround: Just disable the Java engine or add the applet location to the exception site list
-The NS engine can be used from unsecured http in local LAN’s with no issues (on https you need to add the localhost.daplie.com to the browser security exception list)
These circumstances will be automatically handled by the webphone, always selecting the best suitable engine if it has at least one available and unless you change the engine priority related settings.
If you are using the webphone in a controlled environment (where you have control over the clients, such as call-centers) then you might force the NS or Java engines by disabling or lowering the priority of the WebRTC engine (enginepriority_webrtc = 1). This is because NS and Java are more native for SIP/RTP and might have better quality, more features and lower processing costs on your server. The big advantage of WebRTC is that it can work without any extra plugin download/install; however in a controlled environment you can train your users (such as the callcenter agents) to allow and install the NS engine when requested, and this one-time extra action will be rewarded with long term quality improvement.
In case you wish to include the webphone globally on your websites to be present on all pages (such as a “call to support” widget floating on the bottom-right side of your page), make sure not to let the webphone auto-initialize itself with each page load/reload, because this might slow down the responsiveness of your website.
For this just set the “autostart” parameter to “false”.
In this case you can delay the VoIP engine initialization to the point when the enduser actually wishes to interact with your VoIP UI (such as clicking on your click to call button).
You just have to include the “webphone_api.js” to your page and create multiple VoIP UI elements.
For example you might have a contact list (or people list) displayed on your page, with a “Dial” button near each entry. You don’t even need to initialize the webphone on your page load (set the “autostart” parameter to “false”). Just use the webphone_api.call(number) function when a user clicks on the dial button and the webphone will initialize itself on the first call.
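A minimal click-to-call sketch along these lines might look like the following (the contact list is illustrative; it assumes webphone_api.js is included with autostart set to false, and relies only on the documented webphone_api.call(number) behavior of initializing the engine on the first call):

```javascript
// Illustrative contact list; names and numbers are placeholders.
var contacts = [
  { name: 'Support', number: '1001' },
  { name: 'Sales',   number: '1002' }
];

// Call this from the onclick handler of each "Dial" button.
function dial(number) {
  if (typeof webphone_api === 'undefined') {
    // webphone_api.js was not included on this page
    return 'error: webphone_api.js not loaded';
  }
  webphone_api.call(number); // the engine initializes itself on the first call
  return 'calling ' + number;
}
```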
The softphone user interface (softphone.html) can’t be included multiple times in a page (if you really need multiple phone UI on your page, then use separate iFrame for them).
Below are a few (both recommended and NOT recommended) methods to load the webphone into your webpage:
1. Load "webphone_api.js" using a script tag in the <head> section of your web page. This is actually not “on demand”, the webphone will be loaded when the page is loaded.
2. Load "webphone_api.js" on demand, by creating a <script> DOM element. Below is an example function which loads the script into the <head> section of the page:
function insertScript(pathToScript)
{
//create a <script> DOM element pointing to the given library file
var addScript = document.createElement( "script" );
addScript.type = "text/javascript";
addScript.src = pathToScript;
//appending it to <head> starts the download and execution of the script
document.head.appendChild( addScript );
}
3. The webphone can also be loaded into an iframe on demand. To have access to the webphone API in the iframe from the parent page, you have to follow the below two steps:
a. set the iframe's "id" attribute to "webphoneframe", for example: <iframe id="webphoneframe" src="softphone.html" width="300" height="500" frameborder="0" ></iframe>
b. include the "iframe_helper.js" file into your parent html page <head> section
Not recommended:
1. The web phone can be loaded on demand using document.write(), but it is a bad practice to call document.write() after the page has finished loading.
2. The web phone can also be loaded using any AMD (Asynchronous Module Loader). This is not recommended, because webphone also uses AMD (Require JS) to load its modules, so it won't improve performance, but it can lead to conflict between AMD loaders.
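Building on method 2 above, a variation that also signals when the library has finished loading can be useful, since the webphone API is not available until the script has executed. This is only a sketch; the path is illustrative and the injectable document argument exists solely to make the helper testable:

```javascript
// Load a script on demand and run a callback once it has executed.
// "doc" defaults to the page document; it is injectable for testing only.
function loadWebphone(pathToScript, onReady, doc) {
  doc = doc || document;
  var s = doc.createElement('script');
  s.type = 'text/javascript';
  s.src = pathToScript;
  s.onload = onReady;            // fires after the script has been executed
  doc.head.appendChild(s);       // appending starts the download
  return s;
}
```

Usage on a real page would be loadWebphone('js/webphone_api.js', function () { /* webphone_api is now available */ });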
The webphone is client side software and pages/tabs in your browser are separate entities, so a new page doesn’t know anything about an old one (except via server side sessions, but it is impossible to transfer a live JavaScript object via your server in this way).
There is no way to keep the webphone session alive between page loads.
Instead of this, you should choose one of the followings:
· run the webphone in a separate page (on its own dedicated page, so the enduser can just switch to this window/tab if needs to interact with the webphone)
· run the webphone in an iframe
· load your content dynamically (Ajax)
If this functionality is a must for your project, check the following FAQ for the possibilities.
There might be situations when you might wish to use the same webphone instance on multiple pages (opened in different browser tab or windows).
For example to start a call on a page, open a second page and be able to hangup the call on this second page.
First of all, it is important to note that the webphone is client side software (it is impossible to implement a voip client which would run on the server side).
This means that from the browser perspective, each of the pages is treated completely separately (only your web server knows that they belong together, in a so-called “session”). Each page load or reload will also completely re-initialize the webphone (if the webphone is included in the page). In other words: multiple pages opened from your domain don’t know about each other at all and one page can’t access the other one (except if you send some ajax message via your web server, but this kind of message passing is useless in this case since you can't transfer the whole webphone javascript object).
Below are a few ways to implement such functionality:
· Simple data sharing: If you just want to share some details across your pages, then you can do it via cookies or from pure javascript using the window.name reference. (This can be used only for simple data sharing, but not to share live JavaScript objects such as the webphone)
· NS engine: It is possible with the NS engine to have the webphone survive page (re)loads or opening new pages on your website. Contact mizutech if you are interested in this (works only with the NS engine)
· Using a global webphone object: There is way to share a global webphone instance across the opened pages: using the window.opener property which is a reference to the window that opened the current window. This means that you can access your global webphone object from secondary opened pages via the opener reference (Find an example for this below)
· Below is a simplified example to access the webphone object via window.opener:
//Important: Set the “autostart” parameter to “false” in the webphone_api.js parameters section to avoid auto initialization of the webphone on all pages where included (we will start the webphone explicitly when needed)
//store the wopener variable to be used here and also on subsequent pages (useful if we open a third page from the second and so)
var wopener = window; //set to this document
if(window.opener && window.opener.webphone_api)
{
wopener = window.opener; //set to parent page document
}
if(wopener.wopener && wopener.wopener.webphone_api)
{
wopener = wopener.wopener; //the parent page might also loaded from its own parent, so load it from there
}
//create a reference to the webphone so we can easily access it on this page via this variable
var wapi = webphone_api;
//Initialize your webphone if not initialized yet
if(wopener && wopener.webphone_api)
{
//load the wapi instance from the parent page in this case
wapi = wopener.webphone_api;
//check if already initialized
if(wapi.isstarted != 1)
{
wapi.isstarted = 1;
//we are starting the engine here, however you can delay the start if you wish, to the point when the user actually wishes to use the phone, such as making a call
wapi.start();
}
//else already initialized by parent
}
else if(wapi && wapi.isstarted != 1)
{
//we are the first page
wapi.isstarted = 1;
wopener = window; //set the wopener to point to this page
wapi.start();
}
//use the phone api on this page
function onCallButtonClick()
{
if(wapi) wapi.call();
else alert('error: no webphone found (webphone_api.js not included?)');
}
You can find a better/fully working example in the webphone package: multipage_example.html.
Sometimes you might have to change the settings for each session (for example changing the user credentials).
In these situations it might happen that the webphone is still using the old setting (which you have set for the previous session and not for the current one).
Usually this might happen if the webphone is already started and registered with the old parameters before it loads the new parameters (For example before you call the setparameter() API with the new values).
To prevent this, you should set the "autostart" parameter to "false" in the webphone_api.js
You can also set the “register” parameter to "0".
Then use the start() and/or register() functions only after the webphone has been supplied with the new parameters.
Note:
The webphone is also capable of loading its parameters from the URL. Just use the required format (wp_username, wp_password and others).
It is not needed to call the register() after start() because the start() will automatically initiate the register if the server/username/password is already preset when it starts and if you leave the register parameter at 1.
Certain operations (such as file download controls) might trigger window.unload events, which in turn might trigger webphone unregistration.
You might have to prevent these events being triggered by your controls by using this technique (it can be applied to any element such as <div>, <a>, <button>)
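One possible workaround, sketched below, is to route downloads through a hidden iframe so the hosting page never navigates and window.unload never fires. This is an illustrative approach, not necessarily the exact technique the paragraph above refers to; the element id is made up for the example:

```javascript
// Trigger a file download without navigating the hosting page (so the
// webphone stays registered). "doc" defaults to the page document and is
// injectable only to make the helper testable.
function downloadWithoutUnload(url, doc) {
  doc = doc || document;
  var frame = doc.getElementById('wp_download_frame');
  if (!frame) {
    // create the hidden iframe once and reuse it for later downloads
    frame = doc.createElement('iframe');
    frame.id = 'wp_download_frame';
    frame.style.display = 'none';
    doc.body.appendChild(frame);
  }
  frame.src = url; // the browser downloads the file, the page stays put
  return frame;
}
```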
For RTP statistics increase the log level to at least 3 and then after each call longer than 7 seconds you should see the following line in the log:
EVENT, rtp stat: sent X rec X loss X X%.
If you set the “loglevel” parameter to at least “5” then the important rtp and media related events are also stored in the logs.
You can also access the details about the last call from the softphone skin menu “Last call statistics” item.
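If you process such logs programmatically, a small parser for the "rtp stat" line shown above could look like this (the exact spacing of the line is assumed from this FAQ; the helper is illustrative):

```javascript
// Extract the RTP statistics from a webphone log line of the form
// "EVENT, rtp stat: sent X rec X loss X X%". Returns null if the line
// does not match.
function parseRtpStat(line) {
  var m = line.match(/rtp stat: sent (\d+) rec (\d+) loss (\d+) (\d+)%/);
  if (!m) return null;
  return {
    sent: +m[1],        // packets sent
    received: +m[2],    // packets received
    lost: +m[3],        // packets lost
    lossPercent: +m[4]  // loss percentage
  };
}
```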
In the SIP protocol the client endpoints have to send their (correct) address in the SIP signaling, however in many situations the client is not able to detect its correct public IP (or even the correct private local IP). This is a common problem in the SIP protocol which occurs with clients behind NAT devices (behind routers). The clients have to set their IP address in the following SIP headers: Contact, Via, SDP connect (used for RTP media). A well written VoIP server should be able to easily handle this situation, but a lot of widely used VoIP servers fail in correct NAT detection. RTP routing or offload should also be determined based on this factor (servers should always route the media between two nat-ed endpoints, and when at least one endpoint is on a public IP the server should offload the media routing). This is just a short description; the actual implementation might be more complicated.
With the WebRTC engine make sure that the STUN and TURN settings are set correctly (by default it will use mizu services which will work fine if your server is on the public internet).
For the NS and Java engines you may have to change the webphone configuration according to your SIP server if you have any problems with devices behind NAT (router, firewall).
If your server has NAT support then set the use_fast_stun and use_rport parameters to 0 and you should not have any problem with the signaling and media for webphone behind NAT. If your server doesn’t have NAT support then you should set these settings to 2. In this case the webphone will always try to discover its external network address.
Example configurations:
If your server can work only with public IP sent in the signaling:
-use_rport 2 or 3
-use_fast_stun: 1 or 2
If your server can work fine with private IP’s in signaling (but not when a wrong public IP is sent in signaling):
-use_rport: 0
-use_fast_stun: 0
-optionally you can also set the “udpconnect” parameter to 1
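The two example configurations above could be applied through the documented setparameter() API along these lines (the helper function and its boolean flag are assumptions for the sketch; parameter names are from this FAQ):

```javascript
// Apply one of the two NAT presets described above.
// serverHandlesNat = true  -> server has proper NAT support
// serverHandlesNat = false -> server needs the public IP in the signaling
function applyNatPreset(api, serverHandlesNat) {
  if (serverHandlesNat) {
    // send local addresses as-is; the server fixes them up
    api.setparameter('use_rport', 0);
    api.setparameter('use_fast_stun', 0);
  } else {
    // always try to discover the external network address first
    api.setparameter('use_rport', 2);
    api.setparameter('use_fast_stun', 2);
  }
}
```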
Asterisk is well known for its bad default NAT handling. Instead of detecting the client capabilities automatically, it relies on pre-configuration. You should set the "nat" option to "yes" for all peers.
More details:
Use the following settings if you have 2 voip servers:
· serveraddressfirst: the IP or domain name of the first server to try
· serveraddress: the IP or domain name of the next server
· autotransportdetect: true
· enablefallback: true
In this way the webphone will always send a register to the first server first and on no answer it will use the second server (the “first” server is the “serveraddressfirst” at the beginning, but it can change to “serveraddress” on subsequent failures to speed up the initialization time)
Alternatively you can also use SRV DNS records to implement failover or load balancing, or use a server side load balancer.
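The failover parameters listed above could be set from JavaScript as in the following sketch (server addresses are placeholders; setparameter(name, value) is assumed as documented):

```javascript
// Configure the primary/backup server pair described above.
function configureFailover(api, primary, secondary) {
  api.setparameter('serveraddressfirst', primary);  // tried first on register
  api.setparameter('serveraddress', secondary);     // used on no answer
  api.setparameter('autotransportdetect', 'true');
  api.setparameter('enablefallback', 'true');
}
```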
The WebRTC functionality highly depends on your OS/browser and the server side WebRTC-SIP module. Check the following if the webphone is using the WebRTC engine and you have difficulties:
· Make sure that your browser has support for WebRTC and it works. Visit the following test pages: test1, test2, test3
· Make sure to run the webphone from secure http (https) otherwise WebRTC will not work in Chrome and Opera
· If you have set the “webrtcserveraddress” parameter to point to your server or gateway, make sure that your server/gateway has WebRTC enabled and test it also from some other client such as sipml5: config; test
· You might contact mizutech support with a detailed log about the problem
· If you are unable to fix webrtc in your setup then you might disable the webrtc engine by setting the enginepriority_webrtc parameter to 0 or 1.
See the other possibilities here.
You might receive similar popups or the calls just fail if you are using the WebRTC engine but haven’t enabled the browser to use your microphone/camera device or denied it previously (technically, the WebRTC getUserMedia() function call will fail in this case).
Normally before WebRTC calls your browser should popup a box asking to allow microphone access. You should click on the Ok/Yes/Allow/Share/Always Share button there.
However if you clicked on the No/Don’t Share/Deny/Always Deny button sometime before, then the browser might not pop up with this question again.
· In this case you should see a red icon in your browser address bar and click on “Allow” from there.
· You can also allow a website from your browser security/privacy settings.
(In Chrome: settings -> show advanced settings -> privacy section -> content settings -> microphone -> manage exceptions).
· Also you must use wss (secure websocket) for your WebRTC server WebSocket connection, otherwise Chrome will fail on unsecure ws.
· Chrome also might fail if you try to run WebRTC from html launched from local file system
The workaround for this is to launch Chrome with the --allow-file-access-from-files parameter
(Something like this on windows: "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --allow-file-access-from-files C:\path\softphone.html)
· Also test your browser webrtc capabilities from here and here.
The webphone is capable of handling the situation when calls are being connected without a microphone device. This is useful only if the user needs to listen to some audio such as an IVR.
The only exception is if you use its WebRTC engine with Firefox, since Firefox requires the MediaStream to have a valid MediaStreamTrack, but this is returned from getUserMedia() which fails on Firefox if the user doesn't have a microphone, with the following error:
WRTC, ERROR, InternalError: Cannot create an offer with no local tracks, no offerToReceiveAudio/Video, and no DataChannel.
This is a bug in Firefox already reported also by others as you can see here and here.
This situation is handled automatically by the webphone or you can force calls to always pass or always fail via the checkmicrophone setting.
Call quality is influenced primarily by the followings:
· The engine used (Java and NS tends to have the best quality)
· Codec used to carry the media (wideband has better quality)
· Network conditions (check your upload speed, packet loss, delay and jitter)
· Hardware: enough CPU power and quality microphone/speaker (try a headset, try on another device)
· AEC and denoise availability
If you have call quality issues then the followings should be verified:
· whether you have good call quality using a third party softphone from the same location (try X-Lite for example). If not, then the problem should be with your server, termination gateway or bandwidth issues.
· make sure that the CPU load is not near 100% when you are doing the tests
· make sure that you have enough bandwidth/QoS for the codec that you are using
· change the codec (disable/enable codecs with the “codec” parameter)
· deploy the mediaench module (for AEC and denoise). (Or disable it if it is already deployed and you have bad call quality)
· webphone logs (Check audio and RTP related log entries. Also check the statistics after call disconnect.)
· wireshark log (Check missing or duplicated packets)
1. Review your server NAT related settings
2. Set the “setfinalcodec” parameter to 0 (especially if you are using Asterisk or OpenSIPS)
3. Check stun and turn settings (might be used for WebRTC if your server is not on the public internet, doesn’t route the RTP or you need peer to peer media routing)
4. Set use_fast_stun, use_fast_ice and use_rport to 0 (especially if you are using SIP aware routers). If these don’t help, set them to 2.
5. If you are using Mizu VoIP server, set the RTP routing to “always” for the user(s)
6. Make sure that you have enabled all codecs
7. Make a test call with only one codec enabled (this will solve codec negotiation issues if any)
8. Try the changes from the next section (Audio device cannot be opened)
9. If you still have one way audio, please make a test with any other softphone from the same PC. If that works, then contact our support with a detailed log (set the” loglevel” parameter to 5 for this)
If you can’t hear audio, and you can see audio related errors in the logs (with the loglevel parameter set to 5), then make sure that your system has a suitable audio device capable of full duplex playback and recording with the following format:
PCM SIGNED 8000.0 Hz (8 kHz) 16 bit mono (2 bytes/frame) in little-endian
If you have multiple sound drivers then make sure that the system default is workable or set the device explicitly from the webphone (with the “Audio” button from the default user interface or using the “API_AudioDevice” function call from java-script)
To make sure that it is a local PC related issue, please try the webphone also from some other PC.
You might also try to disable the wideband codecs (set the use_speexwb and use_speexuwb parameters to 0 or 1).
Another source for this problem can be if your sound device doesn’t support full duplex audio (some broken Linux drivers have this problem). In this case you might try to disable the ringtone (set the “playring” parameter to 0 and check if this solves the problem).
If these don’t help, you might set the “cancloseaudioline” parameter to 3 and/or the “singleaudiostream” parameter to 5.
Depending on your server configuration, you might not have ringback tone or early media on call connect.
There are a few parameters that can be used in this situation:
· set the “changesptoring” parameter to 3
· set the “natopenpackets” parameter to 10
· set the “earlymedia” parameter to 3
· change the “use_fast_stun” parameter (try with 0 or 2)
One of these should solve the problem.
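As a sketch, these settings might look like this in the webphone_api.js parameters object (the parameter names come from the list above; the values and the exact object layout are illustrative, so check them against your own copy of webphone_api.js):

```javascript
// Illustrative sketch only; the meaning of each value is described in the
// list above, and the right combination depends on your server.
// Adjust one at a time and retest, rather than applying all at once.
var parameters = {
    changesptoring: 3,
    natopenpackets: 10,
    earlymedia: 3,
    use_fast_stun: 0   // try 0 first, then 2 if ringback is still missing
};
```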
Make sure that your softswitch supports IM and that it is enabled. The webphone uses the MESSAGE method from the SIP SIMPLE protocol suite for this, as described in RFC 3428.
Most Asterisk installations might not support this by default. You might use Kamailio for this purpose or any other softswitch (most of them support RFC 3428).
If subsequent chat messages are not sent reliably, set the “separatechatdiag” parameter to 1.
To be able to receive calls, the webphone must be registered to your server by clicking on the “Connect” button on the user interface (or, if you don’t display the webphone GUI, you can use the “register” parameter with the supplied username and password, or the register() JavaScript SIP API)
Once the webphone is registered, the server should be able to send incoming calls to it.
Other common causes include:
-NAT: if your browser webphone is behind NAT, check if your server can handle NATs properly (via rport and other settings). As a workaround you might try to start the webphone with the use_fast_stun parameter set to 0, and if it still doesn’t work, try it with 2.
-call fork: if you are registered from multiple locations with the same credentials then your server must support call forking to ring all devices. Otherwise make sure to use the same credentials only from one location and one protocol (don’t mix SIP and WebRTC logins)
-make sure that autoignore or DND (do not disturb) are not set
-check if your server is sending the INVITE to the proper IP:port (from where it received the latest valid REGISTER from the webphone)
If the calls are still not coming in, send a detailed log from the webphone (set the loglevel parameter to 5) and also from the caller (your server or remote SIP client)
This depends on the circumstances and there is no such thing as the "best codec". All commonly used codecs present in the webphone are well tested and suitable for IP calls, with an optimized priority order by default depending on the environment (client device, bandwidth, server capabilities).
This means that usually you don’t need to change any codec related settings except if you have some special requirement.
Between webphone users (or for other IP to IP calls) you should prefer wideband codecs (this is why you should just leave the opus and speex wideband and ultra-wideband codecs with the highest priority if you have calls between your VoIP users. These will be picked for IP to IP calls and simply omitted for IP to PSTN calls).
Otherwise G.729 provides both good quality and low bandwidth if this codec is available for you.
G.711 (PCMU/PCMA) is always supported and offers good call quality, using somewhat more bandwidth than G.729.
The other available codecs are iLBC and GSM. These offer a good compromise between quality and bandwidth usage if the above mentioned opus and G.729 are not supported by your server or the other peer.
To calculate the bandwidth needed, you can use this tool. You might also check this blog entry: Codec misunderstandings
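As a rough illustration of what such a calculator does, the wire bandwidth is the codec payload rate plus the per-packet header overhead (the 54-byte overhead figure below is an assumption for RTP/UDP/IPv4 over Ethernet with 20 ms packetization, not a value taken from the linked tool):

```javascript
// One-direction RTP bandwidth estimate in kbps:
// codec payload rate plus per-packet header overhead.
function rtpBandwidthKbps(codecKbps, packetsPerSec, overheadBytesPerPacket) {
  return codecKbps + (packetsPerSec * overheadBytesPerPacket * 8) / 1000;
}

// G.711 at 64 kbps payload, 50 packets/sec, ~54 bytes overhead per packet
var g711 = rtpBandwidthKbps(64, 50, 54);   // ≈ 85.6 kbps on the wire
// G.729 at 8 kbps payload with the same packetization
var g729 = rtpBandwidthKbps(8, 50, 54);    // ≈ 29.6 kbps on the wire
```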
With the webphone you don’t need to change the codec settings except if you have some special requirement. With the default settings the webphone is already optimized and will always choose and negotiate the “best” available codec.
The webphone is a favorite VoIP client for call centers as it can be easily integrated with any frontend or CRM and it can be used for both inbound and outbound campaigns. The integration usually consists of a database lookup for caller/callee details on incoming/outgoing calls so the agent can see all the details about the customer.
There are multiple ways to implement such kind of database/CRM lookups:
-From JavaScript catch the call init from the onCallStateChange callback (on status = callSetup) and load the required data by an AJAX call to your backend
-Via the “scurl_displaypeerdetails”: implement a HTTP API on your server which will return the peer details and set the scurl_displaypeerdetails webphone setting to point to this API URL
-If your backend has VoIP client integration capabilities, then just implement its specification. For example here is a tutorial about integrating the webphone with Salesforce
There are many other things you can do for a better integration, such as processing CDR records or recording the calls; however most of these can be easily controlled by the webphone parameters or implemented via the API.
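For the first approach, a minimal sketch (the "/api/crm/contact" endpoint and the showContactPanel function are hypothetical; the onCallStateChange callback and its callSetup event are documented elsewhere in this guide):

```javascript
// Build the CRM lookup URL for a caller/callee number.
// The "/api/crm/contact" endpoint is a hypothetical example.
function crmLookupUrl(baseUrl, peerNumber) {
  return baseUrl + '?number=' + encodeURIComponent(peerNumber);
}

// In the page (browser only): fetch the contact details on call setup.
// webphone_api and showContactPanel are assumed to exist there.
/*
webphone_api.onCallStateChange(function (event, direction, peername) {
  if (event === 'callSetup') {
    fetch(crmLookupUrl('/api/crm/contact', peername))
      .then(function (r) { return r.json(); })
      .then(showContactPanel);
  }
});
*/
```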
We recommend using the NS and/or the Java VoIP engine in call centers since these provide native call processing, connecting directly to your SIP server without the need of any extra layer such as WebRTC. More details here.
The “P2P” term is misleading sometimes and it can have the following meanings:
· Server assisted phone to phone calls. This means that both endpoints will be called by the server and once connected, the server interconnects the 2 endpoints. It can be useful when the client device doesn’t have an internet connection or doesn’t have any platform to enable VoIP, such as an old dumb phone. This is the concept we refer to as the P2P engine.
· Peer to peer connection: useful to bypass the server for media or both media and signaling (peer to peer media routing is more important here).
· Sometimes it might refer to peer to peer encryption (or end to end E2E encryption) which means that the server (if used) is a passive party from the encryption point of view and is unable to decrypt the streams between the endpoints (just forwards the stream if needed)
The webphone also has support for peer to peer encrypted media with direct streaming (this is done via ICE techniques, with automatic fallback to routing via the server if a direct path can’t be found).
These terms might also be misleading, especially for users with no VoIP/SIP knowledge.
Register or registration provides a way for SIP clients to connect/login to the server so the server will learn the client address and will be able to route calls and other messages to it. It is implemented by sending a REGISTER message via SIP signaling. The server might or might not challenge the request with an authentication request (in this case the client will send a second REGISTER with a hash of its credentials). By credentials we refer to the SIP username/password.
However:
· Register is optional and is not really needed if your client will make only outbound calls (not used to accept calls or chat)
· You can configure your server to not require registrations (actually most servers don’t require it by default; however, in some servers the default configuration is to not allow calls if there was no previous successful registration)
· For the webphone you can set the “register” parameter to 0 to skip registration (so the webphone will not send REGISTER requests)
· Disabling registration is not a security threat since the server will do the same authentication for each call as it does for registrations (so the clients will not be able to make calls if their credentials are incorrect)
· You can also configure your server to allow blind registrations. This means that the client might send the REGISTER with any credentials (any username/password) and it will be unconditionally accepted
· You can also configure your server to allow blind calls. This means that the client might send the INVITE with any credentials (any username/password) and it will be unconditionally accepted (the call will be routed)
· If your server accepts blind registrations and calls then you can set the webphone password parameter to any value since it will not be checked or used anyway. (You can set it to “nopassword” as a special value to hide it from settings and login forms)
· There are situations when even the username doesn’t matter (if you wish to make only unconditional outbound calls or calls to an IVR). However you must still set the username parameter to some value or allow the user to enter something, since it is required for the SIP messages. You might set it to “Anonymous” in this case.
Sometimes you might use a separate username/password combination on your website than on your SIP server. In this case you can auto-provision the webphone with the SIP credentials if the user is already logged in on your website, to avoid typing a different username/password. This can be implemented in multiple ways:
· by dynamically generating the webphone settings from a server script (set the username/password from the server since there you already know the signed in user details and you can grab the SIP credentials from your softswitch database)
· implement a custom API which returns the SIP credentials and set its URI as the “scurl_setparameters” parameter (the webphone will call the scurl_setparameters URI, wait for the (key/value) parameters in the response, and once received it will start the webphone)
· handle it from JavaScript (use the setparameter() API to set the username/password)
· implement some alternative authentication method on your SIP server (for example based on a custom SIP header which you might set from the web session using the setsipheader() API call)
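A sketch of the JavaScript variant (the setparameter() API is documented in this guide; the session object and its field names are hypothetical placeholders for whatever your backend returns):

```javascript
// Copy SIP credentials from the web session into the webphone
// before it registers. "wp" is the webphone_api object in the page;
// "session" is a hypothetical object from your own backend.
function provisionCredentials(wp, session) {
  wp.setparameter('username', session.sipUsername);
  wp.setparameter('password', session.sipPassword);
}
```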
Depending on the settings, the webphone will automatically register upon startup or you can explicitly connect to the server by calling the register() API.
To find out whether the webphone is successfully registered or not, you can use the isregistered() API to query the status at any time.
You can also receive notifications about the registration status via the following callbacks:
· onRegistered: callback called on successful registration
· onUnRegistered: callback called after “logoff”
· onDisplay: callback called when register fails with the message containing one of the following text:
o Connection lost
o No network
o No response from server
o Server lost
o Authentication failed
o Rejected by server
o Register rejected
o Register expired
o Register failed
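One way to use these callbacks is to map the onDisplay messages above to a simple online/offline state for your UI; a sketch (the message list is copied from above, and the webphone_api wiring and setUiState function are shown only as comments since they exist in the page, not here):

```javascript
// Register-failure messages as listed above.
var REGISTER_FAILURES = [
  'Connection lost', 'No network', 'No response from server', 'Server lost',
  'Authentication failed', 'Rejected by server', 'Register rejected',
  'Register expired', 'Register failed'
];

// Returns true when an onDisplay message indicates a register failure.
function isRegisterFailure(displayMsg) {
  return REGISTER_FAILURES.some(function (m) {
    return displayMsg.indexOf(m) !== -1;
  });
}

// In the page:
// webphone_api.onRegistered(function () { setUiState('online'); });
// webphone_api.onDisplay(function (msg) {
//   if (isRegisterFailure(msg)) { setUiState('offline'); }
// });
```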
For outgoing calls the Caller ID (CLI/A number display) is controlled by the server and the application at the peer side (be it a VoIP softphone or a pstn/mobile phone).
You can use the following parameters to influence the caller id display at the remote end:
o username (this is used for both SIP username and authentication username if sipusername is not set)
o sipusername (if this parameter is set, then the “sipusername” will be used for authentication and the “username” parameter as the SIP username)
o displayname (SIP display name)
If you set all these parameters, then they will be sent in the SIP signaling in the following way (see the uppercase words):
INVITE sip:[email protected] SIP/2.0
From: "DISPLAYNAME" <sip:[email protected]>;tag=xyz
Contact: "DISPLAYNAME"<sip:[email protected]>
Remote-Party-ID: "DISPLAYNAME" <sip:[email protected]>;party=calling;screen=yes;privacy=off
Authorization: Digest username="SIPUSERNAME",realm="sipdomain.com" …
Some VoIP servers will suppress the CLI if you are calling to PSTN and the number is not a valid DID number, or the webphone account doesn’t have a valid DID number assigned (you can buy DID numbers from various providers).
The CLI is usually suppressed if you set the caller name to “Anonymous” (hide CLI).
If required by your SIP server, you can also set a Caller Identity header as a “customsipheader” parameter. (P-Preferred-Identity/P-Asserted-Identity/Identity-Info)
For incoming calls the webphone will use the caller username, name or display name to display the Caller ID (SIP From, Contact and Remote-Party-ID fields).
Here is a simple example:
webphone_api.onCallStateChange(function (event, direction, peername, peerdisplayname, otherdetails)
{
    if (event === 'callSetup')
    {
        if (direction == 1)
        {
            // outgoing call
        }
        else if (direction == 2)
        {
            // incoming call
            document.getElementById('incoming_call_layout').style.display = 'block'; // display Accept, Reject buttons
            /*
            <div id="incoming_call_layout">
                <button onclick="webphone_api.accept();">Accept</button>
                <button onclick="webphone_api.reject();">Reject</button>
            </div>
            */
        }
    }
    // end of a call, even if it wasn't successful
    if (event === 'callDisconnected')
    {
        document.getElementById('incoming_call_layout').style.display = 'none'; // hide Accept, Reject buttons
    }
});
More details and examples can be found here.
If you have changed any parameter in the webphone_api.js, make sure that you see the latest version if you open the js file directly in the browser like:
If you don’t see the recent settings that means that the old version was cached by your browser, by your webserver or some intermediary proxy.
The webphone might store/cache previous settings in cookies and in IndexedDB ("localforage").
Refresh the browser cache by pressing F5 or Ctrl+F5.
In Firefox you can clear all settings related to the webphone by pressing ALT, then select “Show All History” from the “History” menu, then right click on your domain and select “Forget About This Site”.
Once a parameter is set, it might be cached by the browser phone and used even if you remove it later.
To prevent this, set the parameter to “DEF” or “NULL”. So instead of just deleting or setting an empty value, set its value to “DEF” or “NULL”. “DEF” means that it will use the parameter default value. For number values instead of removing or commenting them out, you might change to their default value instead.
Also check this FAQ if you made a recent upgrade but still seems that the old version is running.
If it still doesn’t work, you should check from another PC (to make sure that nothing is preinstalled/cached on your PC).
If it still doesn’t work, send a detailed log to Mizutech support.
First you should backup your existing webphone folder.
Extract the zip supplied by Mizutech and replace all the files in your webphone folder with the new content, but make sure to:
-preserve the settings: if you have set the webphone configuration to the webphone_api.js parameters, make sure to set them also in the new file
-don’t overwrite other files where you made changes if any (for this reason it is not recommended to make any changes in the webphone files)
Although the webphone_api.js file is rarely changed, we don’t recommend writing code in this file. Use separate js files for your project and just include webphone_api.js instead of using it for custom code.
Also make sure to adjust the minserviceversion if you have it set to any value, otherwise you might have to upgrade the NS service manually or the new webphone will continue to use the old version (which is not a problem most of the time, but we don’t recommend using very old outdated versions).
Note: new versions of the webphone are always backward compatible and backward API compatibility is always ensured (except occasional minor/compatible changes), so you can upgrade without any changes in your code. However, each new version contains changes in the VoIP engines, so you should always verify that it fulfills your needs and downgrade to the previous version if you encounter any issues (then you might try the upcoming release again to see if your issue was fixed).
Make sure that you are actually using the new version. Refresh the browser cache by pressing F5 or Ctrl+F5.
If your webphone is using the NS engine, then it might be possible that the PC is running an old version. This can be updated in the following ways:
-manually as described below
-set the minserviceversion parameter. If higher than the current installed version then it will ask the user to upgrade (one click install)
Note: the NS service version for v.1.9 softphone is 7 (so you can set the “minserviceversion” setting to 6 to force the latest version for all users, but this is already enforced by default)
-auto-upgrade: the core of the NS engine is capable of auto-upgrading itself if new versions are found (you can disable this by setting the “autoupgrade” parameter to 6)
(In the NS service there is a built-in SSL certificate for localhost. This is also capable of auto-upgrade when new certificates are found, unless you set the “autoupgrade” to 5)
Also check this FAQ if your new settings are not applied.
If it still doesn’t work, you should check from another PC (to make sure that nothing is preinstalled/cached on your PC).
If it still doesn’t work, send a detailed log to Mizutech support.
In some situations under Windows OS the webphone might install an NT service named “Webphone” (this is the NS service plugin and it is installed only on user opt-in).
· Disabling: If you don’t wish to use the NS engine, you can just disable the service (set startup type to Manual and Stop the service) or set the enginepriority_ns to 0
· Uninstalling: The service has its own uninstaller, so you can easily uninstall it from the Add/Remove Programs control panel. It can be also removed with the –uninstall parameter. Example: C:\Program Files (x86)\WebPhoneService\WebPhoneService.exe –uninstall.
· Re(installing): The install can be done from the softphone skin by just going to menu -> settings -> advanced settings -> sip settings -> voip engine -> select the NS engine. That should offer the download of the new version (if the service is not already running; so if you need to install a new version, you should uninstall or stop it first).
You can also (re)install/upgrade manually by running the “WebPhoneService_Install.exe” from the webphone\native folder. (You can also download it from your webserver: or from the webphone package provided by Mizutech). Just run the executable and it will install the NS engine automatically (this should work even if the service is already running, as it will automatically update your old version)
Note: this is relevant only for our old customers using the old java applet based webphone.
This new webphone has an easy to use API, however if you wish to keep your old code, you can do so with minimal changes as we created a compatibility layer for your convenience. Follow the next steps to upgrade to our new webphone:
1. The root folder of the new webphone is the folder, in which "webphone_api.js" and "softphone.html" files are located.
2. Copy the contents of the new webphone root folder, in the same folder where the old webphone's .html file is (merge "images" and "js" folders, if asked upon copy process).
3. In the <head> section of the .html file, where the old webphone is, replace line:
<script type="text/JavaScript" src="js/wp_common.js"></script>
with the following lines:
<script type="text/JavaScript" src="webphone_api.js"></script>
<script type="text/JavaScript" src="oldapi_support.js"></script>
Note: Don't remove or add any webphone related Javascript file imports.
The "jquery-1.8.3.min.js" file will be imported twice, but that is how it is supposed to be in order for the upgrade to work correctly.
For old webphone customers: please note that this new webphone is a separate product and a purchase or upgrade cost might be required. The old java applet webphone has been renamed to “VoIP Applet” and we will continue to fully support it. More details can be found in the wiki.
Auto-provisioning or auto-configuration is a simple way to configure IP-phones for SIP servers used on local LAN.
The exact same behavior can be easily achieved by using the webphone with dynamic parameters.
First you should set the parameters common for all instances (all users) on your webserver in the webphone_api.js file.
Then you just have to set account related settings (per user settings) at runtime using one of the methods specified in the Parameters chapter (by URL, via a server API with scurl_setparameters, or from JavaScript with the setparameter API).
The web sip phone can be easily localized for multiple languages.
The "language" parameter is a 2 character language code string, for example: "en" for English and "hu" for Hungarian.
To add another language, just take the list of English strings from stringres.js, translate them to the desired language and add an underscore followed by the two character language code suffix, to every string entry like below:
Desired language: Italian
Language code will be: it
- set the language API parameter: language: 'it',
- after translating all strings from English to Italian, copy them back to stringres.js adding the "_it" suffix:
String resource example:
For english: my_page_title: 'Phone',
For italian: my_page_title_it: 'Telefono',
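The suffix lookup can be sketched like this (a simplified model of how such suffixed keys resolve, falling back to the unsuffixed English entry; the real stringres.js loading is handled by the webphone itself):

```javascript
// Resolve a string for the configured language, falling back to the
// unsuffixed (English) entry when no translation exists.
function tr(strings, key, lang) {
  var localized = strings[key + '_' + lang];
  return localized !== undefined ? localized : strings[key];
}

var strings = {
  my_page_title: 'Phone',        // English (default)
  my_page_title_it: 'Telefono'   // Italian ("it" suffix)
};
```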
Contact support if you have any difficulties with this. We will send you the file to be translated and once you translate it, we will apply to your webphone build.
Webphone comes with a few prebuilt skins, which can be changed from Settings -> Theme.
The look and feel of the webphone skin can further be customized by altering any of the predefined themes found in: js\softphone\themes.js.
Open the themes.js file (it is located in webphone/js/softphone folder) with your favorite text editor.
The current webphone themes are stored in the "themelist" variable; you can edit, for example, theme_1 to your needs. Please note that theme_0 (the default theme) can't be modified from this file.
The variable names should make their meaning obvious (bg means background); the colors are defined in RGB hex.
mainlayout.css: color to replace: #1d1d1d with urlparam: bgcolor
wphone_1.0.css: color to replace: #333 with urlparam: buttoncolor
wphone_1.0.css: color to replace: #373737 with urlparam: buttonhover
wphone_1.0.css: color to replace: #22aadd with urlparam: tabselectedcolor
mainlayout.css: color to replace: #31b6e7 with urlparam: fontctheme
mainlayout.css: color to replace: #ffffff with urlparam: fontcwhite
wphone_1.0.css: color to replace: sans-serif with urlparam: fontfamily
After you modify a variable's value, you need to reload your webphone, otherwise the modifications will not have any effect.
You will also need to set the “colortheme” parameter to match your theme index.
You can create new themes easily by searching for existing dialer skins; after you find one that is close to your needs, just pick the preferred colors using a tool like Color Pic, or search for a color matching tool to help you build better color schemes.
The webphone can load its settings also from the webpage URL and perform various actions such as initiate a call. All the listed parameters can be used, prefixed with “wp_”.
Example to trigger a call with the softphone by html url parameters:
Example to trigger a call with the click to call by html url parameters:
Example trigger chat by html parameters
Note: you should use a clear-text password only if the account is locked on your server (can’t call costly outside numbers). Otherwise you should pass it encrypted or use an MD5 hash instead.
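As a sketch, such a URL can be built from JavaScript (the page name and the parameter set are illustrative; the only documented conventions assumed here are the “wp_” prefix and the wp_username parameter name mentioned in this guide, while “callto” is a hypothetical action parameter):

```javascript
// Build a webphone URL with "wp_" prefixed query parameters.
function webphoneUrl(page, params) {
  var query = Object.keys(params).map(function (key) {
    return 'wp_' + key + '=' + encodeURIComponent(params[key]);
  }).join('&');
  return page + '?' + query;
}

var url = webphoneUrl('softphone.html', {
  username: 'alice',
  callto: '1234'    // hypothetical action parameter
});
// → softphone.html?wp_username=alice&wp_callto=1234
```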
See also click to call.
Just set your phone number in your email signature as a link (URL anchor) to the webphone click to call:
In this way the phone number in your email signature will become a clickable link which will trigger the webphone and will call your number automatically via SIP.
Instead of the click2call_example.html, you can also use the softphone.html (or your custom webphone html).
For account username/password you should just create a special extension on your SIP server which is not authenticated and allows unrestricted calls to local extensions only (not to outbound/paid).
More details about click to call can be found here.
Multi-line means the capability to handle more than one call at the same time (multiple channels).
By default you don't need to do anything to have multi-line functionality, as this is managed automatically, with each call placed on the first “free” line.
If you have multiple ongoing calls, then the active call will be the last one you make or pickup.
User interface:
Multi line functionality is enabled by default in the webphone.
Once the end user initiates or receives a second call, the webphone will automatically switch to multi-line mode.
If you are using the softphone skin (the softphone.html) its user interface will display the separate calls in separate tabs, so the user can easily switch between the active calls.
Actually the following user interface elements are related to multi-line:
· on the Call page, once you have a call, you can initiate more calls from Menu -> New call
· for every call session, a line button will appear at the top of the page so the users can change the active line from there
· the line buttons for managing call sessions, will also appear in case another incoming call arrives
· you can easily transfer the call from line A to line B
· you can easily interconnect the active lines (create conference calls)
Disable multi-line
You can disable multi-line functionality with the following settings:
-set the “multilinegui” webphone parameter to 0
-set the "rejectonbusy" setting to "true"
Other related parameters are the "automute" and "autohold" settings.
JavaScript library/API
When the webphone is used as an SDK, the lines can be explicitly managed by calling the setline/getline API functions:
- webphone_api.setline(line); // Sets the current line. Just set it before other API calls and the next API calls will be applied to the selected line
- webphone_api.getline(); // Returns the currently selected line
For example if there are multiple calls in progress and you wish to hangup one of the calls, then just call the webphone_api.setline(X) before to call webphone_api.hangup().
The active line is also switched automatically on new outgoing or incoming calls (to the line where the new call is handled).
Channels
The following line numbers are defined:
o -2: all (some API calls can be applied to all lines. For example calling hangup(-2) will disconnect all current calls)
o -1: current line (means the currently selected line or otherwise the “best” line to be used for the respective API)
o 0: undefined
o 1: first channel
o 2: second channel
o …
o N: channel number N
Some behaviors will automatically change when you have multiple simultaneous calls. For example the conference API/button will automatically interconnect the existing parties or the transfer API/button will transfer the call from the current line to the other line.
Note: If you use setline() with -2 or -1, it will be remembered only for a short time; after that getline() will report the real active line or “best” line.
API usage example:
webphone_api.call('1111'); // make a call (line 1)
webphone_api.call('2222'); // make a second call (line 2)
// set up a conference call between all lines
webphone_api.setline(-2); // select all lines
webphone_api.conference();
// disconnect the second call
webphone_api.setline(2);
webphone_api.hangup();
// put the first call on hold
webphone_api.setline(1);
webphone_api.hold(true);
The best engine is selected by the webphone automatically based on the circumstances (client device, OS, browser, network, server).
However the preferred engine can be influenced on 3 levels:
-Choice presented to the user in some circumstances on startup (This is not always presented. The webphone will go with the best engine when there is a definitive winner, without asking the user)
-Engine settings in the user interface, so the enduser might change its own preferred engine
-Engine priority options in the configuration. You can set this in the “webphone_api.js” (enginepriority_xxx settings as discussed in this documentation Parameters section)
There should be very rare circumstances in which the default engine selection algorithm should be changed. The web SIP library always tries to select the engine which will disturb the user the least (minimizing required user actions) and offers the best performance.
For example, don't be inclined to disable Java just because of its age. Users will not be prompted to install Java by default; however, if Java is already enabled in the user's browser, then why not use it? Java can offer native-like VoIP capabilities and there should be no reason to disable it.
We spent a considerable amount of work to always select the best possible engine in all circumstances. Don't change this unadvisedly, except if you have a good reason to use a particular engine in a controlled environment.
This is a question often asked by our customers about how to optimize the webphone library for best call quality. The answer is rather simple for this question:
The best settings are the default settings. The default settings are optimized and should be preferred in almost all use cases except if you have some uncommon needs. You should change the default settings only if you have a good reason to do so. See also the “best codec“ section.
The easiest way to specify parameters for the webphone is to just enter them in the webphone_api.js file (parameters variable at the top of the file).
However if you need to integrate the webphone with your server (for example with a CRM) you might have to set different parameters per session (for example different user credentials based on the currently logged-in user). There are 3 ways to do this:
1. With the client side JavaScript using the webphone setparameter API (get the parameters from you webapp or via ajax requests)
2. Just generate the URL (iframe or link) dynamically from your server side scripts with the parameters set as required (wp_username, wp_password and other URL parameters).
3. Set the “scurl_setparameters” setting to point to your server side http api which will have to return the details once called by the webphone.
This will be called after the "onStart" event and can be used to provision the webphone from a server API. The answer should contain parameters as key/value pairs, e.g.: username=xxx,password=yyy.
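The key/value answer format can be parsed like this (a sketch of the format described above; the real parsing is done inside the webphone, so this is only useful if you implement or test such an endpoint yourself):

```javascript
// Parse a "username=xxx,password=yyy" style provisioning answer
// into a plain object.
function parseProvisioning(body) {
  var out = {};
  body.split(',').forEach(function (pair) {
    var i = pair.indexOf('=');
    if (i > 0) {
      out[pair.slice(0, i).trim()] = pair.slice(i + 1).trim();
    }
  });
  return out;
}
```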
See the beginning of the parameters section for all other possibilities.
The webphone can generate detailed logs for debugging purposes.
For this just set the “loglevel” setting to 5 (or enable logs from the user interface if any; this is already set to 5 by default in the demo versions).
Once enabled, you can see the logs in the browser console or in the softphone skin help menu (if you are using this GUI). If the Java engine is being used, then the logs will also appear in the Java console. You can also use the API: the getlogs() and onLog(callback) functions.
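For example, the onLog callback can be used to collect the logs into a buffer that you can attach to a support request (a sketch assuming the documented onLog(callback) delivers one log line per invocation):

```javascript
// Collect webphone log lines in memory.
function makeLogCollector() {
  var lines = [];
  return {
    handler: function (line) { lines.push(line); },
    dump: function () { return lines.join('\n'); }
  };
}

// In the page:
// var collector = makeLogCollector();
// webphone_api.onLog(collector.handler);
// ...later: send collector.dump() to support
```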
When contacting Mizutech support with any issue, please always attach the detailed logs: just send the output of the browser console (or you can find the same from the softphone skin help menu if you are using the softphone.html).
On Firefox and Chrome you can access the logs with the Ctrl+Shift+J shortcut (or Cmd+Shift+J on a Mac). On Edge and Internet Explorer the shortcut key is F12.
WebRTC engine detailed logs
If the webphone is using the WebRTC engine then the browser console output will contain the most important logs.
If you are using the softphone skin, then it is better to check the logs from the skin help menu, because the number of lines is limited in the browser console.
If you have voice issues (no voice, one-way voice, delays) then you should get a detailed log. With Chrome this can be done by launching it like:
"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --enable-logging --v=4 --vmodule=*libjingle/source/talk/*=4 --vmodule=*media/audio/*=4
Then you can find the logs at: C:\Users\USER\AppData\Local\Google\Chrome\User Data\chrome_debug.log
Java engine detailed logs
If the webphone is using the Java engine, then a log window will appear if the “loglevel” is set to “5” and the “canopenlogview” to “true”.
Grab the logs also from this window (Ctrl+A, Ctrl+C, Ctrl+V) or from the Java console.
NS engine detailed logs
If the webphone library is using the NS engine on Windows, then some more detailed logs can be obtained from
C:\Program Files (x86)\WebPhone_Service\WebPhone_Servicelog.dat
and C:\Program Files (x86)\WebPhone_Service\content\native\webphonelog.dat.
(C:\Program Files (x86)\WebPhone_Service is the default data directory which might be different on your PC.
It might be located in the C:\Users\USER\AppData\Roaming\WebPhone_Service directory if the account doesn’t have write access to Program Files).
If there is no *log.dat file, just send the “wphoneout.dat” file or all the *.dat files if you are not sure (from both the app directory and from /content/native folder).
ERROR and WARNING messages in the log
If you set the loglevel higher than 1, you will receive messages that are useful only for debugging.
Most ERROR and WARNING messages cannot be considered faults in this case.
Some of them will appear even under normal circumstances, and you do not need to pay special attention to these messages.
If there is any issue affecting normal usage, please send the detailed logs to Mizutech support ([email protected]) in a text file attachment.
Why do I see RTP warnings in my server log?
The webphone will send a few (maximum 10) short UDP packets (\r\n) to open the media path (and also the NAT, if any).
For this reason you might see the following or similar Asterisk log entries:
“WARNING[8860]: res_rtp_asterisk.c:2019 ast_rtp_read: RTP Read too short” or “Unknown RTP Version 1”.
These packets are simply dropped by Asterisk, which is the expected behavior. This is not a webphone or Asterisk error and will not have any negative impact on the calls. You can safely ignore this.
You can turn this off with the “natopenpackets” parameter (set it to 0). You might also set the “keepaliveival” parameter to 0 (all of these might have an impact on the webphone NAT traversal capability).
How to find which engine was tried?
To find all engine-related log entries, such as which engines are supported and the selected/recommended engine, just search for "engine".
Also, before every engine start, all the engine priorities are logged; search for: "enginepriority"
How to find which engine was finally selected?
To find out which engine was started, search for: "start engine:"
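Assuming the console output has been saved to a plain text file, these searches can be done with grep. The file name webphone.log below is just a placeholder, and the two sample lines are fabricated so the commands can be tried as-is:

```shell
# Fabricated two-line sample standing in for a real exported log.
printf 'INFO enginepriority: webrtc=1, java=2\nINFO start engine: webrtc\n' > webphone.log

grep -i "engine" webphone.log          # all engine-related entries
grep -i "enginepriority" webphone.log  # priorities logged before each engine start
grep -i "start engine:" webphone.log   # the engine that was actually started
```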
If the WebRTC engine is selected, how do you find the websocket URL, SIP server and ICE settings?
Search for: "Webrtc connection details:". There you will find all the above details.
When sending logs to Mizutech support, please attach them as text files (don’t insert them in the email body).
Download:
Contact [email protected] | https://documentation.help/Mizu-WebPhone/start.htm | CC-MAIN-2020-40 | refinedweb | 39,370 | 55.88 |
This () is a URI defined in the Web Services Policy 1.5 - Framework and Web Services Policy 1.5 - Attachment 2006-11-17 specifications.
This document describes the Web Services Policy 1.5 namespace. An RDDL version of this document is available.
The following URI always points to the latest schema (including errata) for the Web Services Policy 1.5 namespace. The resource at this location may change as new errata are incorporated.
The following URI points to the schema for the Web Services Policy 1.5 namespace corresponding to the 2006-11-17 specifications.
Comments are welcome on the public-ws-policy-comments@w3.org mailing list (public archive).
AWS Lambda Size: PIL+TF+Keras+Numpy?
At Wehkamp we’ve been using machine learning for a while now. We’re training models in Databricks (Spark) and Keras. This produces a Keras file that we use to make the actual predictions. Training is one thing, but getting them to production is quite another!
I teamed up with Jesse Bouwman, one of our data scientists, to see if we could get 2 of our image classifier models working on an AWS Lambda. The main problem we’ve faced was size: our program was way too big to actually fit into a lambda. This blogs shows how we’ve dealt with that problem.
Setup
Our Digital Asset Management (DAM) service sends assets to an S3 bucket. Upon upload we would like to classify the images. The upload will automatically trigger the lambda with the machine learning models. We’ll send the images through the classifiers and save the results as S3 metadata on the objects.
Because we use Databricks we’re familiar with Python, so we decided to create the lambda in Python as well.
Lambda: the sky is the limit?
Well… no. There are some serious size limitations in AWS Lambda:
- The deployment package size has a hard limit of 262144000 bytes, that’s 262 MB. So the unzipped size of your package — including the layers — cannot be greater than this number.
- The temp storage limit is 512 MB.
- The memory limit is 3008 MB.
Our program has the following dependencies:
tensorflow==1.8.0
Keras==2.2.4
pillow==5.4.1
numpy==1.16.3
colorgram.py==1.2.0
webcolors==1.8.1
boto3==1.9.137
These Python packages should be shipped with our Lambda. When we install them into a single directory we end up with 415 MB. The models we’re using are Keras H5 models that are both 159 MB. When we round the size of our actual code to a single MB, we come to the following conclusion:
program + packages + models = 1 MB + 415 MB + 318 MB = 734 MB = way too much for an AWS Lambda!
AWS Layer for TF+Keras+PIL
We’re not the first people that have problems with the size limitations of AWS Lambda. Anton Paquin has been experimenting with a Lambda Layer that holds TensorFlow, Keras and PIL and is under the 250 MB limit!
The layer we’ll be using is arn:aws:lambda:eu-west-1:347034527139:layer:tf_keras_pillow:1 and is only 230 MB in size. It uses TensorFlow 1.8.0 because this currently is the latest version that is small enough for a Lambda (version 1.12 is 282 MB).
This means we need to ship less packages. The layer also includes Boto3 for S3 communication, so we don’t have to load it.
Side loading packages
We still need to ship the following packages:
numpy==1.16.3
colorgram.py==1.2.0
webcolors==1.8.1
One of the packages is also dependent on Pillow, but because of the layer we don’t have to ship it. If we calculate the size of the packages we see that we only need to ship 81 MB! But how are we going to do this?
Package ’em up
We are — for lack of a better term — going to side load our packages. We’re going to zip them up (to save space) and deploy them when the lambda is started.
First we’ll need to package the dependencies up in a new zip file. Let’s create a new
requirements-lambda.txt with the packages we need to ship and let's run this script:
#!/bin/bash
name=$(basename -s .git `git config --get remote.origin.url`)
if [ -d "deploy" ]; then rm -Rf deploy; fi
mkdir deploy
pip install -r requirements-lambda.txt -t deploy/requirements-lambda/
cd deploy/requirements-lambda
rm -r PIL
rm -r Pillow*
zip -9 -r ../$name-requirements.zip .
cd ..
rm -r requirements-lambda
The zip is only 15.7 MB, which means it fits in our Lambda. So we can actually ship it with our Lambda. (Is your zip bigger? No worries, just read on).
Un(z/sh)ip it
We’ll ship it with the Lambda package in the root. When the Lambda is started, we’ll need to unzip it. Let’s create a new
setup.py that will unpack the dependencies and add them to the program:
import os
import sys
import zipfile

pkgdir = '/tmp/requirements'
zip_requirements = 'lambda-requirements.zip'

if os.environ.get("AWS_EXECUTION_ENV") is not None:
    if not os.path.exists(pkgdir):
        root = os.environ.get('LAMBDA_TASK_ROOT', os.getcwd())
        zip_requirements = os.path.join(root, zip_requirements)
        zipfile.ZipFile(zip_requirements, 'r').extractall(pkgdir)
    sys.path.append(pkgdir)
In your handler, just use import setup as the first line and the unzipped packages are used.
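The side-loading trick can be exercised locally with a toy module. Everything below (the module name toydep and the temp directories) is fabricated for illustration and is not part of the article’s actual package:

```python
import os
import sys
import tempfile
import zipfile

# Build a toy "requirements" zip containing a single module.
workdir = tempfile.mkdtemp()
zip_path = os.path.join(workdir, "requirements.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("toydep.py", "def answer():\n    return 42\n")

# Mimic setup.py: extract once, then extend sys.path.
pkgdir = os.path.join(workdir, "requirements")
if not os.path.exists(pkgdir):
    zipfile.ZipFile(zip_path, "r").extractall(pkgdir)
sys.path.append(pkgdir)

import toydep  # resolved from the extracted directory
print(toydep.answer())  # -> 42
```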
Too big to ship with the package?
What if the requirements package is bigger than 20 MB? Or what if you want to be able to edit the package in the online editor? (Then the uploaded package (without the layers) should be under 3 MB).
We could use S3 to ship our dependencies! Use the same package routine, but in your setup.py, use:
import boto3
import os
import sys
import zipfile

REQUIREMENTS_BUCKET_NAME = ''
REQUIREMENTS_KEY = ''

pkgdir = '/tmp/requirements'
zip_requirements = '/tmp/lambda-requirements.zip'

if os.environ.get("AWS_EXECUTION_ENV") is not None:
    if not os.path.exists(pkgdir):
        s3 = boto3.resource('s3')
        bucket = s3.Bucket(REQUIREMENTS_BUCKET_NAME)
        bucket.download_file(REQUIREMENTS_KEY, zip_requirements)
        zipfile.ZipFile(zip_requirements, 'r').extractall(pkgdir)
        os.remove(zip_requirements)
    sys.path.append(pkgdir)
Lazy-loading the models from S3
Our models were already “living” in S3. The last step of our Databricks training script is to send the Keras models to an S3 bucket. We cannot ship the models with the package anyway, as they are way too big. And it kind of makes sense to see the models as data.
Let’s use lazy loading to load the models into the lambda:
import boto3
import keras
import os

MODEL_BUCKET = ''

s3 = boto3.resource('s3')
cache = {}

def get_model(key):
    if key in cache:
        return cache[key]
    local_path = os.path.join('/tmp', key)
    # download from S3 if not cached on disk yet
    if not os.path.isfile(local_path):
        bucket = s3.Bucket(MODEL_BUCKET)
        bucket.download_file(key, local_path)
    cache[key] = keras.models.load_model(local_path)
    return cache[key]
In the actual solution we’re using a JSON config file to load the models, but the idea is the same.
So…
The size of the lambda is limited, but we can work around it. Our lambda layout looks a little like this:
There is not much space left for another model. A better idea might be to use a single model per lambda. In that case we should configure our CI/CD to redeploy the lambda for each model — but that’s for another time.
It is a pity that we have to jump through hoops to load a bigger function. I really hope AWS will allow us to at least use bigger packages.
Originally published at on April 28, 2019. | https://medium.com/wehkamp-techblog/aws-lambda-size-pil-tf-keras-numpy-f2b18de49f8c?source=collection_home---4------4----------------------- | CC-MAIN-2019-35 | refinedweb | 1,177 | 67.96 |
.
It is also possible to assign an identification string to an error.
If an error has such an ID the user can catch this error
as will be described in the next section..
Display an error message and stop m-file execution; no further commands are run. This is useful for aborting from functions or scripts.
If the error message does not end with a newline character, Octave prints a traceback so you can find the exact location of the error:
f ()
error: nargin != 1
error: called from f

## -*- texinfo -*-
## @deftypefn {} {} f (@var{arg1})
## Function help text goes here…
## @end deftypefn
function f (arg1)
  if (nargin == 0)
    print_usage ();
  endif
endfunction

When it is called with no input arguments it produces the following error:

f ()
error: Invalid call to f.  Correct usage…
Scala Currying Function – Example & Partially Applied Function
1. Objective
Today, we will learn about Scala currying functions. Moreover, we will discuss advantages of currying in Scala Programming Language and how to call a scala currying function. Along with this, we will study Scala Currying vs partially applied functions. In addition, we will look at an example of Scala Currying Function.
So, let’s see the Scala Currying Function Tutorial.
2. What is Scala Currying Function?
Through the Scala curry function, we can split a parameter list with multiple parameters into a chain of functions, each with one parameter. This means we define them with more than one parameter list.
A Syntax of Scala Currying Function:
We use the following syntax to carry out currying:
def multiply(a:Int)(b:Int)=a*b
Another way we can do this is:
def multiply(a:Int)=(b:Int)=>a*b
3. Calling a Scala Currying Function
To call a curried Scala function, we pass the parameters in multiple lists:
multiply(3)(4)
4. Partially Applied Functions
An important concept to discuss here is partially applied functions. When we apply a function to some of its arguments, we have applied it partially. This returns a partially-applied function that we can use later. Let’s take an example.
scala> def multiply(a:Int)(b:Int)(c:Int)=a*b*c multiply: (a: Int)(b: Int)(c: Int)Int scala> var mul=multiply(2)(3)(_) mul: Int => Int = $$Lambda$1122/1609544540@4f92ded0
Here, the underscore is a placeholder for a missing value.
scala> mul(4) res11: Int = 24
Well, this works too:
scala> var mul=multiply(2)(3)_ mul: Int => Int = $$Lambda$1173/256443308@207ceea4 scala> mul(4) res12: Int = 24
5. Example of Scala Currying Function
Example – 1
Let’s begin with Scala Currying Function example.
scala> class Add{ | def sum(a:Int)(b:Int)={ | a+b} | } defined class Add scala> var a=new Add() a: Add = Add@53cba89f scala> a.sum(3)(4) res4: Int = 7
Example – 2
Remember the other piece of syntax we looked at ? Let’s try defining a function that way, but with three arguments.
scala> class Concatenate{ | def strcat(s1:String)(s2:String)=(s3:String)=>s1+s2+s3 | } defined class Concatenate scala> var c=new Concatenate() c: Concatenate = Concatenate@5d55eb7a scala> c.strcat("Hello")("World")("How are you?") res7: String = HelloWorldHow are you?
6. Advantages of Currying Function in Scala
Here are some advantages of currying in Scala:
- One benefit is that Scala currying makes creating anonymous functions easier.
- Scala Currying also makes it easier to pass around a function as a first-class object. You can keep applying parameters when you find them.
So, this was all about Scala Currying Function. Hope you like our explanation.
7. Conclusion
Hence, using the concept of partially applied functions, we use curried functions in Scala. This lets us split a list of multiple parameters into a chain of functions. Drop your queries in the comments.
Using secure string
Overview
The secure string class lives within the
System.Security namespace and has been around since .NET Framework 2.0 was released.
MSDN description of SecureString:
Represents text that should be kept confidential. The text is encrypted for privacy when being used, and deleted from computer memory when no longer needed.
Is It Already Used?
If we take a quick look within the .NET framework, we will see that it’s used in a fair few places.
Start-up ILSpy and look up the SecureString class using the search functionality and then check what the SecureString is ‘Instantiated By’.
The below 2 classes came up within ILSpy search with the assemblies that I had already pre-loaded:
System.Net.NetworkCredential
System.Windows.Controls.PasswordBox
If you’ve ever worked with the NetworkCredential object you might have seen that there are 2 properties on it, one named Password of type string and another named SecurePassword of type SecureString.
The PasswordBox object is similar, having a Password property of type string and a SecurePassword property of type SecureString.
If we dig a little deeper in the code for these objects, you’ll see that all the internals for storing the passwords are actually SecureStrings; the property exposing the string version of the password is converting it to a string from its secure form.
This is by design, so that consumers of the NetworkCredential and PasswordBox objects don’t really have to concern themselves with knowing about secure strings if they don’t have to, while allowing consumers that are concerned about the security of their system to take full control over how they expose the secure text.
string vs SecureString
- strings are not encrypted; SecureStrings are encrypted using the user, logon session and process.
- strings are not mutable; every time you alter a string you get a new one, and the old one is left in memory.
- Since strings are not mutable, we can’t clean the memory (zero out all the memory addresses).
- strings are not pinned (stored on the managed heap), so the garbage collector could move them around resulting in copies within memory.
- SecureStrings can be marked as read-only and forced to be disposed (using statements).
Should I Use SecureString?
If you are working with confidential data such as credit cards, passwords, etc… You should be using SecureString as much as possible when passing it between in methods. Even though it’s not going to be possible to cover all situations you should try to minimize the overall attack surface on your application.
Also try to look for other components that you are using that may expose SecureStrings which then you can continue using in your own stack.
Overkill.
I’ve read a few blog posts going over the top. Maybe your string does come in as a normal string within your application but as soon as you get it in as a normal string just convert it and continue with your normal daily practices. I’ve seen lots of people explaining how you can clean out the incoming string such as the below but unless you desperately need to I would avoid it as you’ll end up missing the slightest thing and end up with some nice memory leaks.
var myString = "My String Text";
var handle = GCHandle.Alloc(myString, GCHandleType.Pinned);
unsafe
{
    // Zero out the string...
    var pMyString = (char*)handle.AddrOfPinnedObject();
    for (int index = 0; index < myString.Length; index++)
    {
        pMyString[index] = char.MinValue;
    }
}
handle.Free();
// myString == "\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
Console.WriteLine(myString);
Helper Classes
While digging around within ILSpy I stumbled across SecureStringHelper, but its scope is set to internal, which is a shame as I could imagine it would come in useful with external code too.
[SuppressUnmanagedCodeSecurity]
internal static class SecureStringHelper
{
    internal static string CreateString(SecureString secureString)
    {
        IntPtr intPtr = IntPtr.Zero;
        if (secureString == null || secureString.Length == 0)
        {
            return string.Empty;
        }
        string result;
        try
        {
            intPtr = Marshal.SecureStringToBSTR(secureString);
            result = Marshal.PtrToStringBSTR(intPtr);
        }
        finally
        {
            if (intPtr != IntPtr.Zero)
            {
                Marshal.ZeroFreeBSTR(intPtr);
            }
        }
        return result;
    }

    internal unsafe static SecureString CreateSecureString(string plainString)
    {
        if (plainString == null || plainString.Length == 0)
        {
            return new SecureString();
        }
        SecureString result;
        fixed (char* value = plainString)
        {
            result = new SecureString(value, plainString.Length);
        }
        return result;
    }
}
I’m guessing for the time being a nice copy and paste job will sort us out.
Conclusion
Using SecureString is well worth it, but at the same time I wouldn’t go overboard with it. Try keeping the sensitive data as inaccessible as possible when it’s not being used, and be able to erase your records of it when it is no longer needed. Keep in mind that you are trying to reduce the attack surface, rather than eliminate it.
I like to try to sum things up using code examples so below puts this in to perspective.
static void Main(string[] args)
{
    // Simulate receiving the password in non-secure form.
    Console.WriteLine("Enter Password");
    var password = Console.ReadLine();

    // Our internals take a SecureString.
    using (var securePassword = SecureStringHelper.CreateSecureString(password))
    {
        CheckPassword(securePassword);

        // Simulate passing the password on in non-secure form.
        Console.WriteLine("Password:");
        Console.WriteLine(SecureStringHelper.CreateString(securePassword));
    }
}
📅 2010-Nov-05 ⬩ ✍️ Ashwin Nanjappa ⬩ 📚 Archive
In C++, the new expression is used to create an object on the heap. For example:
#include <iostream>
using namespace std;

class Foo
{
public:
    Foo() { cout << "Constructor"; }
};

int main()
{
    Foo* f = new Foo(); // new
    return 0;
}
What actually happens when a new expression
new Foo() is executed is:
1. The default operator new function is called. This function merely allocates enough memory to hold an object of Foo. It returns a void* pointer to this memory.
2. The constructor of Foo is called to initialize the object memory obtained from the above step.
The confusing part here is the difference between the new expression and the operator new function. The new expression is a part of the language, it cannot be modified. It behaves as described above.
The operator new has a default implementation in the standard library. But, it can also be overloaded for a class. For example:
#include <iostream>
#include <cstdlib>
using namespace std;

class Foo
{
public:
    Foo() { cout << "Foo constructor" << endl; }

    void* operator new( size_t i )
    {
        cout << "operator new" << endl;
        return malloc( i ); // Allocate raw memory; i is already the size in bytes
    }
};

int main()
{
    Foo* f = new Foo();
    return 0;
}
Executing the new expression new Foo() first results in a call to Foo::operator new() and consequently to the Foo constructor. The operator new function has a fixed function signature. The first parameter is always a size_t and it returns a void* pointer to the allocated memory. Overloading the operator new function is how a custom allocator is provided for a class.
in reply to
Re^6: I dislike object-oriented programming in general
in thread I dislike object-oriented programming in general
Relax man, don't take this all so personally, we are just talking 'bout Perl here.
The point of all this waste of time is to say that I don't like the particular mechanism of modular decomposition offered by (pure) object-oriented programming, regardless of how the particular implementation of object-oriented programming works
My point is that many different languages provide different approaches to OO, and therefore provide a different form of "OO-style modular decomposition". I find it hard to believe that you have programmed enough (non-trivial) applications in all these different languages to really make such a sweeping statement about OO programming in general. I know that I have not done so, but I am willing to leave open the possibility that what I don't like about $language{X} might be "fixed" by some feature of $language{Y}. Or that through some creative technique I might be able to implement said feature in $language{X} often times by (ab)using the language in ways it was never intended to be (ab)used.
For instance, when I have programmed in Java, I have really really missed closures, which I use heavily in my Perl OO programming. In the end I just created an interface and wrote small inline classes to mimic them. It was clunky, but worked.
When I programmed in Javascript, I really missed some kind of namespacing mechanism, thankfully someone came up with a cool namespace hack using functions and exploiting the prototype based OO system.
I learned to hate Perl's clunky OO system over the years, but I loved where Perl 6 was going. So I wrote Moose, which bootstraps an entire new OO system into Perl's existing one.
In all these three examples, $language{X} was missing a particular "gob", but with a little creative hacking that "gob" (or something fairly close to it) was added to $language{X}.
Right, I suppose I should have written four or five of these meditations to enforce equality between paradigms.
Hopefully you don't hate all programming paradigms, cause if so, I think you picked the wrong career/major.
But since you asked nicely, I recommend pure functional programming with monads to encapsulate the particular cases where you need explicit state and sequential execution. Haskell is one where this happens. You can read more propaganda from their wiki.
Yeah, Haskell is nice, but IMO monads tend to be an overly complex abstraction of what are usually very simple tasks/requirements. The exception being the Maybe monad, which I always thought made Haskell code more readable.
It is interesting that you say monads, because while there are many "Monads in $my_favorite_language" blog posts out there, there are few practical uses of the technique outside of Haskell. Part of this is due (IMO of course) to the fact that really effective use of monads relies not only on a strong type system, but also on polymorphism (i.e. overloading of >>= operator) such as is provided by Haskell's type classes. Simply taking a look at any monad implementation in OCaml will illustrate this, since OCaml does not have this level of polymorphism (you can hack around it somewhat using Functors, but it gets really ugly). So really your solution to OO is heavily tied to a particular programming language and other features in that language that truly make it a useful technique. Just as writing small inline classes to mimic closures in Java sucked, I would not want to have to use some other awkward technique in $language{X} to have monads.
My whole point is that you can't be implementation agnostic when discussing the merits of a particular paradigm, especially in a multi-paradigm language like Perl. As many people around here are fond of saying ...
TIMTOWTDI
Right, I suppose I should have written four or five of these meditations to enforce equality between paradigms.
Hopefully you don't hate all programming paradigms, cause if so, I think you picked the wrong career/major.
No, I hate them all, even those that have not been invented yet. And they all hate me.
For many projects, I have one common problem: I run out of I/O pins on my microcontroller :-(. Luckily, I’m not alone, and the industry has created solutions for this kind of problem. One is to use a shift register such as the 74HCT595/SN74HC595, which gives me 8 extra output pins. All I need to spend are 3 GPIO pins. Not a bad deal: I spend 3 pins and get 8 (or a multiple of it) in return :-).
So why do I say this in this Arduino Motor/Stepper/Servo Shield tutorial? I asked in this earlier post with a poll for the next topic (relay, motor or command line interface). Right now votes are mostly for relay. But before I can do relay (or DC motor), I need to first cover the 74HCT595. So here we go, to have you ready for the next tutorial 🙂
74HC595
The 74HC595 comes in different packages and might have different pin names (depending on the vendor), but they are all similar to the SN74HC595 from Texas Instrument:
Basically, the device has a serial input pin, two clock pins (one to shift the serial data, and one to latch the data to the output pins, plus 8 output pins):
- SER: Serial input pin. Using this pin, data gets shifted into the device. Sometimes this pin is named SD (Serial Data).
- SRCLK: Serial clock, to shift in the data from the SER pin. Sometimes this pin is named SH_CP (Shift Clock Pulse)
- RCLK: Clock to store or latch the shift register content in the device. This pin is sometimes named ST_CP (Store Clock Pulse).
- QA..QF: 8 output pins of the device. Named as well Q1 to Q7.
- QH’: Daisy chain pin. Using this pin, multiple 74HC595 devices can be chained. Sometimes named Q7′.
Chaining 74HC595
I can chain multiple 74HC595, and then I get 8, 16, 24, etc output pins. An excellent tutorial how to use it to drive 16 LEDs (or more) can be found in this article. In this article I’m using just one device, but it is really easy to chain multiple 74HC595.
74HC595 on the Arduino Motor/Stepper/Servo Shield
Such a 74HC595 is used on the Arduino Motor/Stepper/Servo Shield introduced in this post. It is that device in the middle of the shield:
The Eagle Schematics and layout is available on GitHub here.
The 74HCT595N on the Arduino Shield is used to drive 8 motor bridge pins (M1A, M1B, M2A, M2B, M3A, M3B, M4A and M4B). It uses the DIR_EN pin to enable the device. The Arduino Motor shield schematic is using ‘DIR’ in the signal names, as these signals are used to change the motor direction. More about this in the next tutorials.
74HCT595 Shifting
To illustrate the shifting of the device, I have connected the device to a logic analyzer:
Below I’m shifting in the value 0x03 on the data/DS pin, with the LSB (Least Significant Bit) first. The data gets shifted into the device on each raising edge of the clock signal. At the raising edge of the latch signal, the data shows up on the output pins D0-D7, where D7 has the least significant bit:
If I have the devices chained, then the D7 bit would be shifted into the next device through the ‘chain’ pin.
FRDM-KL25Z Connections
The shift register is connected as below to the FRDM-KL25Z board, as defined by the pin mappings of the shield:
- DIR_SER (Serial Input Pin)/DS => Arduino Header D8 => KL25Z pin 33 => PTA13/TPM1_CH1
- DIR_CLK (Shift Clock)/SHCP => Arduino Header D4 => KL25Z pin 30 => TSI0_CH5/PTA4/I2C1_SDA/TPM0_CH1/NMI_b
- DIR_LATCH (Latch clock)/STCP: => Arduino Header D12 => KL25Z pin 76 => PTD3/SPI0_MISO/UART2_TX/TPM0_CH3/SPI0_MOSI
- DIR_EN (Device Enable)/OE => Arduino Header D7 => KL25Z pin 66 => CMP0_IN3/PTC9/I2C0_SDA/TPM0_CH5
Shift and Latch
The following source demonstrates how to shift a byte into the shift register:
- DS1 is the data pin
- SHCP1 is the shift clock pin
void HC595_ShiftByte(byte val) {
  uint8_t i;

  /* precondition: latch pin, data pin and clock pin are all low */
  for(i=0; i<8; i++) { /* shift all the 8 bits */
    /* put data bit */
#if HC595_SHIFT_MSB_FIRST
    if (val&0x80) { /* MSB first */
#else
    if (val&1) { /* LSB first */
#endif
      DS1_SetVal();
    } else {
      DS1_ClrVal();
    }
    SHCP1_SetVal(); /* CLK high: data gets transferred into the shift register */
    DS1_ClrVal();   /* data line low */
    SHCP1_ClrVal(); /* CLK back to low */
#if HC595_SHIFT_MSB_FIRST
    val <<= 1; /* next bit */
#else
    val >>= 1; /* next bit */
#endif
  }
}
The macro HC595_SHIFT_MSB_FIRST is used to either shift in the most significant bit first or the least significant one first.
The method ShiftByte() only shifts the 8 bits and does not latch them to the output pins, so I can call ShiftByte() several times if I have chained shift registers.
To latch the bits to the output pins, the Latch() method uses the STCP (Store Clock Pulse/Latch) pin:
void HC595_Latch(void) {
  /* send a latch pulse to show the data on the output pins */
  STCP1_SetVal(); /* set latch to high */
  STCP1_ClrVal(); /* set latch to low */
}
For the above pins (DS1, SHCP1 and STCP1) I can use normal GPIO pins in output mode. Pretty easy 🙂
Processor Expert Component
To make usage of a 74HCT595 really easy, I have created a Processor Expert component for it. It is available on GitHub with instructions here how to download and install the components.
The component has following properties:
It specifies the interfaces for the mandatory Latch, Data and Clock pins. The OE (Output Enable) pin is optional. Depending on the type of HC595 there might be different delays needed for clock and latch, so the component offers to specify a delay in nanoseconds.
The component offers the following methods:
It has Init() and Deinit() methods for driver initialization and de-initialization. ShiftByte() and Latch() are the methods discussed above. Additionally it features two methods:
- ReadByte() returns the latched value. For this it uses a cached (local) variable.
- WriteByte() does the shifting and latching for a single byte in a single method.
Summary
Shift registers are very useful to expand the number of pins of a microcontroller: with a few pins it is possible to get many more. Writing a driver for it is not difficult, and I hope the 74HC595 Processor Expert component makes things even easier. As always: the sources are available on GitHub.
List of Tutorials
- Tutorial: Arduino Motor/Stepper/Servo Shield – Part 1: Servos
- Tutorial: Arduino Motor/Stepper/Servo Shield – Part 2: Timed Servo Moves
- Tutorial: Arduino Motor/Stepper/Servo Shield – Part 3: 74HCT595 Shift Register
Happy Shifting 🙂
Pingback: Tutorial: Arduino Motor/Stepper/Servo Shield – Part 2: Timed Servo Moves | MCU on Eclipse
Pingback: Tutorial: Arduino Motor/Stepper/Servo Shield – Part 1: Servos | MCU on Eclipse
This worked, have you ever done microstepping?
You mean using a stepper motor with that shield? No, that’s on my growing list of things I want to do, but never had time for it 😦
Ok, sure I also have a lot to write but no time as I am experimenting. I am using stepper motors from cdroms to build X-Y platform for application in some kind of imaging that requires shifting. DO you know which connection on the stepper motor shield is DIR for changing direction. I cant find anything about that.
A few links:
I am using in my project the same Arduino Motor/Stepper/Servo Shield and I want know if it is possible to change the pin 12
This part of the code in .h
// Arduino pin names for interface to 74HCT595 latch
#define MOTORLATCH 12 // I want change this pin 12 for 0, 1 or 2.
#define MOTORCLK 4
#define MOTORENABLE 7
#define MOTORDATA 8
I want to change the pin 12 for another pin. I want to do it, because in my project I need to use this pin 12. I have free the pins 0, 1 e 2 of the my Arduino.
Help me please!
Hello,
you need to check the schematics of your board if this is possible at all.
Erich | https://mcuoneclipse.com/2013/06/17/tutorial-arduino-motorstepperservo-shield-part-3-74hct595-shift-register/ | CC-MAIN-2017-34 | refinedweb | 1,347 | 67.28 |
In this article, I’ll walk you through how to convert an image to a pencil sketch with Python in less than 20 lines of code. Python is a general-purpose programming language and with the growing popularity of Python, it can be used in any task today.
Image to Pencil Sketch with Python
Before we write any code, let’s go over some of the steps that will be used and try to understand them a bit. First, find an image that you want to convert to a pencil sketch with Python. I will be using the image of a puppy as you can see below.
Also, Read – Machine Learning Full Course for free.
Next, we need to read the image in RBG format and then convert it to a grayscale image. This will turn an image into a classic black and white photo.
Then the next thing to do is invert the grayscale image also called negative image, this will be our inverted grayscale image. Inversion can be used to enhance details.
Then we can finally create the pencil sketch by mixing the grayscale image with the inverted blurry image. This can be done by dividing the grayscale image by the inverted blurry image. Since images are just arrays, we can easily do this programmatically using the divide function from the cv2 library in Python.
Let’s Code
The only library we need for converting an image into a pencil sketch with Python is an OpenCV library in Python. It can be used by using the pip command; pip install opencv-python. But it is not imported by the same name. Let’s import it to get started with the task:
Code language: JavaScript (javascript)Code language: JavaScript (javascript)
import cv2
I will not display the image at every step, if you want to display the image at every step to see the changes in the image then you need to use two commands; cv2.imshow(“Title You want to give”, Image) and then simply write cv2.waitKey(0). This will display the image.
Now the next thing to do is to read the image:
Code language: JavaScript (javascript)Code language: JavaScript (javascript)
image = cv2.imread("dog.jpg") cv2.imshow("Dog", image) cv2.waitKey(0)
Now after reading the image, we will create a new image by converting the original image to greyscale:
Code language: JavaScript (javascript)Code language: JavaScript (javascript)
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) cv2.imshow("New Dog", gray_image) cv2.waitKey(0)
Now the next step is to invert the new grayscale image:
Code language: JavaScript (javascript)Code language: JavaScript (javascript)
inverted_image = 255 - gray_image cv2.imshow("Inverted", inverted_image) cv2.waitKey()
Now the next step in the process is to blur the image by using the Gaussian Function in OpenCV:
Code language: Python (python)Code language: Python (python)
blurred = cv2.GaussianBlur(inverted_image, (21, 21), 0)
Then the final step is to invert the blurred image, then we can easily convert the image into a pencil sketch:
Code language: JavaScript (javascript)Code language: JavaScript (javascript)
inverted_blurred = 255 - blurred pencil_sketch = cv2.divide(gray_image, inverted_blurred, scale=256.0) cv2.imshow("Sketch", pencil_sketch) cv2.waitKey(0)
And finally, if you want to have a look at both the original image and the pencil sketch then you can use the following commands:
Code language: CSS (css)Code language: CSS (css)
cv2.imshow("original image", image) cv2.imshow("pencil sketch", pencil_sketch) cv2.waitKey(0)
So this is how we can convert an image into a pencil sketch with Python. I hope you liked this article on how to convert an image into a pencil sketch with Python. Feel free to ask your valuable questions in the comments section below.
Also, Read – Google Search Algorithm with Python.
2 Comments
I’m not getting the output
Where will I find that image?
You can use cv2.imwrite method to save the output image | https://thecleverprogrammer.com/2020/09/30/pencil-sketch-with-python/ | CC-MAIN-2021-43 | refinedweb | 649 | 72.87 |
User account creation filtered due to spam.
The following test program crashes even though I correctly listed %rsp as clobbered:
--
int main() {
asm volatile ("movq $0, %%rsp" : : : "%rsp");
return 0;
}
--
I would prefer gcc to error out in this case instead of silently ignoring my instruction.
The compiler doesn't analyse asm string.:
--
#include <stdlib.h>
int main() {
int x = rand();
asm volatile ("movq $0, %%rax" : : : "%rax");
return x;
}
$ gcc -Wall -O3 -fomit-frame-pointer -c -o test.o test.c
$ objdump -d -r -M intel test.o
test.o: file format elf64-x86-64
Disassembly of section .text.startup:
0000000000000000 <main>:
0: 48 83 ec 08 sub rsp,0x8
4: e8 00 00 00 00 call 9 <main+0x9>
5: R_X86_64_PC32 rand-0x4
9: 89 c2 mov edx,eax
b: 48 c7 c0 00 00 00 00 mov rax,0x0
12: 89 d0 mov eax,edx
14: 48 83 c4 08 add rsp,0x8
18: c3 ret
--
Notice that it saved eax to edx before my asm and restored it afterwards. This works for every register except %rsp, which is silently ignored if you try to list it in the clobber list. This is a bug.
(In reply to comment #2)
>:
%rsp is considered a "fixed" register, used for fixed purposes all throughout
the compiled code and are therefore not available for general allocation.
So, save %rsp at the beginning of your asm code and restore it at the end.
I understand that GCC may not be able to save/restore %rsp like it does other registers. But if that's the case, GCC should throw an error if the user puts %rsp in the clobber list, instead of silently ignoring it. Otherwise how is the user supposed to know that %rsp will not be saved except through trial and error?
The examples clearly show the problem and it bites me here. Please change the status to confirmed. | https://gcc.gnu.org/bugzilla/show_bug.cgi?id=52813 | CC-MAIN-2017-17 | refinedweb | 319 | 77.87 |
void calculate_wage (Record& payroll) // i know this isn't right.
void calculate_wage (Record& (payroll)[5]). In this case you have to include the size of the array. And something that would be better is:
void calculate_wage (Record& (payroll)[MAXSIZE]). Where "MAXSIZE" is defined above main as
constexpr int MAXSIZE{5};. Now any time you need to change the size of the array or any place else in the program all you have to change is the value of "MAXSIZE" in one place.
(Record (&payroll)[5]). I had the & in the wrong place. Sorry it has been a little while since I have done this.
using namespace std;is a bad idea as it WILL get you n trouble some day. Right now it seems easy, but not really. This is worth a read to start with: | http://www.cplusplus.com/forum/beginner/229573/ | CC-MAIN-2018-43 | refinedweb | 136 | 84.88 |
Bruce A. Draper, J. Ross Beveridge, A. P. Willem Böhm
1) Introduction
Although computers keep getting faster and faster, there are always new image processing (IP) applications that need more processing than is available. Examples of current high-demand applications include real-time video
2) Field Programmable Gate Arrays.
FPGAs serve as "glue logic" between off-the-shelf components, and as replacements for ASICs in first generation products. As a result, FPGAs enjoy a multi-billion dollar market as low-cost ASIC replacements, and their economics are fundamentally different from the economics of other parallel architectures. Because of the comparatively small size of the image processing market, most special-purpose image processors have been unable to keep pace with advances in general purpose processors. Consequently, researchers who adopt them are often left with obsolete technology. Increases in FPGA speeds and capacities, on the other hand, have followed or exceeded Moore's law for the last several years, and researchers can continue to expect them to keep pace with general-purpose processors [6].
Figure 1: A conceptual illustration of an FPGA. Every logic block contains one or more LUTs, plus a bit or two of memory. The contents of the LUTs are (re)programmable, as are the grid connections. I/O blocks provide access to external pins, which usually connect to local memories.
The logic blocks can be configured so as to exploit data, pipeline, or I/O parallelism, or all of the above. A Xilinx XCV-2000E, for example, contains 38,400 logic blocks, and can operate at speeds of up to 180MHz. Recently, FPGAs have become so dense and fast that they have evolved into the central processors of powerful reconfigurable computing systems [1]. In computer vision and image processing, FPGAs have already been used to accelerate real-time point tracking [2], stereo [3], color-based object detection [4], and video and image compression [5].
Unfortunately, FPGAs are generally programmed in hardware design languages, such as VHDL. (Synthesizable subsets of) hardware languages force implementers to focus on details such as synchronization and timing. Even new experimental languages such as Celoxica's Handel-C still require the programmer to consider timing information (see related work). Just as important, debugging all too often occurs at the circuit level, given that simulators are too slow to emulate more than a tiny fraction of run time. This excludes the vast majority of software developers who do not know circuit design.
The goal of the Cameron project is to change how reconfigurable systems are programmed from a circuit design paradigm to an algorithmic one. To this end, we have developed a high-level language (called SA-C) for expressing image processing algorithms, and an optimizing compiler that targets FPGAs. Together, these tools allow programmers to quickly write algorithms in a high-level language, compile them, and run them on FPGAs. Our goal is to familiarize applications programmers with the state of the art in compiling high-level programs to FPGAs, and to show how FPGAs implement a wide range of image processing applications. This paper only briefly introduces SA-C and its compiler before presenting experiments comparing SA-C programs compiled to a Xilinx XCV-2000E FPGA to equivalent programs running on an 800MHz Pentium III. Detailed descriptions of the SA-C language and optimizing compiler can be found elsewhere (see [7, 8], or http://www.cs.colostate.edu/~cameron/ for a complete set of documents and publications).
3) SA-C
SA-C is a single-assignment dialect of the C programming language designed to exploit both coarse-grained (loop-level) and fine-grained (instruction-level) parallelism. Roughly speaking, there are three major differences between SA-C and standard C:
1) SA-C adds variable bit-precision data types and fixed point data types. This exploits the ability of FPGAs to form arbitrary precision circuits, and compensates for the high cost of floating point operations on FPGAs by encouraging the use of fixed-point representations.
2) SA-C includes extensions to C that provide data parallel looping mechanisms and true multi-dimensional arrays. These extensions make it easier to express operations over sliding windows or slices of data (e.g. pixels, rows, columns, or sub-images), and also make it easier for the compiler to identify and optimize data access patterns. (The terminology and syntax of array manipulation is borrowed from Fortran-90.)
3) SA-C restricts C by outlawing pointers and recursion, and by restricting variables to be single assignment. This creates a programming model where variables correspond to wires instead of memory addresses or entries on a program stack.
To illustrate the differences between SA-C and traditional C, consider how the Prewitt edge detector is written in SA-C, as shown in Figure 2. The Prewitt edge detector convolves the image with two masks (H and V below), and then computes the square root of the sum of the squares.
int16[:,:] main (uint8 image[:,:]) {
  int16 H[3,3] = {{-1,-1,-1},{0,0,0},{1,1,1}};
  int16 V[3,3] = {{-1,0,1},{-1,0,1},{-1,0,1}};
  int16 M[:,:] =
    for window W[3,3] in image {
      int16 dfdy, int16 dfdx =
        for w in W dot h in H dot v in V
        return (sum(w*h), sum(w*v));
      int16 magnitude = sqrt(dfdy*dfdy + dfdx*dfdx);
    } return (array(magnitude));
} return (M);
Figure 2: SA-C source code for the Prewitt edge detector.
At first glance, one is struck by the data types and the looping mechanisms. Unlike in traditional C, integers and fixed point numbers are not limited to 8, 16, 32 or 64 bits; they may have any precision (e.g. int11), since the compiler can construct circuits with any precision. (Earlier versions of SA-C limited variables to no more than 32 bits, but this limitation has been removed.) For example, "int16" simply represents a 16-bit integer, while "uint8" represents an unsigned 8-bit integer. Also, arrays are true multi-dimensional objects whose size may or may not be known at compile time. The input argument "uint8 image[:,:]" denotes a 2D array of unknown size.
Perhaps the most significant differences are in the looping constructs. "for window W[3,3] in image" creates a loop that executes once for every possible 3x3 window in the image array. Such windows can be any size, although their size must be known at compile time. Perhaps the least C-like element of this program is the interior loop "for w in W dot h in H dot v in V". This creates a single loop that executes once for every pixel in the 3x3 window W. Since H and V are also 3x3 arrays, each loop iteration matches one pixel in W with the corresponding elements of H and V. This "dot product" looping mechanism is particularly handy for convolutions.
In addition to stepping through images, SA-C's looping constructs also allow new arrays to be constructed in their return statements using the array statement. "return (array (magnitude))" makes a new array out of the edge magnitudes calculated at each 3x3 window. The array statement can also combine existing structures, but requires that the structures being combined have the same shape and size. A more thorough description of SA-C can be found in [7] or at the Cameron web site.
4) THE SA-C COMPILER
The SA-C compiler translates high-level SA-C code into host code and FPGA configurations (see Figure 3). The underlying model is that a reconfigurable processor is available to the host for use as a co-processor. The SA-C compiler divides the source program into host code and code destined for the FPGA. Sequential statements outside of any loop are translated into C and compiled using a standard C compiler for the host. Parallel loops, which typically process images, are translated into FPGA configurations. The compiler inserts into the host C program all the code necessary for downloading the FPGA configuration, image data, and program parameters to the reconfigurable processor, and for uploading the results.
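As a behavioral reference for what one of these translated loops computes, the Prewitt program of Figure 2 can be modeled in plain Python. This is our own sketch of the loop's semantics, not compiler output; truncating the square root to an integer stands in for SA-C's int16 result type.

```python
import math

H = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]   # horizontal edge mask, as in Figure 2
V = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]   # vertical edge mask

def prewitt(image):
    """Edge magnitude for every 3x3 window, mirroring the SA-C window loop."""
    rows, cols = len(image), len(image[0])
    out = []
    for r in range(rows - 2):               # one iteration per 3x3 window
        out_row = []
        for c in range(cols - 2):
            # the inner "dot" loop: pair each window pixel with H and V entries
            dfdy = sum(image[r + i][c + j] * H[i][j]
                       for i in range(3) for j in range(3))
            dfdx = sum(image[r + i][c + j] * V[i][j]
                       for i in range(3) for j in range(3))
            out_row.append(int(math.sqrt(dfdy * dfdy + dfdx * dfdx)))
        out.append(out_row)
    return out
```

On a vertical step edge the horizontal response dfdy is zero and the magnitude is driven entirely by dfdx, which is the behavior the circuit version pipelines.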
For simplicity and retargetability, the model of the reconfigurable processor is kept simple. The reconfigurable processor is assumed to have local memories, one or more FPGAs, and a PCI bus connection. The FPGAs are assumed to have I/O blocks and a grid of reprogrammable logic blocks with reprogrammable connections. So far, the SA-C compiler has targeted an Annapolis Microsystems (AMS) WildForce processor with five Xilinx XC-4036 FPGAs, an AMS StarFire with a single Xilinx XVC-1000 FPGA, and an AMS WildStar with three XVC-2000E FPGAs. The results reported in this paper are with the AMS WildStar.
Every loop on the FPGA consists of a generator module, a loop body, and a collector module. The generator module implements the sliding window operation driving a SA-C loop using a two dimensional block of shift registers, one row of registers for each row of the SA-C window. (Iterations over N-dimensional arrays use N-dimensional blocks of shift registers.) The shifts occur on a word basis. For each loop iteration, selectors retrieve the appropriate bitfields from the block of shift registers and send them into the loop body. The current implementation restricts this to fixed sized windows where the window size is known at compile time. All SA-C generators (element, window, and vector) are implemented in this fashion. The loop body is divided into pipelined sections to increase the achievable clock rate. Reads from memory within the loop body are synchronized by an arbitrator, with one arbitrator per memory. Handshaking among the arbitrators and pipeline stages is used to guarantee the validity of the computation. The collector module receives loop body results and, for each loop body result, fills a word buffer, which is written to memory when full.
The SA-C compiler optimizes the loops it maps onto FPGAs. It fully unrolls loops anytime the number of iterations through the loop can be determined at compile-time. Full unrolling of loops is important when generating code for FPGAs, because it spreads the iterations in code space rather than in time. A loop can also be partially unrolled under the control of a pragma. When the result of one loop feeds another, the loops can be fused, avoiding the creation of intermediate data and enlarging the loop body. Array value propagation searches for array references with constant indices, and replaces such references with the values of the array elements. When the value is a compile time constant, this enables constant propagation.
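The effect of the generator's row buffers can be sketched in software: each pixel enters the buffers exactly once, and every position at which the buffers are full yields one window. This is our simplification; the real generator shifts words, not single pixels, and includes handshaking.

```python
def windows_one_read_per_pixel(image, size=3):
    """Stream the image row by row, reading each pixel exactly once.
    The most recent `size` rows are held in buffers (the shift registers);
    every column position where the buffers are full yields one window."""
    buf = []        # the row buffers
    reads = 0       # count of pixels fetched from "memory"
    wins = []
    for row in image:
        buf.append(list(row))          # each pixel enters the buffers once
        reads += len(row)
        if len(buf) > size:
            buf.pop(0)                 # oldest row shifts out
        if len(buf) == size:
            for c in range(len(row) - size + 1):
                wins.append([r[c:c + size] for r in buf])
    return wins, reads
```

A 4x4 image produces four 3x3 windows while every pixel is read from memory only once, instead of once per window row.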
Common sub-expression elimination (CSE) is a well known compiler optimization that eliminates redundancies by looking for identical sub-expressions that compute the same value. Traditional CSE could be called "spatial CSE" since it looks for common sub-expressions within a block of code. The SA-C compiler not only performs spatial CSE, but also temporal CSE, looking for values computed in one loop iteration that were previously computed in another. In such cases, redundant computations are eliminated by holding values in register chains. Bitwidth narrowing exploits user-defined bitwidths of variables to infer the minimal bitwidths of intermediate variables. Some operators, such as division, can be inefficient to implement in logic gates, and are better implemented as lookup tables. A pragma indicates that a function or an array should be compiled as a lookup table.
The optimizations, some controlled by user pragmas, transform the program into an equivalent program with a new set of inner-loops optimized for FPGAs. These inner-loops are compiled to VHDL, producing FPGA configurations. Commercial software is used to synthesize and place-and-route the VHDL. The configurations and their execution frequencies are recorded. At run-time, when a particular inner-loop is to be executed on the reconfigurable processor, the host interface module downloads the configuration and input data via DMA, sets the clock frequency, executes the inner-loop, and retrieves the data from the reconfigurable processor.
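The lookup-table transformation replaces a computed function with a table indexed by the operand, which maps naturally onto FPGA memory. A Python sketch of the idea, using 8-bit integer square root as the tabulated function (our example; the pragma's actual syntax is not shown in this paper excerpt):

```python
import math

# "Compile time": tabulate the function once over the full 8-bit domain.
SQRT_LUT = [math.isqrt(v) for v in range(256)]

def sqrt8(v):
    """Integer square root of an 8-bit value via table lookup.
    On the FPGA the table occupies a small memory instead of logic gates,
    so the operator costs one lookup rather than an iterative circuit."""
    return SQRT_LUT[v & 0xFF]
```

The table is built once; every subsequent evaluation is a single indexed read.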
Figure 3: The SA-C compiler divides the source program into sequential host code and loops to be executed on the FPGA. Host code is translated to C and compiled for the host. Loops are compiled for the FPGA, with loop generators, bodies and data collectors. All code for downloading and uploading data is generated by the compiler and automatically inserted into the host code.
5) IMAGE PROCESSING ON FPGAs
The SA-C language and compiler allow FPGAs to be programmed in the same way as other processors. Programs are written in a high-level language, and can be compiled, debugged, and executed from a local workstation, so long as the workstation has access to a reconfigurable processor. SA-C therefore makes reconfigurable processors accessible to applications programmers with no hardware expertise. The empirical question in this paper is whether image processing tasks written in SA-C and executed on FPGAs are faster than the equivalent applications on conventional general-purpose hardware. The tests presented in Table 1 suggest that in general, the answer is yes. Simple image operators are faster on FPGAs because of their greater I/O bandwidth to local memory, although this speed-up is modest (a factor of ten or less). More complex tasks result in larger speed-ups, up to a factor of 800 in one experiment, by exploiting the parallelism within FPGAs and the strengths of an optimizing compiler.
Our conventional processor is a Pentium III running at 800 MHz with 256KBytes of primary cache. The reconfigurable processor used in our tests is an Annapolis Microsystems WildStar with 3 Xilinx XV-2000E FPGAs and 12 local memories. Each FPGA has 38,400 4x1-bit lookup tables (LUTs) and 38,400 flip-flops. These are organized into 19,200 "slices", each of which contains two LUTs and two flip-flops. Slices are the XV-2000E version of the more generic term "logic block" used in Figure 1. The chips in both processors are of a similar age and were the first of their respective classes fabricated at 0.18 microns.
The images used are 512x512 images of 8-bit pixels. Because all six applications are based on sliding windows where the height of the window is 13 rows or less, the primary cache on the Pentium is large enough to guarantee that the only primary cache misses occur the first time a data element is accessed. Table 1 shows the execution times, and Table 2 shows the chip resources used by the programs in the test suite.
The execution times reported in this section were collected as follows. For the Pentium, we time how long it takes to read the image from the local memory of the host, process it, and write the result back to the local memory of the host. For the FPGA, the image data has already been downloaded to the local memory of the reconfigurable processor. We time how long it takes for the FPGA to read the image from memory, process it, and write the result back to the local memory of the reconfigurable processor. We do not count the time required to transfer data to/from the host to the memory of the reconfigurable processor (this data is given in Section 6 and Table 3). The probing application employs all three FPGAs on the WildStar; the other applications use only a single chip.
Routine       Pentium III   XV-2000E   Ratio
AddS            0.00595      0.00067      8.9
Prewitt         0.15808      0.00196     80.7
Canny           0.13500      0.00606     22.3
Wavelet         0.07708      0.00208     37.1
Dilates (8)     0.06740      0.00311     21.7
Probing        65.0          0.08       812.5
Table 1. Execution times in seconds for routines with 512x512 8-bit input images. The comparison is between an 800MHz Pentium III and an AMS WildStar with Xilinx XV-2000E FPGAs. For these tests, the WildStar runs at 51.7 MHz, compared to 800 MHz for the Pentium.
Routine            LUTs (%)   FFs (%)   Slices (%)
AddS                   9         9          16
Prewitt               18        13          28
Canny                 48        45          87
Wavelet               54        69          99
Dilates (8)           56        56          97
Probing (chip 1)      33        39          65
Probing (chip 2)      36        41          72
Probing (chip 3)      42        49          85
Table 2. FPGA resource use in percentages for the routines shown in Table 1: LUTs (4x1 look-up tables), FFs (flip-flops) and Slices (blocks of 2 LUTs and 2 FFs). A slice is "used" if any of its resources are used.
5.1) Scalar Addition
The simplest program we tested adds a scalar argument to every pixel in an image. For the Pentium, we used the iplAddS routine from the Intel Image Processing Library (IPL, release 2.5). For the WildStar, we wrote the function in SA-C and compiled it to a single FPGA. As shown in Table 1, the WildStar outperforms the Pentium by a factor of 8.5.
Why is the WildStar faster? For this application, the Pentium is limited by memory: since each image pixel is used only once, the program runs at memory speed, not at cache speed, and unfortunately for the Pentium, memory response times have not kept up with processor speeds.
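The FPGA's advantage on this operator comes from operating on several packed pixels per memory word each cycle, as detailed below. A software analogue of one such cycle, adding a scalar to eight packed 8-bit pixels (our model; lanes wrap modulo 256, as unsigned 8-bit hardware lanes would):

```python
def add_scalar_8lanes(word, s):
    """Add scalar s (0..255) to eight 8-bit pixels packed into one 64-bit
    word. On the FPGA the eight lane additions happen in parallel in a
    single clock; carries never cross lane boundaries."""
    out = 0
    for lane in range(8):                  # spatially parallel in hardware
        pix = (word >> (8 * lane)) & 0xFF
        out |= ((pix + s) & 0xFF) << (8 * lane)
    return out
```

One call corresponds to one pipeline stage's worth of work on eight pixels; a 512x512 image needs only 32K such word operations.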
FPGAs, on the other hand, are capable of parallel I/O. The WildStar gives the FPGAs 32-bit I/O channels to each of four local memories, so the FPGA can both read and write 8 8-bit pixels per cycle. Also, the operator's pixel-wise access pattern is easily identified by the SA-C compiler, which is able to optimize the I/O so that no cycles are wasted. On every cycle, 8 pixels are read, 8 additions are performed (on pixels read in the previous cycle) and 8 pixels are written.
This program represents one extreme in the FPGA vs. Pentium comparison. It is a simple, pixel-based operation that fails to fully exploit the FPGA, since only 9% of the lookup tables (and 9% of flip-flops) are used. However, it demonstrates that FPGAs will outperform Pentiums on simple image operators because of their pipelining and parallel I/O capabilities.
5.2) Prewitt & Canny
The Prewitt edge operator shown in Figure 2 is more complex than simple scalar addition. Every 3x3 image window is convolved with horizontal and vertical edge masks, and the edge magnitude at a pixel is the square root of the sum of the squares of the convolution responses. The Prewitt program written in SA-C and compiled for the WildStar computes the edge magnitude response image for a 512x512 8-bit input image in 1.96 milliseconds. In comparison, the equivalent Intel IPL function (iplConvolve2D with two masks, the Prewitt horizontal and vertical edge masks, and IPL_SUMSQROOT as the combination operator) on the Pentium takes 158.08 milliseconds, or approximately 80 times longer (see Table 1).
Why is the Prewitt edge detector faster on an FPGA? One of the reasons, as before, is I/O. As written, the Prewitt operator reads only four pixels per cycle, not eight. Also, the Pentium now gets a performance benefit from its cache, because the input image is small enough for three rows to be kept in primary cache, and therefore pixels do not have to be read from memory three times. The FPGA's I/O advantage remains, although the effect is less.
A naïve implementation of a 3x3 window would slide the window horizontally across the image until it reaches the end of the row, and then drop down one row and repeat the process. As a 3x3 image window slides across an image, each pixel is read 3 times, once for each row in the window (assuming shift registers hold values as it slides horizontally). While the shift registers in the loop generator avoid reading a pixel more than once on any given horizontal sweep, in a naïve implementation every pixel would still be read three times. The SA-C compiler avoids this by partially unrolling the loop and computing eight vertical windows in parallel (see Figure 4). Under partial unrolling, two or more vertical windows are computed in parallel, allowing the passes to skip rows and reducing I/O. This reduces the number of input operations needed to process the image by almost a factor of three, and is an example of a general optimization technique called partial loop unrolling that is used to optimize I/O over N-dimensional arrays.
Figure 4: How partial vertical unrolling optimizes I/O.
Of course, the parallelism of the FPGAs does more than just reduce the number of I/O cycles. We mentioned above that the FPGA computes eight image windows in parallel. It also exploits parallelism within the windows. Convolutions, in general, are ideal for data parallelism: the multiplications can be done in parallel, while the additions are implemented as trees of parallel adders. Pipeline parallelism is equally important, since square root is a complex operation that leads to a long circuit (on an FPGA) or a complex series of instructions
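The read-traffic savings from partial vertical unrolling can be estimated with a back-of-the-envelope model (our approximation; it ignores the partially filled last sweep and boundary effects):

```python
def pixel_reads(height, width, win=3, unroll=1):
    """Approximate pixels read from memory for a sliding win x win window
    when `unroll` vertical window positions are computed per horizontal
    sweep. Shift registers already remove horizontal re-reads, so each
    sweep reads win + unroll - 1 rows, and successive sweeps advance by
    `unroll` rows instead of one."""
    rows_per_sweep = win + unroll - 1
    sweeps = (height - win) // unroll + 1
    return sweeps * rows_per_sweep * width

naive = pixel_reads(512, 512, win=3, unroll=1)     # each pixel read ~3 times
unrolled = pixel_reads(512, 512, win=3, unroll=8)  # 10 rows read per 8-row advance
```

For a 512x512 image and a 3x3 window, unrolling by 8 cuts reads from about 3 per pixel to about 1.25 per pixel, consistent with the "almost a factor of three" reduction described above.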
(on a Pentium). The SA-C compiler lays out the circuit for the complete edge operator including the square root, and then inserts registers to divide it into approximately twenty pipeline stages.
The Canny edge detector is similar to Prewitt, only more complex. It smoothes the image and convolves it with horizontal and vertical edge masks. It then computes edge orientations as well as edge magnitudes, performs non-maximal suppression in the direction of the gradient, and applies high and low thresholds to the result; see [9] pp. 76-80. (A final connected components step was not implemented; we set the high and low thresholds equal to prevent the connected components routine from iterating.)
We implemented the Canny operator in SA-C and executed it on the WildStar. The result was compared to two versions of Canny on the Pentium. The first version was written in VisualC++ (version 6.0, fully optimized), using IPL routines to perform the convolutions. This allowed us to compare compiled SA-C on the WildStar to compiled C on the Pentium; the WildStar was 120 times faster. We then tested the hand-optimized assembly code version of Canny in Intel's OpenCV (version 001). The Pentium's performance improved five fold in comparison to the C++ version, but the FPGA still outperformed the Pentium by a factor of 22. Table 1 shows the comparison with OpenCV.
Why is performance relatively better for Prewitt than for Canny? There are two reasons. First, the SA-C implementation of Prewitt can take advantage of compile-time constants, while the IPL convolution routine must be general enough to perform arbitrary convolutions. In particular, the Prewitt edge masks are composed entirely of ones, zeroes and minus ones, so all of the multiplications in these particular convolutions are optimized away or replaced by negation. Furthermore, the eight windows being processed in parallel contain redundant additions, the extra copies of which are removed by common subexpression elimination. The Canny operator also uses fixed convolution masks, so the OpenCV Canny routine has the same opportunities for constant propagation and common subexpression elimination that SA-C has; here the SA-C compiler merely has the advantage of being a compiler, not a library of subroutines. Second, the Canny operator does not
include a square root operation.
5.3) Wavelet
In a test on the Cohen-Daubechies-Feauveau Wavelet [10], a SA-C implementation of the wavelet was compared to a C implementation provided by Honeywell as part of a reconfigurable computing benchmark [11]. Honeywell's algorithm makes two passes: a 5x1 mask creates two intermediate images from the source image, and a second pass with a 1x5 mask over the intermediate images creates the four final images. The SA-C algorithm, on the other hand, makes a single pass over the source image with a 5x5 mask, writing all four output images in parallel. In this case, the SA-C compiler partially unrolls the 5x5 loop 8 times to minimize redundant read operations. In addition, the first and last columns of the 5x5 mask are the same. (The second and fourth columns are also the same.) This allows for temporal subexpression elimination: the intermediate values computed for the right column are the same as the values computed four cycles earlier for the left column. The WildStar beat the Pentium by a factor of 35. We attribute the speed difference to I/O differences and SA-C's temporal CSE optimization.
5.4) The ARAGTAP Pre-screener
We also compared FPGAs and Pentiums on two military applications. The first is the ARAGTAP pre-screener [12], a morphology-based focus of attention mechanism for finding targets in SAR images. The pre-screener uses six morphological subroutines (downsample, dilate, erode, positive difference, bitwise and, and majority threshold), all of which were written in SA-C. Most of the computation, however, is in a sequence of 8 erosion operators with alternating masks, and a later sequence of 8 dilations. We therefore compared these sequences on FPGAs and Pentiums. (Just the dilate is shown in Table 1; results are similar for erosion.)
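One reason a chain of morphological passes fuses well: with flat structuring elements, successive dilations compose into a single dilation with a larger element, so the whole sequence can be evaluated in one sweep. A Python check of that property (grayscale dilation with border-clipped neighborhoods; this is our own demonstration with flat masks, whereas ARAGTAP's masks alternate):

```python
def dilate(img, size):
    """Grayscale dilation with a flat size x size structuring element;
    neighborhoods are clipped at the image border."""
    h, w, k = len(img), len(img[0]), size // 2
    return [[max(img[rr][cc]
                 for rr in range(max(0, r - k), min(h, r + k + 1))
                 for cc in range(max(0, c - k), min(w, c + k + 1)))
             for c in range(w)]
            for r in range(h)]
```

Two 3x3 dilations reach exactly the pixels a single 5x5 dilation reaches, which is the algebraic basis for fusing the passes.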
This is still too large to fit on one FPGA.08 seconds. Fortunately.The SA-C compiler fused all 8 dilations into a single pass over the image. The result was a 20 fold speed-up over the Pentium running automatically compiled (but heavily optimized) C. where a probe is a pair of pixels that straddle the silhouette of the target. probing takes 65 seconds on a 512x1024 12-bit image. It is worth noting that methods have been proposed for automatically partitioning applications across FPGAs when the applications are specified as a task graph [14] (see also [15]). Probe sets must be evaluated at every window position in the image. The goal of probing is to find a target in a 12-bit per pixel LADAR or IR image. many of these probes are redundant in either space or time. the application defines 7. and the SA-C optimizing compiler is able to reduce the problem to computing 400 unique probes (although the summation trees remain complex). In total. and other optimizations. and there is not enough computation per pixel to fully exploit the parallelism in the FPGA. but were foiled by the simplicity of the dilation operator: after optimization. 5. . When compiled using VisualC++ for the Pentium. The match between a template and an image location is measured by the percentage of probes that straddle image boundaries. it reduces to a collection of max operators. pipelining. This is done by writing three loops. one for each vehicle.5) Probing The second application is an ATR probing algorithm [13]. The idea is that the difference in values between the pixels in a probe should exceed a threshold if there is a boundary between them. the SA-C implementation on the WildStar runs in 0. it also applied temporal common subexpression elimination.573 probes per 13x34 window. In our example there are approximately 35 probes per view. so to avoid dynamic reconfiguration we explicitly distribute the task across all three FPGAs on the WildStar. 81 views per target. 
We had expected a greater speed-up. and using a pragma to direct each loop to one FPGA. and three targets. A target (as seen from a particular viewpoint) is represented by a set of probes. What makes this application complex is the number of probes.
the FPGAs perform (512-13+1) × (1024) × (13/2) = 3. this would result in an execution time of 75 seconds. eliminating data download times.744. The Virtex II Pro from Xilinx is an example of such a combined processor.1 MHz this takes 80.421. The Pentium performs (512-13+1) × (1024-34+1) window accesses. the Pentium (a super scalar architecture) is actually executing more than one instruction per cycle! 6) Co-processor Timing Considerations In Section 5.421. containing two 12 bit pixels. For the probing algorithm generated by the SA-C compiler.752. We believe that these are the most relevant numbers. much better than the 22 instructions that the gcc compiler produces at optimization setting -O6. If one instruction were executed per cycle. The VC++ compiler infers that ALL the accesses to the set array can be done by pointer increments. the FPGAs run at 41. process it. As there are (512-13+1)×(1024) ×13 pixel columns to be read. Hence the inner loop that performs one threshold operation is executed (512-13+1) × (1024-34+1) ×7573 = 3.000 reads. j++){ diff = ptr[set[i][j][2]*in_width+set[i][j][3]] – ptr[set[i][j][0]*in_width+set[i][j][1]]. count += (diff>THRESH || diff<-THRESH).) The total number of instructions executed in the inner loop is therefore 3. since the trend is toward putting RISC processors on FPGA chips. . thereby eliminating the time needed to transfer data between the host and reconfigurable processor.1 MHz.000. by the way. At 41. The inner loop body in C is: for(j=0. } where in_width and THRESH are constants.These times can be explained as follows. and write it back to local memory. As the execution time of the whole program is 65 seconds.752. j<sizes[i].328. Each of these window accesses involves 7573 threshold operations.038. and generates an inner loop body of 16 instructions. There are also reconfigurable processors with direct camera connections. 
we reported run times that measured how long it took an FPGA to read data from local memory.500 times.8 milliseconds. (This is. The program is completely memory IO bound: on every clock cycle each FPGA reads one 32 bit word.500 ×16 = 60.
11 3.05923 0. the FPGA is slower than a Pentium at scalar addition and wavelet decomposition.83 1.14 seconds to reconfigure an XV2000E over a PCI bus.78 Ratio 0.15808 0. A typical upload or download time on a PCI bus for a 512x512 8-bit image is about 0. To execute an operator on the co-processor. current FPGAs cannot be reconfigured quickly.07708 0.14 83.019 seconds. Execution times in seconds for routines with 512x512 8-bit input images.06740 65. Routine AddS Prewitt Canny Wavelet Dilates (8) Probing Pentium III XV-2000E 0. and since it creates two result images.05072 0. but the performance ratios are very small except for probing. the output image has more output pixels than the source image. The simplest way to do this in practice is to select one operator sequence to accelerate per .05206 0. In addition.0 0. the bit resolution of the result is larger than the source. In all cases except dilation and probing. typically doubling the time to upload the results.3 Table 3.Nonetheless. a total of six uploads are required. Any function compiled to an FPGA configuration must save enough time to justify the reconfiguration cost.13500 0. When upload and download times are included.00595 0. although they are almost always faster for image computation if data transfer times can be eliminated. increasing upload times.12 2. the configuration tested here and shown in Figure 3 has a separate reconfigurable co-processor connected to a host via a PCI bus. the image must be downloaded to the reconfigurable system’s memory. The other applications tested are faster on the FPGA.06091 0. the image must be downloaded three times (once for each FPGA). In all of the applications except dilation and probing.22 0.09258 0. and the results must be returned to the host processor. as shown in Table 3. For probing. This suggests that FPGAs should only be used as co-processors for very large applications. when data transfer times between the host and reconfigurable processor are included. 
It takes about 0.
Unfortunately. and to pre-load the FPGAs with the appropriate configurations. universities and government laboratories as a single library to be supported by all manufacturers of image processing hardware. Some of this early work tried to map IP onto commercially available parallel processors (e. and Image Processing Library (VSIPL)9. More recent work has focused on so-called “vision chips” that build the sensor into the processor [18]. Another approach (advocated here) is to work at the board level and integrate existing chips – either DSPs or FPGAs -.isi. Focusing on FPGAs. [17]). researchers have to develop software libraries and/or programming languages.g.edu/SLAAC/ 9.) Research projects into new designs for reconfigurable computers include PipeRench [20].annapmicro. To exploit new hardware.FPGA. (The experimental results in this paper were computed on an AMS WildStar.org 10 www. The Intel Image Processing Library (IPL)10 and OpenCV11 are similar libraries that map image processing and computer vision operators onto Pentiums. the Nallatech Benblue7 and the SLAAC project8. It is also possible to build graphical user interfaces (GUIs) to make using libraries easier. One of the most important software libraries is the Vector. RAW [21] and Morphosis [22]. The current state of the art in commercially available reconfigurable processors is represented by the AMS WildStar6.com/software/products/perflib/ipl/ 7 6 .into parallel processors that are appropriate for image processing.com 8. proposed by a consortium of companies. all of which use Xilinx FPGAs. CHAMPION [23] uses www. thus eliminating the need for dynamic reconfiguration.intel. in both cases the market did not support the architecture designs.g. Signal. which were eclipsed by general-purpose processors and Moore’s law. 7) Related Work Researchers have tried to accelerate image processing on parallel computers for as long as there have been parallel computers.nallatech. 
Splash-2 [19] was the first reconfigurable processor based on commercially available FPGAs (Xilinx 4010s) and applied to image processing.com www. while other research focused on building special-purpose machines (e. [16]).
e. a programming language for Digital Signal Processors. The boundaries of these regions form barriers.org 11 . SA-C has also been integrated with Khoros. originally from Oxford University and now further developed by Celoxica. Streams-C [29] emphasizes streams to facilitate the expression of parallel processes. 12 www. One of the first programming languages designed to map image processing onto parallel hardware was Adapt [25]. Handel-C has variables with user-defined bitwidths similar to SA-C. where the modules is programmed in VHDL by the user. Other languages have specifically targeted FPGAs. and pointer arithmetic for efficiently implementing circular buffers. the MATCH project [30] uses MATLAB as its input language.intel. developed and supported by Adelante12. embedded systems design kits provide a frame work for modules to be connected together and interfaced on an FPGA.g.com 13 Khoros [24] GUI.adelantetech. This provides an explicit timing model in the language.com/software/products/opensource/libraries/cvfl. Handel-C [28]. It has fixed point numbers of user specifiable sizes. CoreFire from Annapolis Micro Systems. several of these projects focus on reconfigurable computing.htm www. C\ [26] and CT++ [27] are more recent languages designed for the same purpose. System C13 is a C++ class library that allows system simulation and circuit design in a C++ programming environment. while targeting reconfigurable processors. having implemented all the primitive Khoros routines in VHDL. Finally. Other languages are less specialized and try to map arbitrary computations onto fine-grained parallel hardware. is a C derived language with explicit Occam type sequential and parallel regions. Various vendors provide macro dataflow design packages. and Viva from Starbridge. DSP-C. is a sequential extension of ISO C.systemc. In addition. which can be used both to call pre-written SA-C routines or to write new ones.
The goal of these extensions is to support stand-alone applications on FPGAs.a. for example. Such processors could be incorporated inside a camera. We also plan to introduce new compiler optimizations to support trees and other complex data structures in memory. reconfigurable processor boards with one or more FPGAs. real-time applications on traditional processors had to be written in assembly code. This will support new FPGA boards with channels for direct sensor input. one for internal computation and one for I/O. A single host processor could then monitor a large number of cameras/FPGAs from any location. loop carried) variables. and internet access. The application could be as simple as motion detection or as complex as human face recognition. local memories. and include internal RAM blocks that can serve as data buffers. and will also make it easier to implement applications where the runtimes of subroutines are strongly data dependent. and to improve pipelining in the presence of nextified (a. for example connected components. We believe there is an analogous progression happening with VHDL and high level algorithmic languages for FPGAs. In particular.8) Future Work For many years. A security application running on the FPGAs could then inspect images as they came from the camera. however. the FPGA configurations generated by the SA-C compiler currently use only one clock signal. Imagine. At the moment.k. A/D converters (or digital camera ports). We also plan to introduce streams into the SA-C language and compiler. support multiple clocks running at different speeds. and would consume very little power. . applications written directly in VHDL are more efficient (albeit more difficult to develop). and notify users via the internet whenever something irregular occurred. Future versions of the compiler will use two clocks. Xilinx FPGAs. because the code generated by compilers was not as efficient. 
This should double (or more) the speed of I/O bound applications. This limits the I/O ports to operate at the same speed as the computational circuit. but we expect future improvements to the compiler to narrow this gap.
[3] J. Tredennick. "Reactive Computer Vision System with Reconfigurable Architecture. Herzen. [5] R. and that this will continue to be true as both types of processors become faster. Reinig. 1999. 1999. Spain." presented at Conference on Compression Technologies and Standards for Image and Video Compression. 2000. 1: Gilder Publishing. Draper. 1998. "Real-Time Stereo Vision on the PARTS Reconfigurable Computer. J. 1-8. DeHon. Cabrera. In particular. Simpler image processing operators tend to be I/O bound. Woodfill and B. Napa. v. . "Moore's Law Shows No Mercy. W. References [1] A. Las Palmas de Gran Canaria. vol. LLC. P. Kress. getting faster and denser at the same rate as other processors. Perona. 1997. rather than I/O bound. and may merge in future systems on a chip. and K. Hartenstein. Schmidt.8) CONCLUSION FPGAs are a class of processor with a two billion dollar per year market. Hammes. vol. A. pp. Benedetti and P. B. 33. 1995. [4] D. Las Palmas de Gran Canaria. CA. but by smaller margins (factors of 10 or less). [6] N. "Sassy: A Language and Optimizing Compiler for Image Processing on Reconfigurable Computing Systems. FPGAs still outperform Pentiums because of their greater I/O capabilities. CA. Böhm. Benitez and J. In these cases." in Dynamic Silicon. The thesis of this paper is that most image processing applications run faster on FPGAs than on general-purpose processors. 4149. As a result. R. complex image processing applications do enough processing per pixel to be compute bound. W." presented at IEEE Conference on Computer Vision and Pattern Recognition. they obey Moore’s law. P. In such cases. FPGAs dramatically outperform Pentiums by factors of up to 800." presented at International Conference on Vision Systems." IEEE Computer. Becker. pp." presented at International Conference on Vision Systems. [2] A. 2001. "A Reconfigurable Machine for Applications in Image and Video Compression. and A." 
presented at IEEE Symposium on Field-Programmable Custom Computing Machines. Amsterdam. "Real-time 2-D Feature Detection on a Reconfigurable Computer. Santa Barbara. H. [7] J. "The Density Advantage of Reconfigurable Computing.
[10] A.[8] A. Lehn. Bellingham. Rinker. 1996. LA. 25. Riseman. Draper. Inc. Kumar. Nowicki. K. [12] S. [19] D. Trucco and A. Chawathe. Narayanan. Vemuri. 1998. Moye. . Hess. 1992. vol. M. C. D. Böhm. E. Splash 2: FPGAs in a Custom Computing Machine: IEEE CS Press. [15] R. Record. Moini. "Laser Radar ATR Algorithms: Phase III Final Report. 1993. [13] J. 1992. New Orleans. D. and W. [9] E. 1998. A. Davis. S. Kleinfelder." Communications of Pure and Applied Mathematics. Hammes. vol. and M. "An Automated Temporal Partitioning and Loop Fission for FGPA-based Reconfigurable Synthesis of DSP applications. R. 1992. M. Hanson. 1999. C. [17] C. T. Verri. Hudson. Athanas." IEEE Computer. Introductory Techniques for 3-D Computer Vision. J. [16] P. C. Buell. Shiring. 45. Boston: Kluwer. J. WA. 25. "ARAGTAP ATR system overview. J. vol. 485-560. and L. CA. Saddle River." presented at SPIE. 2000. and W. Justice. 65-68. Boulder. J." presented at 36th Design Automation Conference. N. pp.and Intermediate-Level Vision. J. Najjar. May 1992." Supercomputing. E. "Biorthogonal bases of compactly supported wavelets. and P. Monterey. 526. and Future Directions. D. vol. D. [11] S.Status. pp. pp. Reflections. NJ: PrenticeHall. and A. vol. "Spatio-temporal Partitioning of Computational Structures onto Configurable Computing Machines. M. "A Benchmark Suite for Evaluating Configurable Computing Systems . 21. P. R." Alliant Techsystems. W. L. B. and I. Daubechies. Ross. [18] A. I. 2002. Atwell. Chen. "Mapping a Single Assignment Programming Language to Reconfigurable Systems. 117-130. Ouaiss. "Image Understanding Architecture: Exploiting Potential Parallelism in Machine Vision. Vision Chips. Feauveau." presented at Theater Missile Defense 1993 National Fire Control Symposium. R. and J. S. 68-73. "Effective Use of SIMD Parallelism in Low." IEEE Computer. Arnold. [14] M. CO. pp. E." presented at International SYmposium on FPGAs. Raney. R. J. 1999. Bevington. Weems. E. Govindarajan. Kaul. Cohen. A.
Basille. Frank. Napa. C. 2000. Houzet. Kurhadi. Taylor. V. 1997. H. [24] K. pp. [30] P. "Stream Oriented FPGA Computing in Streams-C." IEEE Computer. Banerjee. 25. Gokhale. Napa. [28] O. R. 2000. S. Waingold." IEEE Transactions on Image Processing." University of Tennessee 1999. Tan. Bagherzadeh. CA. vol. 21-31. "Baring it all to Software: RAW Machines. "The Handel Language. Kim. [23] S. B.. "Automatic Mapping of Khoros-based Applications to Adaptive Computing Systems. Natarajan. [29] M. M. MA. P. Webb. and R. Bodin. Cadambi. . H. M. S. Reconfigurable Computing Systems. 1994. " A Specific Compilation Scheme for Image Processing Architecture. and F. R." presented at IEEE Symposium on Field-Programmable Custom Computing Machines. "The Morphosis Parallel Reconfigurable System. CA. and A. Lu.[20] S. [27] F. H. [22] G. Babb. Essafi. Singh. Cambridge. Levine. [26] A. W. Laufer." presented at Computer Architectures for Machine Perception. 1997. C. 1997." presented at IEEE Symposium on Field-Programmable Custom Computing Machines. Schmit. J. A. [21] E. 86-93. "Steps Toward Architecture-Independent Image Processing." presented at Computer Architecture for Machine Perception. Lee. Pic. Finch. N. pp. M. Srikrishna. 1992. D. Cambridge. "A MATLAB Compiler for Distributed. 1999. and D. Group. D. 3." IEEE Computer. Rasure. 30. Goldstein. and M." Oxford University 1997. "PipeRench: A Coprocessor for Streaming Multimedia Acceleration. Sarkar. Barua. Lee. Taylor. 243-252. vol. J.. Newport. vol. V. MA. Konstantinides and J. "The Khoros Software Development Environment for Image and Signal Processing. Moe. [25] J. C. Fatni. Lee." presented at International Symposium on Computer Architecture. D." presented at EuroPar. R. Budiu. Hetergeneous. pp. Bouldin. M. M. and J. H. Agarwal. Amarasinghe. 1999. " The C\ Data Parallel Language on a Shared Memory Multiprocessor. 
| https://www.scribd.com/document/216723264/Accelerated-Image-Processing-on-FPGAs | CC-MAIN-2018-43 | refinedweb | 7,263 | 52.05 |
0
Hi I need help in programming with strings....I started a code, but I'm not sure how to do it....Please help
- read in one string which consists of two words Ex. "Computer Science"
- call a function makewords() which receives three parameters: a string which holds the original phrase and two strings to hold the two words. The function will break the string stored in the first parameter (the original sentence) into two words and then store it in one of the parameters.
- Your function, makewords(), should then check if the 2 new strings are equal to each other, and print the appropriate message saying whether or not they are equal.
- print out the two words and their sizes.
- call the function matchexact() passing the two strings. matchexact() will determine how many times the corresponding positions in the two strings hold exactly the same characters. The function will print this value.
- The main program will then call a function jointhem() which receives the two strings. The function will join the two strings together and print the new string. Ex. if string1 is “bath” and string2 is “water” than the new string is “bathwater”
#include<iostream> #include<string> using namespace std; int main() { string [50]; string a ("Computer Science"); cout<<"Computer Science"; makewords( int a, int b, int c); string[50]; | https://www.daniweb.com/programming/software-development/threads/63547/strings-program | CC-MAIN-2016-50 | refinedweb | 222 | 80.11 |
What's New in Visual Studio 2012
You can find information about new features and enhancements in Visual Studio 2012 by reviewing the following sections of this topic and the topics to which they link:
Understand the basics of Windows Store apps.
For more information, see Getting started with Windows Store apps.
Build a Windows Store app by using one of several default project templates, which provide the files, resources, and structure for various kinds of Windows Store apps.
For more information, see Develop Windows Store apps using Visual Studio 2012.
Build a Windows Store app by using XAML and C++, C#, or Visual Basic.
For more information, see Developing Windows Store apps (C#/C++/VB).
Build and test a Windows Store app by using Team Foundation Build.
For more information, see Build and Test a Windows Store App Using Team Foundation Build.
Create and run unit tests for Windows Store apps.
For more information, see Walkthrough: Creating and Running Unit Tests for Windows Store Apps.
Build a Windows Store app by using JavaScript.
For more information, see Designing and building Windows Store apps (JavaScript).
Visually design Windows Store apps that you build by using HTML.
You can use Blend to drag app controls onto a design surface and then manipulate them and set their properties. For more information, see the Design Windows Store apps using Blend.
Visually design Windows Store apps that you build by using XAML.
You can use the XAML Designer to drag app controls onto a design surface and then manipulate them and set their properties. For more information, see Creating a UI by using XAML Designer.
Debug a Windows Store app locally by using the traditional debugging model for Visual Studio.
For more information, see Debugging and testing with Visual Studio.
Debug a Windows Store app by using the Windows Store simulator.
You can use the Windows Store simulator to run Windows Store apps and to simulate common touch and rotate events on the same machine. For more information, see Running Windows Store apps in the simulator.
Debug a Windows Store app by using the remote debugger to run, debug, and test an app that's running on one device from a second machine that's running Visual Studio.
For more information, see Running Windows Store apps on a remote machine.
Debug a Windows Store app interactively by using JavaScript debugging tools, including DOM Explorer and JavaScript Console window.
For more information, see Debugging apps (JavaScript).
Find performance bottlenecks in your functions and algorithms.
You can use Visual Studio Profiling to identify where the code of your app spends the most processing time. For more information, see Analyzing the performance of Windows Store apps.
Check the code in your Windows Store app for common defects and violations of good programming practice.
For more information, see Analyzing the code quality of Windows Store apps with Visual Studio code analysis.
Create a developer account at the Windows Store, or reserve a name for your Windows Store app.
You can interact with the Windows Store by using several commands on the Store menu. For more information, see Packaging your Windows Store app using Visual Studio 2012.
Create an app manifest, and package all the required files together so that you can upload them to the Windows Store.
For more information, see Packaging your Windows Store app using Visual Studio 2012.
Understand the basics of Windows Phone apps.
For more information, see Getting started with developing for Windows Phone.
Build an app by using C# or Visual Basic and XAML.
For more information, see Create your first app for Windows Phone.
Register for a Windows Phone Dev Center account.
For more information, see Join the Program.
Submit your app into the Windows Phone Store.
For more information, see Submit your app.
Use a checklist to test your app.
For more information, see Testing apps for Windows Phone.
Run automated and manual steps to prepare your app for the Windows Phone Store.
For more information, see Windows Phone Store Test Kit.
Work with Visual Studio 2010 SP1 projects and files in both Visual Studio 2012 and Visual Studio 2010 SP1.
For more information, see Visual Studio 2012 Compatibility.
Browse code in Solution Explorer.
Browse the types and members in your projects, search for symbols, view a method’s Call Hierarchy, find symbol references, and perform other tasks without switching between multiple tool windows. For more information, see Viewing the Structure of Code.
Install online samples.
Use Visual Studio to download and install samples from the MSDN Code Gallery. You can download samples that explain new technologies and help you to jump start projects and debug your code. For more information, see Accessing Online Samples.
Solutions load asynchronously.
Projects are now loaded asynchronously, and the key parts of the solution load first, so that you can start to work faster.
Preview files in the Code Editor.
Reduce file clutter in the editor by viewing files without opening them. Preview files appear in a blue tab on the right side of the document tab well. The file opens if you modify it or choose the Open button. For more information, see Kinds of Windows.
Access frequently used files more easily.
Pin files that you use often to the left side of the tab well so that you can access them easily regardless of how many files are open in the IDE.
Arrange windows on multiple monitors more effectively.
Dock multiple floating tool or document windows together as a “raft” on other monitors. You can also create multiple instances of Solution Explorer and move them to another monitor. For more information, see How to: Arrange and Dock Windows.
Change the color scheme of the IDE.
Choose either the Light or Dark color theme for the Visual Studio UI. For more information, see How to: Change Visual Studio Fonts and Colors.
Search across the IDE.
Specify a word or a phrase, and then choose an entry from the list to open the dialog box or window that’s associated with the item or command. For more information, see Quick Launch.
Search in tool windows.
Filter the view by entering a keyword in the search box at the top of certain tool windows, such as the Toolbox, Solution Explorer, Error List, and Team Explorer. For more information, see Finding and Replacing Text.
Find strings by using regular expression syntax from the .NET Framework.
Use regular expression syntax from the .NET Framework in the Find and Replace control and the Find in Files and Replace in Files dialog boxes. For more information, see Using Regular Expressions in Visual Studio.
Specify more semantic colorization.
More C++ tokens now have colorization by default, and you can specify more colorizations. For more information, see Writing Code in the Code and Text Editor.
Use improved reference highlighting.
You can highlight all instances of a symbol just by pointing to one instance. You can move among the highlighted references by choosing the Ctrl+Shift+Up Arrow or Ctrl+Shift+Down Arrow keys. You can turn this feature off or on.
Choose member functions as you type.
The List Members list appears automatically as you enter text in the code editor. Results are filtered so that only relevant members appear. For more information, see Using IntelliSense.
Take advantage of C++/CLI IntelliSense.
C++/CLI now supports IntelliSense features such as Quick Info, Parameter Help, List Members, and Auto Completion.
Speed up your coding by using code snippets.
You can choose a code snippet from the List Members list and then fill in the required logic. Snippets are available for switch, if-else, for, and other basic code constructs. You can also create custom snippets. For more information, see Code Snippets.
Use features of ECMAScript 5 and HTML5 DOM.
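For example, the editor's language service now understands ECMAScript 5 additions such as property descriptors and the array iteration methods. A small illustrative sketch (the object and variable names here are invented for the example):

```javascript
// ES5 property descriptors: define a read-only property.
var point = {};
Object.defineProperty(point, "x", { value: 3, writable: false });
try { point.x = 99; } catch (e) { /* throws only in strict mode */ }

// ES5 array iteration methods and Object.keys.
var squares = [1, 2, 3].map(function (n) { return n * n; });
var keys = Object.keys({ a: 1, b: 2 });
```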
Provide IntelliSense for function overloads and variables.
Provide IntelliSense information by using new elements supported in triple-slash (///) code comments. New elements include <var> and <signature>. For more information, see XML Documentation Comments (JavaScript).
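A sketch of how these elements look in practice (the function itself is a hypothetical example; the triple-slash tags are what the editor reads to build IntelliSense for overloads and variables):

```javascript
/// <signature>
///   <summary>Converts a distance in miles to kilometers.</summary>
///   <param name="miles" type="Number">The distance in miles.</param>
///   <returns type="Number" />
/// </signature>
function toKilometers(miles) {
    /// <var type="Number">Conversion factor from miles to kilometers.</var>
    var factor = 1.609344;
    return miles * factor;
}
```

Because the annotations are ordinary comments, they have no effect at run time.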
View signatures in the statement completion list.
Function signatures appear on the right side of the statement completion list.
Use smart indenting, brace matching, and outlining when you write code.
Use Go To Definition to locate function definitions in source code.
Right-click a function, and then click Go To Definition (or put the cursor in the function and then choose the F12 key) to open the JavaScript source file at the location in the file where the function is defined. (This feature isn't supported for generated files.)
Get IntelliSense information from standard JavaScript comments.
The new IntelliSense extensibility mechanism automatically provides IntelliSense when you use standard comment tags (//).
Extend JavaScript IntelliSense to improve support for libraries from other organizations.
Use extensibility APIs to provide a customized IntelliSense experience. For more information, see Extending JavaScript IntelliSense.
Set a breakpoint within a single line of code.
When a single line contains multiple statements, you can now break on a single statement.
Control which objects are available in global scope.
For more information, see JavaScript IntelliSense.
View statement completion for identifiers even when accurate information about the object isn't available.
For more information, see Statement Completion for Identifiers.
Get IntelliSense information for objects in dynamically loaded scripts.
The language service provides automatic support for some recognizable script loader patterns.
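A minimal sketch of the AMD-style define pattern that such loaders use (this tiny shim is illustrative only; real loaders such as RequireJS are far more complete):

```javascript
// Tiny AMD-flavored module registry (illustrative shim, not a real loader).
var registry = {};
function define(name, deps, factory) {
    // Resolve each named dependency to its already-registered module.
    var resolved = deps.map(function (d) { return registry[d]; });
    registry[name] = factory.apply(null, resolved);
}

define("math", [], function () {
    return { square: function (x) { return x * x; } };
});
define("app", ["math"], function (math) {
    return { answer: math.square(7) };
});
```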
Maintain the simplicity of a For Each loop for a complex list sequence.
You can use iterators to return each item in a collection one at a time. For more information, see Iterators (C# and Visual Basic).
Understand better how your code flows.
By using the Call Hierarchy feature, you can display all calls to and from a selected method, property, or constructor. For more information, see Call Hierarchy.
Define a namespace outside of the root namespace of your project.
You can use the Global keyword in a Namespace statement. For more information, see Namespaces in Visual Basic.
For more information, see What's New for Visual Basic in Visual Studio 2012.
For more information, see What's New for Visual C# in Visual Studio 2012.
Write code that conforms to the C++11 language standard.
You can use Visual C++ to write code that uses range-based for loops, standard threads, futures, and atomics, and other powerful new features in the standard C++11 language.
Create Windows Store apps and games by using C++.
Use the Visual C++ with XAML development model for Windows Store apps and games and use the Visual C++ component extensions (C++/CX) and other new features to create them.
Write faster, more efficient code by using compiler improvements.
Because of compiler improvements, you can write code that you can compile to run faster on the CPU or execute on multiple processors, or you can write code that you can reuse to target different system configurations.
Equip your app to run more quickly and efficiently when multiple processors are available.
By using improved parallel libraries and new debugging and visualization features, you can enable your app to run better on a variety of hardware.
Make your code more robust.
You can use the updated unit test framework, architecture dependency graphs, Architecture Explorer, code coverage, and other tools to make your code more robust.
Equip your app to run faster by using multiple CPUs.
By using the improved Parallel Patterns Library (PPL) and new debugging and visualization features, you can enable your app to run faster on hardware that has multiple cores.
For more information, see What's New for Visual C++ in Visual Studio 2012.
Introduce additional run-time constraints and error-checking into your code.
For more information, see Strict Mode (JavaScript).
Handle binary data from sources such as network protocols, binary file formats, and raw graphics buffers.
For more information, see Typed Arrays (JavaScript).
Use the Windows Runtime in Windows Store apps.
For more information, see Using the Windows Runtime in JavaScript.
Add background workers that run in parallel with the main page.
For more information, see About Web workers.
For more information, see What’s New In JavaScript.
Program directly against rich spaces of data and services, such as databases, web services, web data feeds, and data brokers.
By using F# type providers, you can focus on your data and program logic instead of on creating a system of types to represent your data. For more information, see Type Providers.
Query databases directly in the F# language.
Use F# LINQ queries to specify exactly the data that you want in the F# language, without writing a database query or a stored procedure. For more information, see Query Expressions (F#).
Manage the backlog, sprints, and tasks by using agile tools.
Define multiple teams, each of which can manage their backlog of work and plan sprints. Prioritize work, and outline dependencies and relationships. See who is over capacity, in real time. Update tasks and see the progress within a sprint. For more information, see Collaborate.
Engage stakeholders to provide feedback on pre-release software.
Stakeholders can record action scripts, annotations, screenshots, and video or audio recordings. For more information, see Request and review feedback.
Illustrate requirements with storyboards, and link storyboards to work items.
Build a storyboard from a collection of pre-defined storyboard shapes, capture user interfaces, and link any storyboard or file shared on a network to a work item. For more information, see Storyboard backlog items.
Manage enterprise projects by using Microsoft Project and Project Server.
Manage project portfolios and view status and resource availability across agile and formal software teams. For more information, see Enable Data Flow Between Team Foundation Server and Microsoft Project Server.
Visualize your code more quickly and easily.
Create dependency graphs from Solution Explorer so that you can understand the organization and relationships in code. For more information, see Visualize Code Dependencies on Dependency Graphs.
Read and edit dependency graphs more easily.
Browse graphs and rearrange their items to make them easier to read and to improve rendering performance. For more information, see Edit and Customize Dependency Graphs and Browse and Rearrange Dependency Graphs.
Open and view linked model elements in work items.
For more information, see Link Model Elements and Work Items.
Generate C# code from UML class diagrams.
Start implementing your design more quickly, and customize the templates that are used to generate code. For more information, see How to: Generate Code from UML Class Diagrams.
Create UML class diagrams from existing code.
Create UML class diagrams from code so that you can communicate with others about the design. For more information, see How to: Create UML Class Diagrams from Code.
Import XMI 2.1 files.
Import UML class, use case, and sequence diagram model elements exported as XMI 2.1 files from other tools. For more information, see How to: Import UML Model Elements from XMI Files.
Track tasks and boost productivity by using an enhanced interface.
Organize upcoming, ongoing, and suspended work while increasing transparency and reducing the impact of interruptions. For more information, see Day in the Life of an ALM Developer: Write New Code for a User Story.
Work more efficiently in a version-controlled codebase.
Organize your work, reduce the impact of interruptions, and manage shelvesets and changesets. For more information, see Develop Your App in a Version-Controlled Codebase.
Conduct and track code reviews by using new tools.
For more information, see Day in the Life of an ALM Developer: Suspend Work, Fix a Bug, and Conduct a Code Review.
Perform unit testing by using a dedicated tool.
Test code as part of your workflow. For more information, see Running Unit Tests with Test Explorer.
Find duplicate code so that you can refactor.
For more information, see Finding Duplicate Code by using Code Clone Detection.
Compare code versions by using an enhanced diff tool.
For more information, see Compare Files.
Work offline.
Work in local workspaces either inside or outside of Visual Studio, even when you're not connected to Team Foundation Server. For more information, see Decide Between Using a Local or a Server Workspace.
Easily debug code that was generated from text templates.
You can set breakpoints in T4 text templates and debug them in the same way as ordinary code. For more information, see Debugging a T4 Text Template.
Run, monitor, and manage builds by using an enhanced interface.
For more information, see Run, Monitor, and Manage Builds.
Run automated builds from Team Foundation Service.
Take advantage of an on-premise or hosted build controller.
Define gated check-in build processes that support multiple check-ins.
Build multiple check-ins at the same time. For more information, see Define a Gated Check-In Build Process to Validate Changes.
Run native and third-party framework unit tests in your build process.
For more information, see Run Tests in Your Build Process.
Debug your build process more easily.
Choose a link in the build results window to view diagnostic logs. For more information, see Diagnose Build Problems.
Run manual testing on Windows Store apps.
You can use Microsoft Test Manager to run manual tests to help you identify problems in your Windows Store apps that are running on a remote device, such as a tablet. For more information, see Testing Windows Store apps Running On a Device with Microsoft Test Manager.
Conduct exploratory testing.
From the Exploratory Testing window, you can run tests without being restricted to a test script or set of predetermined steps. For more information, see Performing Exploratory Testing Using Microsoft Test Manager.
Include multiple lines and rich text in your manual test steps.
Test steps can include multiple lines to consolidate related actions within a single test step in your test cases. Microsoft Test Manager now also includes a toolbar that you can use to format the text of your test steps. You can use various formatting options, such as bold, underline, or color highlighting to emphasize key points. For more information, see How to: Create a Manual Test Case.
Get the status of your test plans within Microsoft Test Manager.
This report is available to you from the Plan tab in the center group menu bar of Microsoft Test Manager. From there, you can view Results, which include a report on the status of your test plan. For more information, see How to: Create a Manual Test Case.
Clone test plans for new iterations.
By cloning tests, you can work more easily on different releases in parallel. For example, if you already have a test plan called “Contoso V1 – Milestone 1” and your team decides to make version V2, you can clone the test plan and use it for the V2 source code branch. After cloning the test plans, you and your team can work on both versions of the applications simultaneously. For more information, see Copying and Cloning Test Suites and Test Cases.
Improve page load time when referencing JavaScript and CSS files.
You can combine separate JavaScript and CSS files and reduce their size for faster loading through bundling and minification.
Work with projects that target earlier versions of the .NET Framework.
ASP.NET 4.5 updates multi-targeting so that you can work with projects that target earlier versions of the .NET Framework.
Avoid cross-site scripting attacks.
Encoding APIs that prevent cross-site scripting have been integrated into the core framework of ASP.NET pages.
Write asynchronous web applications more easily.
Use the new .NET 4.5 async (C# Reference) and await (C# Reference) keywords in combination with the Task type to simplify asynchronous web programming. For more information, see Using Asynchronous Methods in ASP.NET 4.5 and Using Asynchronous Methods in ASP.NET MVC 4.
For more information, see What’s New for ASP.NET 4.5 and Web Development in Visual Studio 2012.
Write code that’s called directly by data-bound controls.
In ASP.NET Web Forms, you can now use model binders for data access as you can in ASP.NET MVC. If you use model binders, data-bound controls can call your code directly, like action methods in ASP.NET MVC.
Write strongly typed, two-way data-binding expressions in Web Forms data controls.
By using strongly typed expressions, you can access complex properties in data controls instead of using Bind or Eval expressions.
Make pages perform better through unobtrusive JavaScript.
By moving the code for client-side validation into a single external JavaScript file, your pages become smaller and faster to load.
For more information, see What’s New for ASP.NET 4.5 and Web Development in Visual Studio 2012.
Use the most recent web standards.
The new HTML editor offers full support for HTML5 elements and snippets. The CSS editor offers full support for CSS3, including support for CSS hacks and snippets for vendor-specific extensions to CSS.
Test the same page, application, or site in a variety of browsers.
Installed browsers appear in a list next to the Start Debugging button in Visual Studio.
Quickly find the source of rendered markup.
The new Page Inspector feature renders a webpage (HTML, Web Forms, ASP.NET MVC, or Web Pages) directly within the Visual Studio IDE. When you choose a rendered element, Page Inspector opens the file in which the markup was generated and highlights the source.
Find snippets and code elements quickly by using improved IntelliSense.
IntelliSense in the HTML and CSS editors filters the display list as you enter text. This feature shows strings that match the typed text in their beginning, middle, or end. It also matches against initial letters. For example, "bc" will match "background-color."
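The matching behavior described above can be illustrated with a short sketch. The function below is a hypothetical approximation of the filtering logic, not the actual IntelliSense implementation:

```python
def matches(query, candidate):
    """Return True if the query matches the candidate the way the
    filtered completion list does: as a substring at the beginning,
    middle, or end of the candidate, or as the initial letters of
    its hyphenated words (so "bc" matches "background-color")."""
    query = query.lower()
    candidate = candidate.lower()
    if query in candidate:  # substring anywhere
        return True
    initials = "".join(word[0] for word in candidate.split("-") if word)
    return initials.startswith(query)

print(matches("bc", "background-color"))     # True (initial letters)
print(matches("color", "background-color"))  # True (substring at the end)
print(matches("xyz", "background-color"))    # False
```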
Select markup and extract it to a user control.
This feature is a convenient way to create markup for reuse in multiple locations. Visual Studio registers a tag prefix and instantiates the control for you. The selected code itself is replaced with an instance of the new user control.
Create and edit code and markup more easily.
When you rename an opening or closing tag, the corresponding tag is automatically renamed. When you choose the Enter key inside an empty tag pair, the cursor appears on a new line in indented position. Source view has Smart Tasks like Design view.
Create CSS more efficiently.
In the new CSS editor, you can expand and collapse sections, use hierarchical indentation, and comment and uncomment blocks freely. The CSS editor now has a color selector like the HTML editor.
Write JavaScript in the JavaScript editor.
For information about enhancements to the JavaScript editor, see the Code Editor Enhancements for JavaScript section.
Deploy web application projects more easily.
You can import publish settings from hosting providers, specify Web.config file transformations for a publish profile, store encrypted credentials in the publish profile, specify the build configuration in the publish profile, and preview deployment updates.
For more information, see What’s New for ASP.NET 4.5 and Web Development in Visual Studio 2012.
Automate validation for frequently used data types.
You can add new DataAnnotation attributes to properties to automate validation for frequently used data types such as e-mail addresses, telephone numbers, and credit-card numbers.
Deploy incremental database updates.
After you deploy a database with a web project, changes to the database schema are automatically propagated to the destination database the next time that you deploy.
For more information, see What’s New for ASP.NET 4.5 and Web Development in Visual Studio 2012.
Easily build and consume HTTP services that reach a broad range of clients.
Services can be consumed by browsers, mobile applications, tablets, and other devices. Built-in support for content negotiation enables clients and servers to mutually determine the right format for data.
Directly access and manipulate HTTP requests and responses by using a modern HTTP programming model.
Use a clean, strongly typed HTTP object programming model that’s supported both on the server and on the client. The new HttpClient API can call web APIs from any .NET Framework application.
Easily extract data from an HTTP request.
Model binders make it easier to extract data from various parts of an HTTP request. The message parts become .NET objects that Web API actions can use. The ASP.NET Web API supports the same model binding and validation infrastructure as ASP.NET MVC.
Enjoy a full set of routing capabilities.
ASP.NET Web APIs support the full set of routing capabilities in ASP.NET MVC and ASP.NET, including route parameters and constraints.
For more information, see Getting Started with ASP.NET Web API and ASP.NET Web API (Part 1).
Connect to OData data sources.
Your LightSwitch applications can connect to any Open Data Protocol (OData) data source, including those from the Windows Azure DataMarket. For more information, see How to: Connect to Data.
Expose your application data as an OData data source.
You can expose data from a published LightSwitch web application as an OData feed for use by other applications, taking advantage of LightSwitch features such as authentication and filtering. For more information, see LightSwitch as a Data Source.
Assign roles and permissions to security groups.
If you use Windows authentication, you can assign roles and permissions to any security group in Active Directory. For more information, see LightSwitch Authentication and Authorization.
Limit data that the server returns.
You can define filters that apply across any queries that access your data, even through an association. For more information, see How to: Filter Data in a LightSwitch Application by Using Code.
Improve the look of your screens with new controls.
You can organize screen content by using the Group Box control. You can also display text and data on a screen without data binding. For more information, see How to: Add Static Text or Images to a Screen.
Customize the formatting of numbers and dates.
You can use the new Format Pattern property for numeric and date data types to control the display format of numbers and dates. For more information, see How to: Format Numbers and Dates in a LightSwitch Application.
Treat URLs and percentages as data types.
You can use custom business types to treat a decimal as a percentage and a string as a URL, with built-in formatting and validation. For more information, see Adding a Data Field.
For more information, see What's New for LightSwitch in Visual Studio 2012.
Work with database objects in SQL Server Object Explorer.
Use the new SQL Server Object Explorer, which resembles Management Studio, to create queries and define database objects. View the column definitions, including primary and foreign keys. For more information, see Connected Database Development.
Define tables in the new Table Designer.
Use the Table Designer to define tables in the SQL Server 2012 format. As you define a table in the graphical interface, the Transact-SQL code is updated in the Script pane. For more information, see How to: Create Database Objects Using Table Designer.
Develop and test database applications in SQL Server Express LocalDB.
SQL Server Express LocalDB is a lightweight version of SQL Server that has all the programmability features of a SQL Server database. SQL Server Express LocalDB replaces SQL Server Express as the default database engine for development. You can upgrade your files or continue to use SQL Server Express if you must use both Visual Studio 2010 and Visual Studio 2012. For more information, see Local Data Overview.
Add, edit, and compile HLSL shaders more easily.
You can use syntax coloring, indenting, and outlining when you are coding HLSL shaders, and MSBuild automatically supports the Microsoft HLSL Compiler (fxc.exe).
View and modify image assets more efficiently.
You can use the Image Editor to create, inspect, and modify bitmap and compressed image formats (DDS, TGA, TIFF, PNG, JPG, GIF), and the editor supports transparency and mipmaps. For more information, see Image Editor.
Work with 3-D models.
You can use the Model Editor to inspect standard 3-D model formats (OBJ, COLLADA, and Autodesk FBX). You can also use the built-in 3-D primitive generation and materials to create placeholder art for 3-D games and apps, thereby improving artist-developer workflow. For more information, see Model Editor.
Create advanced pixel shaders.
You can use the Shader Designer, which is a graph-based shader creation tool that provides a live preview of the effect, to create advanced pixel shaders and export them as HLSL code that you can use in apps that are based on DirectX. For more information, see Shader Designer.
Use C++ AMP to make your code run faster.
By using C++ Accelerated Massive Parallelism (C++ AMP), you can control how data moves between the CPU and the GPU or other data-parallel hardware and thereby accelerate the execution of your C++ code. For more information, see C++ AMP (C++ Accelerated Massive Parallelism).
Debug your parallel apps more effectively.
Not only can you use the GPU Threads and Parallel Watch windows to debug parallel apps, but you can also use them to evaluate and fine-tune performance gains. For more information, see What’s New for the Debugger in Visual Studio 2012.
Customize the data that you use to examine how well your parallel app performs.
By using the Concurrency Visualizer, you can examine how your multithreaded app performs. In this version, you get quicker access and increased configuration control, and you can add your own custom performance data to the visualizer. For more information, see Concurrency Visualizer.
Use TPL dataflow to make your concurrency-enabled app more robust.
Use components of the Task Parallel Library (TPL) Dataflow library when your code involves multiple operations that must communicate with one another asynchronously or when you want to process data as it becomes available. For more information, see Dataflow (Task Parallel Library).
For more information, see What's New for SharePoint Development in Visual Studio 2012.
Create apps for Office.
You can surface web technologies and cloud services within Office documents, email messages, meeting requests, and appointments. For more information, see Create Apps for Office by using Visual Studio.
Develop solutions for Office 2013.
You can create document-level customizations and application-level add-ins for Office 2013 applications by using Office developer tools. To get project templates for these kinds of solutions, you download and install the Microsoft Office Developer Tools for Visual Studio 2012.
Develop Office solutions that target the .NET Framework 4.5.
To target the .NET Framework 4.5, you download and install the Microsoft Office Developer Tools for Visual Studio 2012.
Build managed assemblies that work on multiple .NET Framework platforms.
By using the Portable Class Library project in Visual Studio 2012, you can target multiple platforms (such as Windows Phone and .NET for Windows Store apps). For more information, see Cross-Platform Development with the .NET Framework.
Reduce system restarts when installing the .NET Framework.
For more information, see Reducing System Restarts During .NET Framework 4.5 Installations.
Improve file input/output performance by using asynchronous operations.
Use the new Async feature in C# and Visual Basic with asynchronous methods in the input/output classes when you work with large files. For more information, see Asynchronous File I/O.
Improve startup performance on multi-core processors.
Enable background just-in-time (JIT) compilation. For more information, see the ProfileOptimization class.
Develop and maintain WCF applications more easily.
For more information, see What's New in Windows Communication Foundation.
Improve the scalability of WCF applications.
Enable asynchronous streaming of messages to multiple clients. For more information, see WCF Simplification Features.
Manage workflows more easily.
The Workflow Designer contains several enhancements. For more information, see What’s New in Windows Workflow Foundation.
Create state machine workflows.
For more information, see What’s New in Windows Workflow Foundation.
Add a ribbon user interface to your WPF application.
For more information, see the Ribbon control.
Display large sets of grouped data in WPF applications more quickly.
For more information, see What's New in WPF Version 4.5.
Create modern HTTP applications more efficiently by using the new programming interfaces.
For more information, see the new System.Net.Http and System.Net.Http.Headers namespaces.
For more information, see What's New in the .NET Framework 4.5.
Ensure that the logos and splash screen for your Windows Store app will look good in a variety of screen resolutions.
For more information, see Optimizing images for different screen resolutions (Windows Store Apps).
Find and troubleshoot memory usage issues in Windows Store apps.
You can use the JavaScript Memory Analyzer to find memory leaks and to help identify their causes. For more information, see Analyzing memory usage in Windows Store apps (JavaScript).
Create code maps from the code editor.
By scanning code maps that appear next to the code editor, you can easily find your place in your code, navigate around your code, and identify relationships throughout your code. For more information, see Visualize and Understand Code with Code Maps in Visual Studio.
Target Windows XP when you build your C++ code.
You can use the Visual C++ compiler and libraries to target Windows XP and Windows Server 2003. For more information, see Configuring C++ 11 Programs for Windows XP.
Coded UI tests for SharePoint 2010 applications.
By including coded UI tests in a SharePoint application, you can verify whether the whole application, including its UI controls, is functioning correctly. You can also use coded UI tests to validate values and logic in the UI. For more information, see Testing SharePoint 2010 Applications with Coded UI Tests.
Web performance and load tests for SharePoint 2010 applications.
You can verify the performance and stress abilities of your SharePoint applications by configuring load tests to emulate conditions such as user loads, browser types, and network types. For more information, see Web Performance and Load Testing SharePoint 2010 and 2013 Applications.
Record diagnostic events for SharePoint 2010 solutions that are running outside Visual Studio.
By using the IntelliTrace collector, you can save user profile events, Unified Logging System (ULS) events, and IntelliTrace events to an .iTrace file. You can then start to diagnose solutions in production or other environments by opening the .iTrace file in Visual Studio Ultimate. For more information, see Collect IntelliTrace Data Outside Visual Studio with the Standalone Collector.
Find performance bottlenecks in your HTML, CSS, and JavaScript code.
You can troubleshoot symptoms like a lack of responsiveness in the UI or slow visual updates by using the UI Responsiveness Profiler. For more information, see Analyzing UI Responsiveness.
Create unit test projects for a Windows Phone app.
You can create unit test projects for a Windows Phone app and run them from Test Explorer. For more information, see Unit testing for Windows Phone apps.
Deploy a Windows Phone app at a command prompt.
You can also add the command to a script or a custom application. For more information, see How to deploy and run a Windows Phone app.
Precompile and sign company apps at a command prompt.
When you build a company app by using MSBuild, you can precompile and sign the app by using a command prompt, and you can add the commands to a script or custom application. For more information, see Preparing company apps for distribution.
For more information, see Description of Visual Studio 2012 Update 2.
For more information, see Description of Visual Studio 2012 Update 3. | http://msdn.microsoft.com/en-us/library/bb386063(VS.110).aspx | CC-MAIN-2014-52 | refinedweb | 5,941 | 59.09 |
In this section, you will learn how to retrieve data from the database and insert it into a file.
Description of code:
This is a simple task. First, we retrieve the values from the database and store them in an ArrayList. Then we call the writeToFile() method, passing the ArrayList and the file path as parameters. This method writes the ArrayList values to the specified file.
Here is the code:
import java.io.*;
import java.sql.*;
import java.util.*;

class InsertToFile {
    private static void writeToFile(java.util.List<String> list, String path) {
        BufferedWriter out = null;
        try {
            File file = new File(path);
            out = new BufferedWriter(new FileWriter(file, true));
            for (String s : list) {
                out.write(s);
                out.newLine();
            }
            out.close();
        } catch (IOException e) {
        }
    }

    public static void main(String arg[]) {
        java.util.List<String> list = new ArrayList<String>();
        try {
            Connection con = null;
            Class.forName("com.mysql.jdbc.Driver");
            con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/register", "root", "root");
            Statement st = con.createStatement();
            ResultSet rs = st.executeQuery("Select * from student");
            while (rs.next()) {
                list.add(rs.getString(1) + " " + rs.getString(2) + " "
                    + rs.getString(3) + " " + rs.getString(4));
            }
            writeToFile(list, "student.txt");
        } catch (Exception e) {
        }
    }
}
Using the above code, you can retrieve the values from the database and store them in the file.
Installing SciPy
SciPy is the scientific Python library and is closely related to NumPy. In fact, SciPy and NumPy used to be one and the same project many years ago. In this recipe, we will install SciPy.
How to do it...
In this recipe, we will go through the steps for installing SciPy.
Installing from source: If you have Git installed, you can clone the SciPy repository using the following command:
git clone
python setup.py build
python setup.py install --user
This installs to your home directory and requires Python 2.6 or higher.
Before building, you will also need to install the following packages on which SciPy depends:
BLAS and LAPACK libraries
C and Fortran compilers
There is a chance that you have already installed this software as a part of the NumPy installation.
Installing SciPy on Linux: Most Linux distributions have SciPy packages.
Installing SciPy on Mac OS X: Apple Developer Tools (XCode) is required, because it contains the BLAS and LAPACK libraries. It can be found either in the App Store, or on the installation DVD that came with your Mac, or you can get the latest version from the Apple Developer Connection website. Make sure that everything, including all the optional packages, is installed.
You probably already have a Fortran compiler installed for NumPy. The binaries for gfortran can be found online.
Installing SciPy using easy_install or pip: Install with either of the following two commands:
sudo pip install scipy
easy_install scipy
Installing on Windows: If you have Python installed already, the preferred method is to download and use the binary distribution. Alternatively, you may want to install the Enthought Python distribution, which comes with other scientific Python software packages.
Check your installation: Check the SciPy installation with the following code:
import scipy
print scipy.__version__
print scipy.__file__
This should print the correct SciPy version.
How it works...
Most package managers will take care of any dependencies for you. However, in some cases, you will need to install them manually. Unfortunately, this is beyond the scope of this book. If you run into problems, you can ask for help at:
The #scipy IRC channel on freenode, or
The SciPy mailing lists
Installing PIL
PIL, the Python imaging library, is a prerequisite for the image processing recipes in this article.
How to do it...
Let's see how to install PIL.
Installing PIL on Windows: Install using the Windows executable from the PIL website.
Installing on Debian or Ubuntu: On Debian or Ubuntu, install PIL using the following command:
sudo apt-get install python-imaging
Installing with easy_install or pip: At the time of writing this book, it appeared that the package managers of Red Hat, Fedora, and CentOS did not have direct support for PIL. Therefore, please follow this step if you are using one of these Linux distributions.
Install with either of the following commands:
easy_install PIL
sudo pip install PIL
Resizing images
In this recipe, we will load a sample image of Lena, which is available in the SciPy distribution, into an array. The picture in question is completely safe for work.
We will resize the image using the repeat function. This function repeats an array, which in practice means resizing the image by a certain factor.
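Before applying repeat to a full image, it helps to see what it does on a small scale. The following standalone sketch uses a tiny 2x2 array; repeating along each axis scales it up exactly as it will scale up the Lena image:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])

# Repeat each row twice (axis=0), then each column three times (axis=1).
resized = a.repeat(2, axis=0).repeat(3, axis=1)

print(resized.shape)  # (4, 6)
print(resized)
# [[1 1 1 2 2 2]
#  [1 1 1 2 2 2]
#  [3 3 3 4 4 4]
#  [3 3 3 4 4 4]]
```

Note that repeat duplicates whole rows and columns, so it enlarges an image by integer factors without any interpolation.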
Getting ready
A prerequisite for this recipe is to have SciPy, Matplotlib, and PIL installed.
How to do it...
Load the Lena image into an array.
SciPy has a lena function , which can load the image into a NumPy array:
lena = scipy.misc.lena()
Some refactoring has occurred since version 0.10, so if you are using an older version, the correct code is:
lena = scipy.lena()
Check the shape.
Check the shape of the Lena array using the assert_equal function from the numpy.testing package—this is an optional sanity check test:
numpy.testing.assert_equal((LENA_X, LENA_Y), lena.shape)
Resize the Lena array.
Resize the Lena array with the repeat function. We give this function a resize factor in the x and y direction:
resized = lena.repeat(yfactor, axis=0).repeat(xfactor, axis=1)
Plot the arrays.
We will plot the Lena image and the resized image in two subplots that are a part of the same grid. Plot the Lena array in a subplot:
matplotlib.pyplot.subplot(211)
matplotlib.pyplot.imshow(lena)
The Matplotlib subplot function creates a subplot. This function accepts a 3-digit integer as the parameter, where the first digit is the number of rows, the second digit is the number of columns, and the last digit is the index of the subplot starting with 1. The imshow function shows images. Finally, the show function displays the end result.
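The 3-digit convention can be made concrete with a small helper. The function below is purely illustrative (it is not part of Matplotlib's API); it splits a code such as 211 into its rows, columns, and index parts:

```python
def parse_subplot_code(code):
    """Split a 3-digit subplot code into (rows, cols, index),
    mirroring how subplot(211) is read: 2 rows, 1 column, plot 1.
    Illustrative helper only, not a Matplotlib function."""
    rows, rem = divmod(code, 100)
    cols, index = divmod(rem, 10)
    return rows, cols, index

print(parse_subplot_code(211))  # (2, 1, 1) -- the Lena subplot
print(parse_subplot_code(212))  # (2, 1, 2) -- the resized subplot
```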
Plot the resized array in another subplot and display it. The index is now 2:
matplotlib.pyplot.subplot(212)
matplotlib.pyplot.imshow(resized)
matplotlib.pyplot.show()
The following screenshot is the result with the original image (first) and the resized image (second):
The following is the complete code for this recipe:
import scipy.misc
import sys
import matplotlib.pyplot
import numpy.testing

# This script resizes the Lena image from SciPy.
if len(sys.argv) != 3:
    print "Usage python %s yfactor xfactor" % (sys.argv[0])
    sys.exit()

# Load the Lena image into an array
lena = scipy.misc.lena()

# Lena's dimensions
LENA_X = 512
LENA_Y = 512

# Check the shape of the Lena array
numpy.testing.assert_equal((LENA_X, LENA_Y), lena.shape)

# Get the resize factors
yfactor = float(sys.argv[1])
xfactor = float(sys.argv[2])

# Resize the Lena array
resized = lena.repeat(yfactor, axis=0).repeat(xfactor, axis=1)

# Check the shape of the resized array
numpy.testing.assert_equal((yfactor * LENA_X, xfactor * LENA_Y), resized.shape)

# Plot the Lena array
matplotlib.pyplot.subplot(211)
matplotlib.pyplot.imshow(lena)

# Plot the resized array
matplotlib.pyplot.subplot(212)
matplotlib.pyplot.imshow(resized)
matplotlib.pyplot.show()
How it works...
The repeat function repeats arrays, which, in this case, resulted in changing the size of the original image. The Matplotlib subplot function creates a subplot. The imshow function shows images. Finally, the show function displays the end result.
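The effect of repeat-based resizing can be checked on a tiny made-up array (a 2 by 2 stand-in for an image, since the Lena sample is not needed for this):

```python
import numpy as np

# A tiny stand-in for an image: a 2x2 array of "pixel" values
a = np.array([[1, 2],
              [3, 4]])

# Repeat each row twice (axis=0) and each column three times (axis=1),
# mimicking lena.repeat(yfactor, axis=0).repeat(xfactor, axis=1)
resized = a.repeat(2, axis=0).repeat(3, axis=1)

print(resized.shape)
print(resized)
```

Each original pixel becomes a 2 by 3 block in the output, which is why the result looks blockier the larger the factors are.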
See also
The Installing SciPy recipe
The Installing PIL recipe
Creating views and copies
It is important to know when we are dealing with a shared array view, and when we have a copy of the array data. A slice, for instance, will create a view. This means that if you assign the slice to a variable and then change the underlying array, the value of this variable will change. We will create an array from the famous Lena image, copy the array, create a view, and, at the end, modify the view.
Getting ready
The prerequisites are the same as in the previous recipe.
How to do it...
Let's create a copy and views of the Lena array:
Create a copy of the Lena array:
acopy = lena.copy()
Create a view of the array:
aview = lena.view()
Set all the values of the view to 0 with a flat iterator:
aview.flat = 0
The end result is that only one of the images shows the Playboy model. The other ones get censored completely:
The following is the code of this tutorial showing the behavior of array views and copies:
import scipy.misc
import matplotlib.pyplot

lena = scipy.misc.lena()
acopy = lena.copy()
aview = lena.view()

# Plot the Lena array
matplotlib.pyplot.subplot(221)
matplotlib.pyplot.imshow(lena)

# Plot the copy
matplotlib.pyplot.subplot(222)
matplotlib.pyplot.imshow(acopy)

# Plot the view
matplotlib.pyplot.subplot(223)
matplotlib.pyplot.imshow(aview)

# Plot the view after changes
aview.flat = 0
matplotlib.pyplot.subplot(224)
matplotlib.pyplot.imshow(aview)
matplotlib.pyplot.show()
How it works...
As you can see, by changing the view at the end of the program, we changed the original Lena array. This resulted in having three blue (or black if you are looking at a black and white image) images—the copied array was unaffected. It is important to remember that views are not read-only.
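The difference between a copy and a view can be verified directly on a small array; this sketch does not need the Lena image at all:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
acopy = a.copy()   # independent data
aview = a.view()   # shares the same underlying buffer as a

# Changing the view changes the original array...
aview.flat = 0
print(a)       # all zeros now

# ...but the copy keeps the original values
print(acopy)
```

This is exactly the behavior seen in the recipe: the view and the original change together, while the copy stays intact.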
Flipping Lena
We will be flipping the SciPy Lena image—all in the name of science, of course, or at least as a demo. In addition to flipping the image, we will slice it and apply a mask to it.
How to do it...
The steps to follow are listed below:
Plot the flipped image.
Flip the Lena array around the vertical axis using the following code:
matplotlib.pyplot.imshow(lena[:,::-1])
Plot a slice of the image.
Take a slice out of the image and plot it. In this step, we will have a look at the shape of the Lena array. The shape is a tuple representing the dimensions of the array. The following code effectively selects the upper-left quadrant of the Playboy picture.
matplotlib.pyplot.imshow(lena[:lena.shape[0]/2, :lena.shape[1]/2])
Apply a mask to the image.
Apply a mask to the image by finding all the values in the Lena array that are even (this is just arbitrary for demo purposes). Copy the array and change the even values to 0. This has the effect of putting lots of blue dots (dark spots if you are looking at a black and white image) on the image:
mask = lena % 2 == 0
masked_lena = lena.copy()
masked_lena[mask] = 0
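The same masking pattern can be tried on a small made-up array to see exactly which elements the boolean mask touches:

```python
import numpy as np

img = np.arange(9).reshape(3, 3)  # a tiny stand-in for the Lena array

mask = img % 2 == 0          # True wherever the value is even
masked = img.copy()
masked[mask] = 0             # zero out the even values, as in the recipe

print(mask)
print(masked)
print(img)                   # the original is untouched because we copied first
```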
All these efforts result in a 2 by 2 image grid, as shown in the following screenshot:
The following is the complete code for this recipe:
import scipy.misc
import matplotlib.pyplot

# Load the Lena array
lena = scipy.misc.lena()

# Plot the Lena array
matplotlib.pyplot.subplot(221)
matplotlib.pyplot.imshow(lena)

# Plot the flipped array
matplotlib.pyplot.subplot(222)
matplotlib.pyplot.imshow(lena[:,::-1])

# Plot a slice array
matplotlib.pyplot.subplot(223)
matplotlib.pyplot.imshow(lena[:lena.shape[0]/2,:lena.shape[1]/2])

# Apply a mask
mask = lena % 2 == 0
masked_lena = lena.copy()
masked_lena[mask] = 0
matplotlib.pyplot.subplot(224)
matplotlib.pyplot.imshow(masked_lena)
matplotlib.pyplot.show()
See also
The Installing SciPy recipe
The Installing PIL recipe
Fancy indexing
In this tutorial, we will apply fancy indexing to set the diagonal values of the Lena image to 0. This will draw black lines along the diagonals, crossing it through, not because there is something wrong with the image, but just as an exercise. Fancy indexing is indexing that does not involve integers or slices; indexing with integers or slices is normal indexing.
How to do it...
We will start with the first diagonal:
Set the values of the first diagonal to 0.
To set the diagonal values to 0, we need to define two different ranges for the x and y values:
lena[range(xmax), range(ymax)] = 0
Set the values of the other diagonal to 0.
To set the values of the other diagonal, we require a different set of ranges, but the principles stay the same:
lena[range(xmax-1,-1,-1), range(ymax)] = 0
At the end, we get this image with the diagonals crossed off, as shown in the following screenshot:
The following is the complete code for this recipe:
import scipy.misc
import matplotlib.pyplot

# This script demonstrates fancy indexing by setting values
# on the diagonals to 0.

# Load the Lena array
lena = scipy.misc.lena()
xmax = lena.shape[0]
ymax = lena.shape[1]

# Fancy indexing
# Set values on diagonal to 0
# x 0-xmax
# y 0-ymax
lena[range(xmax), range(ymax)] = 0

# Set values on other diagonal to 0
# x xmax-0
# y 0-ymax
lena[range(xmax-1,-1,-1), range(ymax)] = 0

# Plot Lena with diagonal lines set to 0
matplotlib.pyplot.imshow(lena)
matplotlib.pyplot.show()
How it works...
We defined separate ranges for the x values and y values. These ranges were used to index the Lena array. Fancy indexing is performed based on an internal NumPy iterator object. The following three steps are performed:
The iterator object is created.
The iterator object gets bound to the array.
Array elements are accessed via the iterator.
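The diagonal trick is easy to verify on a small array of ones:

```python
import numpy as np

a = np.ones((4, 4), dtype=int)
xmax, ymax = a.shape

# Main diagonal: the index pairs are (0,0), (1,1), (2,2), (3,3)
a[range(xmax), range(ymax)] = 0

# Anti-diagonal: the index pairs are (3,0), (2,1), (1,2), (0,3)
a[range(xmax - 1, -1, -1), range(ymax)] = 0

print(a)
```

Because the two index sequences are paired element-wise, each statement writes to exactly `xmax` cells, not to a whole sub-grid.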
Indexing with a list of locations
Let's use the ix_ function to shuffle the Lena image. This function creates a mesh from multiple sequences.
How to do it...
We will start by randomly shuffling the array indices:
Shuffle array indices.
Create a random indices array with the shuffle function of the numpy.random module:
def shuffle_indices(size):
    arr = numpy.arange(size)
    numpy.random.shuffle(arr)
    return arr
Plot the shuffled indices:
matplotlib.pyplot.imshow(lena[numpy.ix_(xindices, yindices)])
What we get is a completely scrambled Lena image, as shown in the following screenshot:
The following is the complete code for the recipe:
import scipy.misc
import matplotlib.pyplot
import numpy.random
import numpy.testing

# Load the Lena array
lena = scipy.misc.lena()
xmax = lena.shape[0]
ymax = lena.shape[1]

def shuffle_indices(size):
    arr = numpy.arange(size)
    numpy.random.shuffle(arr)
    return arr

xindices = shuffle_indices(xmax)
numpy.testing.assert_equal(len(xindices), xmax)
yindices = shuffle_indices(ymax)
numpy.testing.assert_equal(len(yindices), ymax)

# Plot Lena
matplotlib.pyplot.imshow(lena[numpy.ix_(xindices, yindices)])
matplotlib.pyplot.show()
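What ix_ actually does can be seen on a 3 by 3 example: it builds an open mesh so that every row index is combined with every column index, instead of pairing them element-wise as plain fancy indexing would:

```python
import numpy as np

a = np.arange(9).reshape(3, 3)

rows = np.array([2, 0, 1])   # new row order
cols = np.array([1, 2, 0])   # new column order

# result[i, j] == a[rows[i], cols[j]]
shuffled = a[np.ix_(rows, cols)]

print(shuffled)
```

Every original value appears exactly once in the result; only the positions are scrambled, which is why the shuffled Lena image still contains all the original pixels.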
Indexing with booleans
Boolean indexing is indexing based on a boolean array and falls into the category of fancy indexing.
How to do it...
We will apply this indexing technique to an image:
Image with dots on the diagonal.
This is in some way similar to the Fancy indexing recipe, in this article. This time we select modulo 4 points on the diagonal of the image:
def get_indices(size):
    arr = numpy.arange(size)
    return arr % 4 == 0
Then we just apply this selection and plot the points:

lena1 = lena.copy()
lena1[get_indices(xmax), get_indices(ymax)] = 0
matplotlib.pyplot.subplot(211)
matplotlib.pyplot.imshow(lena1)
Set to 0 based on value.
Select array values between quarter and three-quarters of the maximum value and set them to 0:
lena2[(lena > lena.max()/4) & (lena < 3 * lena.max()/4)] = 0
The plot with the two new images will look like the following screenshot:
The following is the complete code for this recipe:
import scipy.misc
import matplotlib.pyplot
import numpy

# Load the Lena array
lena = scipy.misc.lena()
xmax = lena.shape[0]
ymax = lena.shape[1]

def get_indices(size):
    arr = numpy.arange(size)
    return arr % 4 == 0

# Plot Lena
lena1 = lena.copy()
lena1[get_indices(xmax), get_indices(ymax)] = 0
matplotlib.pyplot.subplot(211)
matplotlib.pyplot.imshow(lena1)

lena2 = lena.copy()

# Between quarter and 3 quarters of the max value
lena2[(lena > lena.max()/4) & (lena < 3 * lena.max()/4)] = 0
matplotlib.pyplot.subplot(212)
matplotlib.pyplot.imshow(lena2)
matplotlib.pyplot.show()
How it works...
Because boolean indexing is a form of fancy indexing, the way it works is basically the same. This means that indexing happens with the help of a special iterator object.
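A small made-up example shows how such combined boolean conditions behave (note the & operator and the parentheses; Python's and keyword does not work element-wise on arrays):

```python
import numpy as np

a = np.arange(16)

# Select values strictly between a quarter and three quarters of the maximum.
# Each comparison must be parenthesized before combining with &.
between = (a > a.max() / 4) & (a < 3 * a.max() / 4)

b = a.copy()
b[between] = 0   # zero out only the selected values

print(int(between.sum()), b)
```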
See also
The Fancy Indexing recipe
Stride tricks for Sudoku
The ndarray class has a strides field, which is a tuple indicating the number of bytes to step in each dimension when going through an array. Let's apply some stride tricks to the problem of splitting a Sudoku puzzle into the 3 by 3 squares of which it is composed.
For more information see.
How to do it...
Define the Sudoku puzzle array.
Let's define the Sudoku puzzle array. This one is filled with the contents of an actual, solved Sudoku puzzle:

sudoku = numpy.array([
    [2, 8, 7, 1, 6, 5, 9, 4, 3],
    [9, 5, 4, 7, 3, 2, 1, 6, 8],
    [6, 1, 3, 8, 4, 9, 7, 5, 2],
    [8, 7, 9, 6, 5, 1, 2, 3, 4],
    [4, 2, 1, 3, 9, 8, 6, 7, 5],
    [3, 6, 5, 4, 2, 7, 8, 9, 1],
    [1, 9, 8, 5, 7, 3, 4, 2, 6],
    [5, 4, 2, 9, 1, 6, 3, 8, 7],
    [7, 3, 6, 2, 8, 4, 5, 1, 9]
])
Calculate the strides. The itemsize field of ndarray gives us the size of a single array element in bytes. Using the itemsize, calculate the strides:
strides = sudoku.itemsize * numpy.array([27, 3, 9, 1])
Split into squares.
Now we can split the puzzle into squares with the as_strided function of the numpy.lib.stride_tricks module:
squares = numpy.lib.stride_tricks.as_strided(
    sudoku, shape=shape, strides=strides)
print(squares)
This prints separate Sudoku squares:
[[[[2 8 7]
   [9 5 4]
   [6 1 3]]

  [[1 6 5]
   [7 3 2]
   [8 4 9]]

  [[9 4 3]
   [1 6 8]
   [7 5 2]]]


 [[[8 7 9]
   [4 2 1]
   [3 6 5]]

  [[6 5 1]
   [3 9 8]
   [4 2 7]]

  [[2 3 4]
   [6 7 5]
   [8 9 1]]]


 [[[1 9 8]
   [5 4 2]
   [7 3 6]]

  [[5 7 3]
   [9 1 6]
   [2 8 4]]

  [[4 2 6]
   [3 8 7]
   [5 1 9]]]]
The following is the complete source code for this recipe:
import numpy

sudoku = numpy.array([
    [2, 8, 7, 1, 6, 5, 9, 4, 3],
    [9, 5, 4, 7, 3, 2, 1, 6, 8],
    [6, 1, 3, 8, 4, 9, 7, 5, 2],
    [8, 7, 9, 6, 5, 1, 2, 3, 4],
    [4, 2, 1, 3, 9, 8, 6, 7, 5],
    [3, 6, 5, 4, 2, 7, 8, 9, 1],
    [1, 9, 8, 5, 7, 3, 4, 2, 6],
    [5, 4, 2, 9, 1, 6, 3, 8, 7],
    [7, 3, 6, 2, 8, 4, 5, 1, 9]
])

shape = (3, 3, 3, 3)
strides = sudoku.itemsize * numpy.array([27, 3, 9, 1])
squares = numpy.lib.stride_tricks.as_strided(
    sudoku, shape=shape, strides=strides)
print(squares)
How it works...
We applied stride tricks to decompose a Sudoku puzzle in its constituent 3 by 3 squares. The strides tell us how many bytes we need to skip at each step when going through the Sudoku array.
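The same trick can be checked on a smaller made-up example, splitting a 4 by 4 array into four 2 by 2 blocks. Keep in mind that as_strided performs no bounds checking, so wrong strides silently read arbitrary memory:

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(16).reshape(4, 4)

# Moving one block to the right skips 2 items; moving one block down
# skips 2 whole rows (8 items). Within a block, one row down skips 4
# items and one column right skips 1 item.
strides = a.itemsize * np.array([8, 2, 4, 1])
blocks = as_strided(a, shape=(2, 2, 2, 2), strides=strides)

print(blocks[0, 0])  # top-left 2x2 block
print(blocks[1, 1])  # bottom-right 2x2 block
```

Multiplying by a.itemsize keeps the example correct regardless of whether the platform's default integer is 4 or 8 bytes wide.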
Broadcasting arrays
Without knowing it, you might have broadcasted arrays. In a nutshell, NumPy tries to perform an operation even though the operands do not have the same shape. In this recipe, we will multiply an array and a scalar. The scalar is "extended" to the shape of the array operand and then the multiplication is performed. We will download an audio file and make a new version that is quieter.
How to do it...
Let's start by reading a WAV file:
Reading a WAV file.
We will use standard Python code to download an audio file of Austin Powers called "Smashing, baby". SciPy has a wavfile module, which allows you to load sound data or generate WAV files. If SciPy is installed, then we should already have this module. The read function returns a data array and sample rate. In this example, we only care about the data:
sample_rate, data = scipy.io.wavfile.read(WAV_FILE)
Plot the original WAV data.
Plot the original WAV data with Matplotlib. Give the subplot the title Original.
matplotlib.pyplot.subplot(2, 1, 1)
matplotlib.pyplot.title("Original")
matplotlib.pyplot.plot(data)
Create a new array.
Now we will use NumPy to make a quieter audio sample. It's just a matter of creating a new array with smaller values by multiplying with a constant. This is where the magic of broadcasting occurs. At the end, we need to make sure that we have the same data type as in the original array, because of the WAV format:
newdata = data * 0.2
newdata = newdata.astype(numpy.uint8)
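The broadcasting and the dtype round-trip can be seen on a tiny made-up sample, where a few arbitrary values stand in for 8-bit audio data:

```python
import numpy as np

# Stand-in for 8-bit audio samples
data = np.array([100, 200, 50], dtype=np.uint8)

# The scalar 0.2 is broadcast across the whole array. Multiplying a
# uint8 array by a float promotes the result to float64...
newdata = data * 0.2
print(newdata.dtype)

# ...so we cast back to uint8, which the WAV format expects here
newdata = newdata.astype(np.uint8)
print(newdata)
```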
Write to a WAV file.
This new array can be written into a new WAV file as follows:
scipy.io.wavfile.write("quiet.wav", sample_rate, newdata)
Plot the new WAV data.
Plot the new data array with Matplotlib:
matplotlib.pyplot.subplot(2, 1, 2)
matplotlib.pyplot.title("Quiet")
matplotlib.pyplot.plot(newdata)
matplotlib.pyplot.show()
The result is a plot of the original WAV file data and a new array with smaller values, as shown in the following screenshot:
The following is the complete code for this recipe:
import scipy.io.wavfile
import matplotlib.pyplot
import urllib2
import numpy

response = urllib2.urlopen(' austinpowers/smashingbaby.wav')
print response.info()
WAV_FILE = 'smashingbaby.wav'
filehandle = open(WAV_FILE, 'w')
filehandle.write(response.read())
filehandle.close()

sample_rate, data = scipy.io.wavfile.read(WAV_FILE)
print "Data type", data.dtype, "Shape", data.shape

matplotlib.pyplot.subplot(2, 1, 1)
matplotlib.pyplot.title("Original")
matplotlib.pyplot.plot(data)

newdata = data * 0.2
newdata = newdata.astype(numpy.uint8)
print "Data type", newdata.dtype, "Shape", newdata.shape

scipy.io.wavfile.write("quiet.wav", sample_rate, newdata)

matplotlib.pyplot.subplot(2, 1, 2)
matplotlib.pyplot.title("Quiet")
matplotlib.pyplot.plot(newdata)
matplotlib.pyplot.show()
Summary
NumPy has very efficient arrays that are easy to use thanks to their powerful indexing mechanisms. Much of the reputation of NumPy arrays comes from this ease of indexing. In this article, we demonstrated advanced indexing tricks using images.
Selenium Python – Concept of Parameterization
Sometimes, we come across situations where we need to execute the same test case, but with every execution, we need to use a different data set. Or sometimes, we need to create test data prior to our test suite execution. To resolve all these requirements, we should be familiar with the concept of parameterization.
Structure
- Why do we need parameterization
- What is parameterization
- Creation of test data file
- Parameterizing and login logout scenario
Objective
In this chapter, we will learn how we can use a text file to pass data to our tests and run them as many times as there are rows in the file. With this, we will understand the concept of parameterization.
Test data file
Test cases generally need test data for execution. When we write scripts to automate a test, we may end up hard-coding the test data within the test scripts. The drawback of this approach is that if the test data needs to change for a test execution cycle, we have to make changes at the test script level, which is error-prone. A good test script therefore keeps the test data outside the code.
To achieve this, we need to parameterize our tests. Here, we replace the hard-coded values in the test with variables. At the time of execution, these variables are replaced by values picked from external data sources. These data sources could be text files, Excel sheets, databases, JSON, XML, and others.
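The idea can be sketched without Selenium at all. In the snippet below, check_login and the credential rows are made-up stand-ins; the point is only that the test logic is written once while the data lives outside it:

```python
# A minimal sketch of parameterization: one piece of test logic,
# driven by a list of data rows. The credentials are fictional.
def check_login(email, password):
    # Stand-in for the real Selenium login steps
    return email == "bpb@bpb.com" and password == "bpb@123"

rows = [
    ("bpb@bpb.com", "bpb@123"),    # expected: valid
    ("abc@demo.com", "demo123"),   # expected: invalid
]

results = [check_login(email, pwd) for email, pwd in rows]
print(results)
```

In a real suite, rows would be read from a file or database instead of being declared inline.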
Parameterization and login logout scenario
Selenium doesn’t provide any built-in provision to parameterize tests. We write the code to parameterize a test using the programming language. In this chapter, we will see how we can parameterize our test using a CSV file and an Excel file. The scenario we are going to pick is the login logout scenario, and we will parameterize it using two datasets: the first dataset is a valid username and password combination, and the second is a bad username and password combination.
The data to be picked for the test is available in a file called login.csv, which is kept in the datasets folder in the project. Refer to the following screenshot:
The dataset file login.csv has the following data:
bpb@bpb.com,bpb@123
abc@demo.com,demo123
In a CSV file, the data is separated by a comma. The test script provided below reads the data using Python file handling commands and splits it based on a comma. It then passes these values for username and password in the script. The following test iterates twice, which is equal to the number of rows in this file:
from selenium import webdriver
import unittest

class Login(unittest.TestCase):

    def setUp(self):
        self.driver = webdriver.Chrome(executable_path=r"D:\Eclipse\BPB\seleniumpython\seleniumpython\drivers\chromedriver.exe")
        self.driver.implicitly_wait(30)
        self.base_url = ""

    def test_login(self):
        driver = self.driver
        driver.get(self.base_url)
        file = open(r"D:\Eclipse\BPB\seleniumpython\seleniumpython\datasets\login.csv", "r")
        for line in file:
            driver.find_element_by_link_text("My Account").click()
            data = line.split(",")
            print(data)
            driver.find_element_by_name("email_address").send_keys(data[0])
            driver.find_element_by_name("password").send_keys(data[1].strip())
            driver.find_element_by_id("tab1").click()
            if driver.page_source.find("My Account Information") != -1:
                driver.find_element_by_link_text("log off").click()
                driver.find_element_by_link_text("continue").click()
                print("Valid user credentials")
            else:
                print("Bad user credentials")
        file.close()

    def tearDown(self):
        self.driver.quit()

if __name__ == "__main__":
    unittest.main()
In the preceding program, we have written the login logout scenario, but it is wrapped in a for loop. This loop reads the file contents line by line until the end of the file is reached. So the entire scenario executes as many times as there are rows in the file.
As we execute the test, we will see that it passes for the first dataset picked from the file, as it is a valid username and password combination. But for the second dataset, the test reports a failure. The preceding test will execute for as many rows as are available in our login.csv dataset file.
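As a side note, the manual line.split(",") used above works for this simple file, but Python's standard csv module handles quoting and commas inside fields more robustly. The in-memory StringIO below stands in for opening login.csv:

```python
import csv
import io

# In the real script this would be open("login.csv");
# a StringIO with the same contents stands in here.
data = io.StringIO("bpb@bpb.com,bpb@123\nabc@demo.com,demo123\n")

rows = list(csv.reader(data))
print(rows)
```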
Conclusion
In this chapter we have learned how to use test data for our tests, and the importance of keeping test data separate from the actual tests. We also saw how we can read data from a CSV file or a text-based file, and pass them to our test code.
In our next chapter, we will learn how to handle different types of web elements.
In my previous post I briefly discussed different techniques for handling missing data when building a Machine Learning model.
Below is a Python script for treating missing data in the Ames dataset.
(Download the dataset from here)
As usual, open a Jupyter notebook and import our libraries to start
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib import rcParams
rcParams['figure.figsize'] = 12,10
import seaborn as sb
Read the dataset
train = pd.read_csv('C:/../houseprices_train.csv')
test = pd.read_csv('C:/../houseprices_test.csv')
print(train.shape, test.shape)
(1460, 81) (1459, 80)
Add a source column to each dataset for identification, and then combine them
#Adding source
train['source'] = 'train'
test['source'] = 'test'
print(train.shape, test.shape)
(1460, 82) (1459, 81)
Combine datasets
df = pd.concat([train, test])
df.shape
(2919, 82)
Find Missing Value Columns
- Let's find the percentage of missing values for all the columns that have missing values.
- Python represents missing data as NaN, and missing values are identified by the isnull() function.
#percentage of missing values
null_df = df.columns[df.isnull().any()]
num = df[null_df].isnull().sum()
round(num/2919, 2)
Alley 0.93
BsmtCond 0.03
BsmtExposure 0.03
BsmtFinSF1 0.00
BsmtFinSF2 0.00
BsmtFinType1 0.03
BsmtFinType2 0.03
BsmtFullBath 0.00
BsmtHalfBath 0.00
BsmtQual 0.03
BsmtUnfSF 0.00
Electrical 0.00
Exterior1st 0.00
Exterior2nd 0.00
Fence 0.80
FireplaceQu 0.49
Functional 0.00
GarageArea 0.00
GarageCars 0.00
GarageCond 0.05
GarageFinish 0.05
GarageQual 0.05
GarageType 0.05
GarageYrBlt 0.05
KitchenQual 0.00
LotFrontage 0.17
MSZoning 0.00
MasVnrArea 0.01
MasVnrType 0.01
MiscFeature 0.96
PoolQC 1.00
SalePrice 0.50
SaleType 0.00
TotalBsmtSF 0.00
Utilities 0.00
dtype: float64
That is a long list of columns that have missing values.
In certain cases, columns will have zero values which can be considered as missing values. Hence we will also look at columns having data as zero and express that as a percentage of the total data.
#percentage of zero values for each numeric variable
zero_df = df.columns[(df == 0).any()]
num = (df[zero_df] == 0).sum()
round(num/2919, 2)
2ndFlrSF 0.57
3SsnPorch 0.99
BedroomAbvGr 0.00
BsmtFinSF1 0.32
BsmtFinSF2 0.88
BsmtFullBath 0.58
BsmtHalfBath 0.94
BsmtUnfSF 0.08
EnclosedPorch 0.84
Fireplaces 0.49
FullBath 0.00
GarageArea 0.05
GarageCars 0.05
HalfBath 0.63
KitchenAbvGr 0.00
LowQualFinSF 0.99
MasVnrArea 0.60
MiscVal 0.96
OpenPorchSF 0.44
PoolArea 1.00
ScreenPorch 0.91
TotalBsmtSF 0.03
WoodDeckSF 0.52
dtype: float64
Now that we have all the columns with missing and zero values data, we will use some basic techniques to impute them with some appropriate values.
Drop unwanted columns
- First I would like to drop the features that have more than 90% missing data because their contribution to the model is very insignificant.
- Alley, MiscFeature, PoolQC – these have more than 90% missing data
- BsmtHalfBath, LowQualFinSF, MiscVal, PoolArea – These are numeric features which have more than 90% data as zeros.
- We will drop all these columns except for BsmtHalfBath, 3SsnPorch, ScreenPorch, which we will look at in later stages.
drop_col = ['Alley','LowQualFinSF', 'MiscFeature', 'MiscVal','PoolArea', 'PoolQC']
df.drop(drop_col, axis=1, inplace=True)
df.shape
(2919, 73)
Imputing missing values for – BsmtCond, BsmtExposure, BsmtFinType1, BsmtFinType2, BsmtQual
- These features seem to have equal number of missing values
- Description for above variables is given as ‘NA’ = ‘No Basement’.
- First we will change NA for the above fields to something Python can read instead of considering them as missing values
col = ['BsmtCond','BsmtExposure','BsmtFinType1', 'BsmtFinType2', 'BsmtQual']
for i in col:
    Nan_rows = df[i].isnull()
    df.loc[Nan_rows, i] = 'None'
    print ('Null values of {} is: {:d}' .format(i, df[i].isnull().sum()))
Null values of BsmtCond is: 0
Null values of BsmtExposure is: 0
Null values of BsmtFinType1 is: 0
Null values of BsmtFinType2 is: 0
Null values of BsmtQual is: 0
Let's examine BsmtFinSF1 and BsmtFinSF2
zero_bfsf1 = (df['BsmtFinSF1'] == 0)
print (df.loc[zero_bfsf1, ].groupby('BsmtFinType1').BsmtFinSF1.count())

zero_bfsf2 = (df['BsmtFinSF2'] == 0)
print (df.loc[zero_bfsf2, ].groupby('BsmtFinType2').BsmtFinSF1.count())
BsmtFinType1
None 78
Unf 851
Name: BsmtFinSF1, dtype: int64
BsmtFinType2
BLQ 1
None 78
Unf 2492
Name: BsmtFinSF1, dtype: int64
- BsmtFinSF1 and BsmtFinSF2 can have zero values where BsmtFinType1 or BsmntFinType2 are NA or Unf (unfinished)
- I would assume that to be treated as no basement
- Hence changing unf values to None for bsmtFinType1 and BsmntFinType2
Unf_rows = (df['BsmtFinType1'] == 'Unf')
df.loc[Unf_rows,'BsmtFinType1'] = 'None'
print('Valuecount of BsmtFinType1: ', df['BsmtFinType1'].value_counts())

Unf_rows = (df['BsmtFinType2'] == 'Unf')
df.loc[Unf_rows,'BsmtFinType2'] = 'None'
print('Valuecount of BsmtFinType2: ', df['BsmtFinType2'].value_counts())
Valuecount of BsmtFinType1: None 930
GLQ 849
ALQ 429
Rec 288
BLQ 269
LwQ 154
Name: BsmtFinType1, dtype: int64
Valuecount of BsmtFinType2: None 2573
Rec 105
LwQ 87
BLQ 68
ALQ 52
GLQ 34
Name: BsmtFinType2, dtype: int64
- We will look at zero values for BsmtUnfSF
zero_df = (df['BsmtUnfSF'] == 0)
df.loc[zero_df, ].groupby('BsmtCond').BsmtUnfSF.count()
BsmtCond
None 79
Name: BsmtUnfSF, dtype: int64
This shows that 79 are genuine zero values, as there is no basement, and we need to impute the remaining ones
print('Zero Values for BsmtUnfSF Before: ', df.loc[zero_df,'BsmtUnfSF'].count())
zero_df = ((df['BsmtUnfSF'] == 0) & (df['BsmtCond'] != 'None'))
df.loc[zero_df,'BsmtUnfSF'] = df['BsmtUnfSF'].mean()
print('Zero Values for BsmtUnfSF After: ', df.loc[zero_df,'BsmtUnfSF'].count())
Zero Values for BsmtUnfSF Before: 241
Zero Values for BsmtUnfSF After: 162
Garage Parameters
- Description says 'NA' or null values on GarageCond mean 'No Garage', which is the same for the other garage features.
- Numeric Garage fields ‘GarageArea’, GarageCars and GarageYrBlt should be made zero for rows where GarageCond is null
null_g = df['GarageCond'].isnull()
df.loc[null_g, ['GarageCond','GarageFinish','GarageQual','GarageType']] = 'None'
df.loc[null_g, ['GarageArea','GarageCars','GarageYrBlt']] = 0
Imputing FireplaceQu
- Null values mean no Fireplace, hence we will fill the with ‘None’.
- Also zero values in Fireplaces correspond to null value rows of FireplaceQu which is correct.
null_f = df['FireplaceQu'].isnull()
df.loc[null_f, 'FireplaceQu'] = 'None'
Imputing Fence and LotFrontage
- Null values in Fence mean 'No Fence' according to the description, hence we impute with 'None'
- LotFrontage has null values, which we will impute with the mean
df['Fence'].fillna('None', inplace=True)
df['LotFrontage'].fillna(df['LotFrontage'].mean(), inplace=True)
print ('Null values of Fence: ', df['Fence'].isnull().sum())
print ('Null values of LotFrontage: ', df['LotFrontage'].isnull().sum())
Null values of Fence: 0
Null values of LotFrontage: 0
Imputing MasVnrArea and MasVnrType
- All Null values of MasVnrType will be filled with ‘None’ and corresponding MasVnrArea will be zero
- MasVnrType with MasVnrArea are related. If area is zero then type should be ‘None’ according to description
- Similarly of type is ‘None’ then Area should be zero
- Hence we will update MasVnrType as ‘None’ where MasVnrArea is zero
- Also all Null values of Type will be filled with 'None' and the corresponding Area will be zero
null_m = df['MasVnrType'].isnull()
df.loc[null_m, 'MasVnrArea'] = 0
df.loc[null_m, 'MasVnrType'] = 'None'

null_m = (df['MasVnrArea'] == 0)
df.loc[null_m, 'MasVnrType'] = 'None'

null_m = (df['MasVnrType'] == 'None')
df.loc[null_m, 'MasVnrArea'] = 0
Now we will do some FEATURE ENGINEERING and try to impute missing values
- We will combine OpenPorchSF, EnclosedPorch, 3SsnPorch, and ScreenPorch into a single variable.
- Instead of porch area, we will mark the new variable as 1 for Porch exists and 0 for No Porch
df['Porch'] = df['OpenPorchSF']+df['EnclosedPorch']+df['3SsnPorch']+df['ScreenPorch']
df['Porch'] = df['Porch'].astype(bool).astype(int)
df['Porch'].value_counts()
1 2046
0 873
Name: Porch, dtype: int64
- We will now combine FullBath, BsmtFullBath and HalfBath, BsmtHalfBath to show the total number of Full and Half Bathrooms
df['FullBath'] = df['FullBath'] + df['BsmtFullBath']
df['HalfBath'] = df['HalfBath'] + df['BsmtHalfBath']
Since we created new features from existing ones, let's drop the originals
drop_col = ['BsmtFullBath','BsmtHalfBath', '3SsnPorch', 'EnclosedPorch', 'OpenPorchSF','ScreenPorch']
df.drop(drop_col, axis=1, inplace=True)
df.shape
(2919, 71)
Now we are sure we have addressed the zero value features and most of the missing values in other features.
Let's see what else remains to complete our missing data exercise
null_df = df.columns[df.isnull().any()]
num = df[null_df].isnull().sum()
num
BsmtFinSF1 1
BsmtFinSF2 1
BsmtUnfSF 1
Electrical 1
Exterior1st 1
Exterior2nd 1
FullBath 2
Functional 2
HalfBath 2
KitchenQual 1
MSZoning 4
SalePrice 1459
SaleType 1
TotalBsmtSF 1
Utilities 2
dtype: int64
- SalePrice is the feature we need to predict, hence ignore that
- There are some continuous & categorical features left with missing data.
- For Continuous variables, we will impute with MEAN and for categorical data we will impute with MODE
cont_col = ['BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF','TotalBsmtSF']
cat_col = ['Electrical', 'Exterior1st', 'Exterior2nd', 'FullBath', 'Functional', 'HalfBath', 'KitchenQual', 'MSZoning','SaleType','Utilities']

for i in cont_col:
    df[i].fillna(df[i].mean(), inplace=True)
    print('Null values left for {} is: {:d}'.format(i, df[i].isnull().sum()))

for j in cat_col:
    df[j].fillna(df[j].mode()[0], inplace=True)
    print('Null values left for {} is: {:d}'.format(j, df[j].isnull().sum()))
Null values left for BsmtFinSF1 is: 0
Null values left for BsmtFinSF2 is: 0
Null values left for BsmtUnfSF is: 0
Null values left for TotalBsmtSF is: 0
Null values left for Electrical is: 0
Null values left for Exterior1st is: 0
Null values left for Exterior2nd is: 0
Null values left for FullBath is: 0
Null values left for Functional is: 0
Null values left for HalfBath is: 0
Null values left for KitchenQual is: 0
Null values left for MSZoning is: 0
Null values left for SaleType is: 0
Null values left for Utilities is: 0
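One subtlety in the imputation code above is the [0] after mode(): unlike mean(), pandas' mode() returns a Series rather than a scalar, because there can be ties for the most frequent value. A tiny made-up example:

```python
import pandas as pd

# A small stand-in for a categorical column with a missing value
s = pd.Series(["RL", "RM", "RL", None])

# mode() ignores missing values and returns a Series of the most
# frequent value(s); [0] picks the first one to pass to fillna
m = s.mode()
print(type(m))
filled = s.fillna(m[0])
print(filled.tolist())
```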
Now we have cleaned data without any missing values, ready to be processed further for building the ML model. Let's save the cleaned data, which we will use in the next steps of building a Machine Learning model.
The dataset is not yet completely ready, as there are some outliers (extreme values) in the data, and it might also need more Feature Engineering, which I will save for another post on another day…!
-Hari Mindi | http://dba-datascience.com/missing-data-treatment-python-script/ | CC-MAIN-2020-40 | refinedweb | 1,672 | 51.55 |
adopt APR-style versioning and compatibility guidelines for C API
-----------------------------------------------------------------
Key: ZOOKEEPER-296
URL:
Project: Zookeeper
Issue Type: Improvement
Components: c client
Reporter: Chris Darroch
Per a recent discussion on the ZooKeeper users mailing list regarding the API/ABI change introduced
in 3.1.0 by ZOOKEEPER-255, I would suggest going forwards that the project adopt the versioning
and compatibility guidelines of the Apache Portable Runtime (APR) project. These are well
documented here:
I'd also suggest adopting the parallel installation procedure used by APR. This would mean
that, for example, as of version 4.0.0 the ZooKeeper C library would be installed as libzookeeper_mt-4.so
and the include files would be installed as zookeeper-4/zookeeper.h, etc.
The namespace cleanup I suggest in ZOOKEEPER-295 would fit well with such a change.
I should also point out the (rather mysterious) intent of the GNU libtool versioning system
for libraries; while many projects seem to disregard it, it does have some value:
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online. | http://mail-archives.apache.org/mod_mbox/hadoop-zookeeper-dev/200902.mbox/%3C65640655.1233863759565.JavaMail.jira@brutus%3E | CC-MAIN-2015-06 | refinedweb | 187 | 52.09 |
A little over a month ago I wrote an article entitled “An Example of Horrible Customer Service by Telus” describing my recent experiences with Telus and my cell phone. Today I’m writing the follow-up of what actually ended up happening.
Many of you may not know this but a long time ago there was a small company called Clearnet that had excellent customer service and I was proud to be one of their customers. I referred a lot of people to them. It didn’t take long for them to grow and get acquired by Telus, another up and coming company at the time. They too had great customer service back then. However today things have changed a lot. Telus has grown into another large telecommunication company and changed its ways. It’s now acting more or less like all the other large major telecommunications companies, which is unfortunate. I’m only mentioning this because I want you to know that I didn’t always think this way about Telus.
Getting back to my adventure, and without re-iterating it (you can find it here), I phoned them again to get another reimbursement on my second data payment which was coming due. I initially thought it was supposed to be around $300 but it ended up being $430. The good news is that Telus did reimburse that amount because of the precedent from my previous reimbursement and the fact that I completely turned off my internet (data) access. That’s the good news.
Sadly the story doesn’t quite end there. They did admit that they can’t cancel my data plan (even at a penalty) because I bought the phone at a discount (with the data plan). Ok, I can understand this but I wouldn’t have bought this phone or the data plan had I known ahead of time about the issues I would face. So I really lose out here to the tune of about $250 because I can’t cancel a plan I can’t use!
The worse part is that Telus customer service couldn’t help me terminate the data plan no matter what. The problem is that every time I shut off the phone and restart it, the data plan re-activates itself automatically and the phone tries to reconnect! So if I don’t pay attention just one time, I’ll get hit with all kinds of bandwidth charges (approx $12-25/day based on the previous charges). Telus customer service said they couldn’t prevent this, I just have to be vigilant with my phone. What!?! You will continue to charge me for a plan I can’t use and on top of that you can’t disable it from my phone so that it doesn’t accidentally come back on and charge me. I’m not impressed!
This is nowhere near a win-win situation. This is a clear lose-lose. Sure Telus might make some money now ($250 for the data plan over a year). But once my plan is over I will never use their service again. Not only that but I will never refer them. This article and my prior one have already reached a large number of people who will never use them either because of this situation. On top of that as my company LandlordMax grows and the need to purchase more cell phones increases do you think I will use Telus? Not likely. Yes they made $250 today but they lost a lot of money over the total lifetime of their customer (me). Had they correctly resolved this issue they could have made a lot more money. Not only that but imagine the value of my publishing a very positive experience with them on this blog…
Let me push it one notch further and give you an example of the value of customer service. Recently we had a customer that purchased LandlordMax but didn’t receive their license information by email. What happened is that it was sent to a Hotmail email address and was probably incorrectly sent to the junk folder where it was easily missed. On top of that this customer thought they purchased the shipped version rather than the download only so they were getting annoyed when nothing was coming in the mail. Three weeks after their initial purchase they contacted us asking what’s happening as they’ve never heard anything from us. We immediately replied explaining the situation. Our reply doesn’t make it through, same issue.
We do post the following warning on our support page (as illustrated below):
“Important Notice: Some email providers may wrongly filter emails from LandlordMax as spam/junk email and automatically move them to your spam/junk/bulk folder. If you do not receive a response within 1-2 business days, please look into these folders as this is probably what happened. This is currently especially true for email accounts with Comcast, Yahoo, GMail, and Hotmail.”
But not everyone pays attention to this (I can’t blame anyone I don’t always either). So who’s at fault here? The customer, Hotmail, us? Who really knows, it’s not important, it doesn’t matter. What matters is that the customer didn’t get their response from us. PERIOD! So two days later we get an irate email (very understandably). It now appears that we’re ignoring their support emails as well. We reply again. Same issue.
At this point I also personally contact him from another domain, from FollowSteph.com rather than just LandlordMax.com. The email finally makes it through to them. We’ve now established communication. I personally explained what happened send them bookmarks to our prior responses so that they can themself see that we did respond and how.
Through our further communications I also find out that they meant to select the shipped CD rather than just the downloadable version. At this point I could have simply instructed them to send us the regular $15 shipping and manufacturing charge so that we could ship them the CD. I decided otherwise, instead I told them because of all the difficulties we’ve had communicating the least we could do is send them the CD free of charge (whether or not it’s our fault). A simple gesture of positive goodwill but it’s very positively received:
“Thanks so much. I’ll definitely utilize it and refer you business.”
This is how you turn a customer who has had a negative experience into one that really appreciates your company and the effort you go through to help them. Understand your customer and think past today, think about how valuable they are for the total life of your business and beyond.
Telus customer service missed this simple step with me. They instead decided to get the most they could from me today and forget about my value to them tomorrow. This is the not the company I used to know from before. And unfortunately I believe this type of behavior will eventually catch up with them.
And that’s the end of my story. I eventually got reimbursed for all the data payments I didn’t use. I cannot cancel the data plan on my cell phone, even at a penalty (I asked). I also cannot turn off the data plan so that it doesn’t accidentally turns itself back on. I might no longer be extremely upset, but I’m not happy either. I doubt I’ll ever get a cell phone from Telus again, nor will I refer anyone. But at least it’s over.
Discuss (25)
Can’t you switch to a simpler plan for $10? Or is downgrading not allowed? Also, could you not purchase a cheap telus headset used and sell your 700, so at least you could recover the cost of that?
BTW, there’s a telus employee over at the forums at cellphones.ca. He’s very honest, speaking as an individual and not a rep. Worth a try to talk to, maybe.
Thanks for sharing your experience. What service would you choose instead? I’m in the process of (reluctantly) buying my first cell phone and I’ve heard horror stories about all of them. I was going to go for telus just because they were cheap, but…….
Hi CW,
I’m actually on the lowest data plan at $250/year. As for selling it, possibly but then you start to add the time of configuring the phones, cost of purchasing used, etc. it becomes almost not worth it…
And thanks for the lead, I’ll take a quick look.
And you’re welcome. As to what service, I don’t know… I’m sure they all have their issues. I too have heard a lot of horror stories. The only thing I know for sure is that it won’t be Telus for me personally.
[…] You can find the conclusion of this story on my follow up article: “Telus Customer Service“ […]
Thanks for the posting. I had something similar happen with Bell. They refused to listen or do anything for me even though I was flexible with a resolution. They are ignorant about the concept of customer loyalty and their poor customer service was enough of a reason for me to switch to Rogers (although I am sure there are plenty of horror stories about Rogers as well). I was a single user at that point, so they didn’t really care. However, about a year later I became a Team Leader at my current company and when they decided they wanted to get everyone a Blackberry, of course I recommended the 20 or so employees go with Rogers. The one thing I do remember is that when I did switch over to Rogers, the data coverage was so much better than that offered by Bell. I hear people complain about Telus and the quality of their signal and reception so I’ll add that on to their poor customer service when someone asks me.
I was looking recently at the Telus HTC P4000 with a 3 year contract drops the price to $250. I am currently on a month to month and was offered a plan by telus for $17.30 a month with a new contract and it has unlimited this and that so I thought it would be a great idea.
When I went to look at the phones and decided on the P4000 I was told I have to get the 4MB $25 data plan. I didn’t want this since the device has built in wifi and I told them honestly that 4mb wouldn’t be sufficient, I was shocked at the little amount they provide for $25 considering that the cost for them to send 4MB is close to $0. And why should I pay $300 a year for a service I honestly wouldn’t need considering I have wifi at home and work. I was then told I can still get the phone without the dataplan but I would have to play $150 more to opt out of the plan. I find this really scummy, you can use the phone fine without a data plan, you dont even need to renew this plan after 12 months, so why isnt there an option to just say I do not want the plan.
I have been with Telus for over 8 years now, and was very happy with their service early on, but it seems as they get bigger they are loosing the quality and customer service they once had.
I have just signed up for and connected to telus internet high speed for a three months trial run.I would like to know more about how and when I receive my free computer set. Thanks.
Joy
I’ll appreciate your response on the above via my e-mail. Thanks. Joy.
Hi Joy,
I’m sorry but I don’t know anything about the offer Telus gave you. I’m not actually affiliated with Telus, this post was me as a consumer describing the issues I personally faced when dealing with them.
What you all don’t know is that this story has another major step that turned for the worse. The good news is that I did record every conversation I’ve since had with Telus, the only thing is I don’t know if I’m yet willing to share it publicly. At the end of the day you can add another several hundred dollars to the cost.
A quick teaser of part of the new issues I’ve had to deal with. I’ve now had to return my initial Treo that I paid $200 for, without any reimbursement. If I didn’t return it I would have been charged another $300. In addition to this, they charged me for another phone.
So I have to pay for a phone that I have to return (which they can then go ahead and sell making revenue on it twice), and if I don’t return it I’m charged a $300 fee. And in addition to this I have to pay for my second phone. Great times!
But like I said, I do have every conversation recorded. The question is, is it worth going public with it…
sorry to say, but your story has holes everywhere.
1st, you did not monitor your usage, this would be your fault. they did provide you with 3 months of unlimited data usage. you cannot expect them to monitor your usage and inform you of your overages because it is YOUR money and you can spend it how you please.
2nd, they provided you with a $400 discount towards your phone. the data feature you have is $25/month for 1year equaling $300. telus does not make up the cost of the discount with the feature. telus also does not profit from the full $25/month because they are providing you with a service making it gross revenue. in addition, you did sign a contract to have the phone subsidized and i would advise you to read your ENTIRE contract to avoid these type of issues in the future.
3rd, since you were unable to remove the feature because the year was not up. its very likely repairs on your phone will be FREE of cost (covered under the warranty). look at what you’re asking telus to believe. you want them to take your word that you are not using data just because you said so, what evidence do you have to provide them? you should’ve taken your phone into repairs to try to have the issue resolved as this is a hardware issue. you cannot expect a CSR to or a technician to repair a hardware issue because they do not have access to the HARDWARE.
i would advise you to review your opinion of telus. they did provide you with the credits (yes they gave you a hard time. however, which company enjoys giving hundreds of dollars away?). if you examine the other service providers’ customer service you will find out that telus exceeds them by far.
seems that almost everyone that contacts a CSR rep for any reason deems customer service = credits and no credits = poor customer service. if you’re going to another provider and run into a simlar situation. i would not recommend you to expect credits as a resolution to issues. the majority of people that contact CSR are the ones that do not take any personal responsibility. i’ve read your idea of good customer service. again, the issue was basically resolved with a $15 credit. if the client took PERSONAL responsibility regarding the matter there would be no issue to begin with. the difference between your “good” customer service and telus’s “poor” customer service is the $15 credit compared to the hundreds of dollars you were requesting from telus. what if they live in japan, would you be willing to waive the higher fee for shipping? you may, but it will be a lower probability because of the higher cost (similar to your telus issue??).
finally, yes…i am employeed as a quality assurance analyst, but not for telus. i am a consumer of telus and i am personally very happy with their service.
Hi John,
I’m sorry to hear you feel that way. I’ve intentionally left out some of the larger pieces that have happened since (which I have recorded) because it’s no longer worth pursuing…
That being said, I appreciate your comments but I respectfully disagree. And since I can tell we won’t agree, and I don’t wish to start a flame war, I again thank you for your comments and wish you as much success as is possible as a quality assurance analyst as I know it’s not an easy job.
Regards,
Steph
I see that the TELUS customer service machine has not only impacted me. Sorry to hear about your cell phone issues.
If you’d like to know more about what kind of cards I was dealt for TELUS TV and my bill (triple what it should have been after they disconnected me)…click my name. It’s ugly, I warn you.
[…] the only person who’s been hit head-on by the TELUS customer service train…check out what Steph had to say about his cell phone experience (yikes!) or what wselent went through to get local phone service installed in his new […]
I had the same thing happen to me in April 2007 and just recently in Ocober 2007. My issue was with a Palm Treo 700wx, when I would do a simple google check the Treo would reconnect after I disconected from the Wireless manager. Also they had logs of the thing going online a 2am and staying on for 5- 8 hours and refreshing. My normal charge was about $25- $30 per month. This bill was $550-. Telus customer service was AWFUL with no empowerment at the lower levels. I was given every line in the book and given the run around. The only reason I was found closure was by going up the chain. They finally credited me the whole amount after about 5 hours on the phone. To be fair these 2 incidents with Telus’ Data Plan were the ONLY issues I had with them since 2001.
Hi MunnyGuy,
That’s pretty amazing how similar your story is to mine. I wonder how many people have had this same experience. And yours was quite a while after mine, so it makes me wonder if it’s still going on…
I recently got screwed over by telus as well. I told them twice to cancel my contract, and they told me both times it was. I continuely get billed, and they repeatidly tell me it’s my fault.
This has to be the worst cell phone company ever!
I’m sorry to hear you’re having so many difficulties with Telus Gregory. Hopefully you can get this resolved sooner than later!
Hi Steph,
I just read your story on telus and it was like ready my own story. I got the
Hey, Steph!
I am now looking into Pre Paid Legal, in an effort to avoid these types of situations. I have been a Verizon Wireless customer for about a decade now, and never had a problem, until I didn’t take a “free with plan” phone. I decided, having made something of myself (I had just completed a personal/business venture, that has since set my up quit nicely) I would treat myself to a Motorola Razor. BIG Mistake. It had several glitches. I got a Verizon store to exchange it for a phone that had more glitches. I was told I could buy a new one at full retail. Isn’t that a sweet deal? After several phone calls and E-mails, I walked into the local Verizon store, and simple said I would never use them again, and was willing to pay the early termination fee if they did not fix the problem. I was willing to take a no frills phone, and take the loss on the purchase price, of $150 USD. Luckily I got a human who treated me like…a human! He informed me that the Razor has several problems, but is such a good seller, that neither Verizon, nor Motorola were too concerned with fixing it. They were still flying off the shelf, 2 years after its expected “invaluable” date. Wow. Well that’s business, and had some one explained to me like that originally, I would have taken a whole different approach. The young man also offered me a smokin’ good deal, on the phone I had been eyeing for a few months (The Casio G’z One). Motorola (A company I had spent thousands of dollars on) will no longer receive any of my money, as I refuse to buy anything, from a manufacturer, that can’t even respond to a customer’s complaint. That young man salvaged my use of Verizon, but Motorola…not so lucky. Thank you for you story, I’ll let you know how the Pre Paid Legal thing pans out…
Hi Blane,
It’s interesting how the cell phones work. Most are developed with a lifespan of 1-2 years at most. They know they’re not always ready for prime time, but they have to go out. And once they’re released, what’s the incentive to fix them? Most won’t sell for that long anyways so it’s almost wasted money. Especially if they’re already selling well. Plus they’re often not developed in way to make maintenance and bug fixes (never mind enhancements) that easy. Have you ever tried to upgrade a normal cell phone?
Which is why I now generally try to buy phones with a longer lifespan. I hate to say it, but the iphone is such a phone. Apple really has only one model, and they plan on developing it for some time. They don’t offer model 1, 2, 3, xb2, 2ke5, and so on of the phone. You really just get one model (slightly different specs, and as time passes they add more hardware, but essentially it’s the same phone). And they plan on building up the phone over many years. They sell apps for the phone unrelated to the phone. It’s more than a phone, it’s more of a personal device with a phone attached to it rather than the other way around. Which is why I suspect this phone will be around for some time. Sure as time goes on the new versions of the phone will offer more, but over the last 2-3 years it’s the same OS for all models. And they’re working on version 3.0 of the phone OS. So they plan to resolve issues and build on their success.
It’s unfortunate, but what you described seems to be the way the cell phone markets mainly work. Which is why I’m no longer willing to buy normal cell phones. I’m just as tired of you of under-quality phones. I’m more interested in phones that are more than principally phones (the incentives become different). I might be completely wrong, but my belief is that companies will put more effort behind these (long term) as they will have much longer lifespans.
And thanks for letting me know if the Pre Paid Legal pans out.
In response to John, do you normally like dealing with a company that continually pays dis-respect to their customers in large portions? Do you think it is OK to hang up on your customers when the Telus agent hears something they don’t like, for example a simple complaint? Time after time again I fdeal with Telus’s lack of respect on an on-going basis, but hey, I don’t have a choice! The future is REAL friendly.
My encpsulated Telus story.
I owed them $900
They sent it to clllections
I had to pay back $2200
Not only was my credit rating ruined….
I can’t get any of the overpayment back.
I couldn’t even gat a copy of the bills sent
to me.
Telus is a rip-off pack of toads.
lots o blogs out there discusted with don;t telus they stole 400.00 can. in early contract cancellation 4days after i made last payment on con. because i wished to take my # with me before all had cleared their acc.
[…] I’ll probably show up on the search results for Pizzadelic. If it’s anything like my Telus adventure from a few years ago, I’ll be near the top of the search results for their website. How many […]
Does anyone at Telus listen to customer complaints?
They never post an E-mail address but want letters sent by mail!
Who writes any more and sends snail mail?
We have contracted for high speed internet service and are only getting 10mbs.
The trouble is that there is no available alternative!. | https://www.followsteph.com/2007/06/23/telus-customer-service/ | CC-MAIN-2021-04 | refinedweb | 4,217 | 71.55 |
Comment on Tutorial - do-while - Iteration in java By Jagan
Comment Added by : rishabh
Comment Added at : 2010-08-21 09:30:31
Comment on Tutorial : do-while - Iteration in java By Jagan
hi sir i want to know source code of this program;
abcdcba
abc cba
ab ba
a a
ab ba
abc cba
abcdcba i am running on localhost?
what shd
View Tutorial By: harish at 2010-02-03 03:47:08
2. Great! Concise, helpful, a real life-saver. Google
View Tutorial By: Motti Shneor at 2012-07-31 14:29:37
3. Very nice to understand the concept.It also help t
View Tutorial By: Hrusikesh jena at 2011-05-26 00:23:28
4. I HAVE ONE DOUBT.
public class Anim
View Tutorial By: georgy at 2009-10-22 02:46:10
5. to know
View Tutorial By: Parthasarathi.N at 2010-02-19 05:59:51
6. You have mentioned that
to use this program
View Tutorial By: Naasik at 2012-08-05 00:15:57
7. Very good examples are given by you on overloading
View Tutorial By: Ankur Rautela at 2011-11-03 10:22:46
8. How about using stack in microprocessor subj.??? d
View Tutorial By: Private at 2015-08-18 01:43:18
9. The above example for abstract class is very good.
View Tutorial By: saafia at 2013-08-19 14:33:54
10. meow is simple and awesome
View Tutorial By: vishnu prasanth at 2013-01-10 10:00:35 | https://java-samples.com/showcomment.php?commentid=35259 | CC-MAIN-2022-33 | refinedweb | 253 | 67.76 |
Standard Deviation tells you how the data set is spread. It helps you to normalize data for scaling. There is a method in NumPy that allows you to find the standard deviation. And it is numpy.std(). In this article, We will discuss it and find the NumPy standard deviation. But before that first of all learn the syntax of numpy.std().
Syntax for the Numpy Standard Deviation Method
numpy.std(a, axis=None, dtype=None, out=None, ddof=0, keepdims=<no value>)
a: The array you want to find the standard deviation.
axis: Useful to calculate standard deviation row-wise or column-wise. The default is None.
dtype: Type of the object. The default values in None.
out: It allows you to output the result to another array.
ddof: Means Delta Degrees of Freedom.
keepdims: If this is set to True, the axes which are reduced are left in the result as dimensions with size one.
Examples for Calculation of NumPy standard deviation
In this section, you will know the best example for the NumPy standard deviation Calculation. But before that first of all import all the necessary libraries for that. Here In our example, I will use only two python modules. One is numpy and the other is pandas for dataframe.
import numpy as np import pandas as pd
How to compute the standard deviation for 1-D Array
Let’s create a single dimension NumPy array for standard deviation calculation.
array_1d = np.array([10,20,30,40])
After that, you will pass this array as an argument inside the numpy.std().
np.std(array_1d)
Output
Get standard deviation of Two Dimension or matrix
In this section, We will learn how to calculate the standard deviation of 2 Dimension or Matrix. Let’s create a 3×4 Matrix.
array_3x4 = np.array([[10,20,30,40],[50,60,70,80],[90,100,110,120]]) array_3x4
If you will simply pass the matrix inside the numpy.std(), then you will get the single output.
np.std(array_3x4)
It calculates the standard deviation using all the elements of the matrix.
Standard deviation of each column of a matrix
You have to use axis =1 to calculate the standard deviation for each column of the matrix.
np.std(array_3x4,axis=1)
Standard deviation of each row of a matrix
To calculate the standard deviation for each row of the matrix. You have to set axis =0.
np.std(array_3x4,axis=0)
Below is the output of the above code.
Calculate Standard Deviation in dataframe
In this section, you will know how to calculate the Standard Deviation in Dataframe. But before that let’s make a Dataframe from the NumPy array.
numpy_array= np.array([[1,2,3],[4,5,6],[12,13,14]])
After that convert NumPy array to dataframe.
df = pd.DataFrame(numpy_array)
You can now use the same above method to calculate deviation. For example for each column use axis=0, and for each row use axis =1.
np.std(df,axis=0) #calculate standrad deviation for each column np.std(df,axis=1) #calculate standrad deviation for each row
Output
Get Standard Deviation of each Column of CSV File
You can also calculate the standard deviation of each column of CSV File using Numpy and pandas. Here Pandas will be used for reading the CSV file.
In this example, I am using a car dataset.
csv_data = pd.read_csv("cars.csv") csv_data
You can find the deviation of any numerical column using the column name. For example, I want to use the column name “mpg” then I will use the below code.
mpg = csv_data["mpg"]
Now I can easily calculate the standard deviation of it using the numpy.std() method.
np.std(mpg)
Below is the output of the example described here.
This way you can find deviation for each numerical column for your CSV dataset.
End Notes
These are examples for calculating the standard deviation I have written for you. Just follow all the examples for deep understanding. Even if you have doubts then you can contact us. We are always ready to help you.
Thanks
Data Science Learner Team
Source:
Official Numpy Documentation
Join our list
Subscribe to our mailing list and get interesting stuff and updates to your email inbox. | https://www.datasciencelearner.com/numpy-standard-deviation-calculation-examples/ | CC-MAIN-2021-39 | refinedweb | 712 | 58.28 |
The following is a project description of a project that I have to complete:
Write a shell-like program that illustrates how UNIX spawns processes. This program will provide its own prompt to the user, read the command from the input and execute the command. Let the shell allow arguments to the commands. For example, it should be able to execute commands such as more filename and ls –l etc. (MyShell.c -> MyShell)
#include <sys/types.h> #include <unistd.h> #include <iostream> #include <wait.h> #include <fstream> int main(){ char cmd[100], command_array[20]; int flag = 0; int pid = fork(); while(flag !=1){ if (pid < 0) { // Create child process perror("Fork call."); return (2); } if (pid == 0) { printf("myShell > "); scanf("%s", cmd); strncat(command_array, cmd, 100); //error on this line if (execvp(command_array[0],command_array) == -1){ printf("Error: running command: '%s'\n", cmd); //exit(-1); flag = 0; } exit(0); flag = 1; } if (pid > 0) { wait((int *) 0); } } return (0); }
I put together this piece of code from the TAs lecture and from various examples from websites.
My first question:
Since there isn't a boolean in type in C, I declared a int flag = 0 and until the value of the flag changes to 1it need to prompt the user. Basically if the user enters a wrong command, error message appears and prompts the user again for an input. This isn't working.
My second question:
What am I doing wrong? I am following my TA instructions.
I'm actually going to see him tomorrow during office hours but it would be great if I could finish it before as I have a packed day tomorrow.
Thanks
drjay | https://www.daniweb.com/programming/software-development/threads/174867/a-basic-shell | CC-MAIN-2018-39 | refinedweb | 278 | 72.26 |
Join the community to find out what other Atlassian users are discussing, debating and creating.
Hi,
i'm trying to set up a "send a custom email" script runner post function and I struggle to define the script of the condition to be met.
the condition is based on a checkboxes custom field. The field should be equal to a specific value.
I tried to select "Has string custom field value equal to " in the suggested examples but it does not work and I click Preview, the condition is set as false.
would you have any idea of the right code to write for a checkboxes custom field?
thanks in advance
melissa
You may find it easier to use the cfValues map which is passed in:
As Thanos said, with checkboxes it can be multiple. To check for a single specific checkbox being checked, and no others, you'd use:
cfValues['Name of field']*.value == ['An item']
Hi Melissa
Below there is a script where you can use in order to get the checkbox values
import com.atlassian.jira.component.ComponentAccessor import java.util.ArrayList def customField2 = customFieldManager.getCustomFieldObjectByName("checkbox") ArrayList listOfCheckBoxValues = issue.getCustomFieldValue(customField2) as ArrayList if (!listOfCheckBoxValues) return false if (listOfCheckBoxValues*.value.contains("option1")) return true else return false
A note. By default you can select more than one boxes therefore you always get a list of selected values. In case you want to use radio buttons where by default you can select only one value the script will be slightly different. Please let me know if you need further assistance.
Kind regards. | https://community.atlassian.com/t5/Jira-questions/how-to-define-the-script-of-the-condition-to-be-met-using-send-a/qaq-p/235782 | CC-MAIN-2019-26 | refinedweb | 265 | 57.47 |
# Scalive
This tool allows you to connect a Scala REPL console to running Oracle (Sun) JVM processes without any prior setup at the target process.
## Download
For Scala 2.12, download scalive-1.7.0.zip.
For Scala 2.10 and 2.11, download scalive-1.6.zip.
Extract the ZIP file and you will see:

```
scalive-1.7.0/
  scalive
  scalive.bat
  scalive-1.7.0.jar
  scala-library-2.12.8.jar
  scala-compiler-2.12.8.jar
  scala-reflect-2.12.8.jar
```
`scala-library`, `scala-compiler`, and `scala-reflect` of the correct version that your JVM process is using will be loaded, if they have not been loaded yet. The REPL console needs these libraries to work.
For example, if your process has already loaded scala-library 2.12.8 by itself but scala-compiler and scala-reflect haven't been loaded, Scalive will automatically load version 2.12.8 of those two.
If none of them has been loaded, i.e. your process doesn't use Scala, Scalive will load the lastest version in the directory.
For your convenience, Scala 2.12.8 JAR files have been included above.
If your process uses a different Scala version, you need to manually download the corresponding JARs from the Internet and save them in the same directory as above.
Usage
Run the shell script scalive (*nix) or scalive.bat (Windows).
Run without argument to see the list of running JVM process IDs on your local machine:
scalive
Example output:
JVM processes:
#pid  Display name
13821 demos.Boot
17978 quickstart.Boot
To connect a Scala REPL console to a process:
scalive <process id listed above>
Just like in normal Scala REPL console, you can:
- Use up/down arrows keys to navigate the console history
- Use tab key for completion
How to load your own JARs to the process
Scalive only automatically loads scala-library.jar, scala-compiler.jar, scala-reflect.jar, and scalive.jar to the system classpath.
If you want to load additional classes in other JARs, first run these in the REPL console to load the JAR to the system class loader:
val cl = ClassLoader.getSystemClassLoader.asInstanceOf[java.net.URLClassLoader]
val jarSearchDirs = Array("/dir/containing/the/jar")
val jarPrefix = "mylib"  // Will match "mylib-xxx.jar", convenient when there's a version number in the file name
scalive.Classpath.findAndAddJar(cl, jarSearchDirs, jarPrefix)
Now the trick is to just quit the REPL console and connect it to the target process again. You will then be able to use the classes in your JAR normally:
import mylib.foo.Bar ...
Note that :cp doesn't work.
How Scalive works
Scalive uses the Attach API to tell the target process to load an agent.
Inside the target process, the agent creates a REPL interpreter and a TCP server to let the Scalive process connect and interact with the interpreter. The Scalive process acts as a TCP client. There are two TCP connections: one for REPL data and one for tab key completion data.
Similar projects:
Known issues
For simplicity, and to avoid memory leaks when you attach/detach many times, Scalive only supports processes with only the default system class loader, without additional class loaders. Usually these are standalone JVM processes, like Play or Xitrum in production mode.
Processes with multiple class loaders like Tomcat are currently not supported. | https://scala.libhunt.com/scalive-alternatives | CC-MAIN-2021-43 | refinedweb | 861 | 59.19 |
20 July 2012 10:36 [Source: ICIS news]
SINGAPORE (ICIS)--Taiwanese producer China Petrochemical Development Corp (CPDC) plans to restart its new 100,000 tonne/year caprolactam (capro) line in Toufen, Miaoli county, over the weekend, a company source said on Friday.
The line is likely to run at 100% by next week, the source added.
The line underwent trial runs in March but had to be taken offline in April due to technical reasons. Its restart date was originally scheduled for May, but was later postponed twice to late July.
The decision to restart instead of a further delay was partly on the back of firming spot capro prices, the source said.
CPDC, the sole capro producer in Taiwan
Spot capro prices increased by $30/tonne (€24/tonne) at the high end of the range week on week in the week ended 18 July, at $2,250-2,300/tonne CFR NE Asia, ICIS data showed.
Additional reporting by Angeline Zhang
by Mitch Garnaat
Over the past year or so, Amazon has been expanding its line of infrastructural web services. Amazon CEO Jeff Bezos likes to call this collection of services muck, meaning these kinds of services are difficult to build in a scalable manner. That's exactly what this article will focus on: combining three of these scalable services from AWS using an architecture that allows us to build robust, reliable, and scalable compute services. While the basic architecture described here could be applied to many different application areas, this article will focus on building a service to solve a specific problem that I run into regularly: video format conversion.
Like most people today, I have made the transition from film cameras to digital cameras. One of the cool things that many digital cameras today can do is shoot video in addition to still photos. So now I have hundreds of these videos, all in AVI format, sitting around on my hard drive. What I really want to be able to do is load these videos up on my video iPod so I can enjoy them and share them easily. The problem is that the iPod doesn't play AVI format videos. It wants its videos in MPEG4 format.
In this article, we're going to build a video conversion service using the AWS building blocks. This service accepts AVI format video files as an input and produces MPEG4 files as an output. Not only will this service be able to convert all of my videos, it could easily be scaled to handle mass video conversion for thousands of users or be used as a component in a larger media management application.
Before We Begin
The architecture that I describe in this article could be applied to any programming language, but, for my example, I'm going to use Python. There are two main reasons for this:
- Python has been my favorite programming language since the days of Python 0.9 and it's a great choice for mashing up various services to create new services.
- I've already developed a Python language library called boto that provides interfaces into all three of the Amazon Web Services we are going to use in this article.
We're going to be taking advantage of boto on both the server side and the client side of this project, but if you are interested in building similar services in other programming languages, check out the AWS forums and Resource Center. There are lots of libraries available for many different languages.
Let's get started!
The Big Three
The three services we will focus on in this article are:
- Amazon Elastic Compute Cloud (EC2) for scalable compute resources
- Amazon Simple Storage Service (Amazon S3) for unlimited, reliable storage
- Amazon Simple Queue Service (SQS) for reliable messaging and loose coupling
We could build a conversion service like the one I describe above without using any of these Amazon Web Services. Our little server sitting on the web may work for us, but what happens if our friends and family decide they want to use it? Or, heaven forbid, what happens if someone blogs about it, the right people find out, and our little service gets Digg'ed or Slashdot'ed? Where are we going to store all of that uploaded video? How are we going to handle the compute-intensive video conversion? How are we even going to handle the bandwidth required to handle the requests? We're not. We're building our service leveraging these building blocks from AWS so we can end up with a service that will be easy to construct, inexpensive to operate, and able to scale to meet virtually any demand.
Putting the Pieces Together
The diagram below shows the basic architecture of the service we are building.
Amazon S3 is the perfect place to store the video files to be converted as well as any output files generated by our conversion service. In addition to being fast and reliable, we will never have to worry about our service running out of disk space.
For the instructions, we want a place where different clients can store the information and know that the instructions will be delivered to our service. Our service wants to be able to read one set of instructions at a time, in roughly the order in which they were stored. This ensures that work is done in a timely and fair manner. Fortunately for us, that's exactly what SQS provides us. Think of it as e-mail (or more generally, messaging) for services. And again, we won't have to worry about scalability, availability, or reliability.
Finally, we need a way to actually perform the video conversion. This is where EC2 comes in. EC2 provides elastic computing resources. With a single API call to EC2 I can create a brand new server to do my bidding. In fact, I can create dozens of them. And when the work is done, I can make them go away just as quickly and easily. No more trips to Fry's! Well, okay, maybe we will still find a reason to go to Fry's but we definitely won't have to go to buy servers for our conversion service. EC2 will take care of that for us.
Building Our EC2 Image
One of the first things we need to do is to create a new EC2 image (called an AMI) that contains all of the software needed to create our video conversion service. Because this process is quite detailed and time-consuming (and because this article is already pretty darn long) we are going to cheat. I have already done all of the configuration necessary to build our conversion service and turned it into a publicly accessible AMI that anyone can access. So, we are going to skip over most of the nitty-gritty details of installing software, etc. If you really want to go through that, there are more detailed notes with the public AMI. (See the related documents below for a link to the public AMI.)
Building Our Conversion Service
Based on the architecture shown in the diagram above, the basic steps required in our conversion service are:
- Read a message from our input queue
- Based on the data in the message, retrieve the input file from Amazon S3 and store it locally in our EC2 instance
- Perform our video conversion processing, producing one or more output files
- Store the generated output files in Amazon S3
- Write a message to our output queue describing the work we just performed
- Delete the input message from the input queue
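The six steps above can be sketched as a small loop in plain Python. This is an illustrative toy, not boto's actual Service code: the in-memory list and dict stand in for SQS and Amazon S3 (no AWS calls are made), and the process_file stub takes the place of the real conversion.

```python
# Toy version of the six-step service loop; a list/dict stand in for SQS/S3.
def run_service(input_queue, output_queue, storage, process_file):
    while input_queue:
        msg = input_queue[0]                                 # 1. read a message
        data = storage[msg['InputKey']]                      # 2. fetch the input file
        out_key, out_data = process_file(msg, data)          # 3. do the work
        storage[out_key] = out_data                          # 4. store the output
        output_queue.append(dict(msg, OutputKey=out_key))    # 5. write a status message
        input_queue.pop(0)                                   # 6. delete the input message

in_q = [{'InputKey': 'abc123', 'OriginalFileName': 'clip.avi'}]
out_q, store = [], {'abc123': 'AVI-BYTES'}
run_service(in_q, out_q, store,
            lambda msg, data: ('abc123.mov', data.replace('AVI', 'MPEG4')))
print(out_q[0]['OutputKey'])   # abc123.mov
```

Because step 6 only happens after the output is safely stored, a crash mid-loop leaves the input message in place for another worker, which is exactly the property the real SQS-based loop relies on.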
The boto library provides a framework for this type of service in a class called, appropriately enough, Service. The Service class takes care of all of the details of reading messages, retrieving and storing files in Amazon S3, writing messages, etc. It also handles many of the common types of errors that come up when dealing with distributed services. For details on the Service class, you can view the source code here. For this article, though, we are just going to leverage that class and focus our efforts on what we need to do to get our video conversion service up and running.
To keep things simple, I've also already created a subclass of the Service class in boto to perform the video conversion. It's called ConvertVideo and the source code is shown below. This class is also part of boto and can be found here.
from boto.services.service import Service
import os

class ConvertVideo(Service):

    ProcessingTime = 30

    Command = """ffmpeg -y -i %s -f mov -r 29.97 -b 1200kb -mbd 2 -flags \
+4mv+trell -aic 2 -cmp 2 -subcmp 2 -ar 48000 -ab 192 -s 320x240 \
-vcodec mpeg4 -acodec aac %s"""

    def process_file(self, in_file_name, msg):
        out_file_name = os.path.join(self.working_dir, 'out.mov')
        command = self.Command % (in_file_name, out_file_name)
        os.system(command)
        return [(out_file_name, 'video/quicktime')]
Our ConvertVideo class subclasses the boto Service class. That means we can leverage all of the Service class code to handle messaging, etc. The only thing we need to do is define how the video conversion process works. We do that by overriding the process_file method of the Service class. This is the method that gets called within the Service framework when there is an input file that needs to be processed. The process_file method takes two arguments:
- in_file_name - the fully qualified path to the input file to be processed. In our case, this will be an AVI format video file.
- msg - the message read from the input queue representing the work to be done. Right now, we can ignore this message because we are always performing the same conversion.
There are also a couple of class variables defined:
- ProcessingTime - defines the maximum amount of time we think it will take to process a file. This time is important because when we read an input message from SQS, we need to tell SQS how long to keep this message invisible from other readers of the queue. This is called the InvisibilityTimeout. If the timeout is too short, other services reading from the same queue might read the same message we are reading and perform the conversion again. Because the services are idempotent, this won't cause any harm but is a waste of computing resources. If the timeout is too long, the message could remain invisible longer than necessary if the original service that reads the message fails to process the message successfully. Eventually, the message will become visible in the queue again and will be read by another service but to provide reasonable response times, we don't want the timeout to be longer than necessary.
- Command - this is our command line for the call to ffmpeg to perform the conversion. The input file name and output file name have been parameterized so we can supply them at runtime.
The process_file method constructs the correct command line to run and then executes that command line using the os.system call in Python. The boto Service class expects the process_file method to return a list of tuples. Each tuple represents one output file generated by the service. The first element of the tuple is the fully qualified path to the output file and the second element of the tuple is the mime type of the output file. Since our simple service produces only a single output file (the MPEG4 file) we return a list with a single tuple.
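The ProcessingTime/InvisibilityTimeout behaviour described above is easy to demonstrate with a toy queue (not SQS, no network): reading a message hides it for a while rather than deleting it, so work held by a crashed server eventually becomes visible to another worker.

```python
import time

class ToyQueue:
    """Illustrates visibility timeout: read() hides a message instead of deleting it."""
    def __init__(self):
        self.messages = []                  # list of [visible_at, body]

    def write(self, body):
        self.messages.append([0.0, body])

    def read(self, timeout):
        now = time.time()
        for entry in self.messages:
            if entry[0] <= now:
                entry[0] = now + timeout    # hide it until the timeout passes
                return entry[1]
        return None                         # nothing visible right now

q = ToyQueue()
q.write('convert MVI_3110.avi')
first = q.read(timeout=30)    # 'convert MVI_3110.avi'
second = q.read(timeout=30)   # None: the message is invisible until the timeout expires
```

This is why the choice of timeout matters: too short and a second worker sees the message while the first is still converting; too long and a crashed worker's message stays hidden longer than necessary.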
Get the Message?
We have described how we are using SQS to help us scale our services and we have talked about the boto Service class handling the reading and writing of messages for us. But what do those messages actually look like? Here's an example input message for our ConvertVideo service.
The basic message structure is very simple and should look familiar. It basically follows the same RFC-822 format used in mail messages and in HTTP headers. The required fields are described below:
- Bucket - the Amazon S3 bucket that contains input files and will be used to contain output files
- InputKey - the key of the input file in Amazon S3. The combination of the bucket and key provides a fully qualified reference to the input document. The boto service framework uses the MD5 hash of the file as its key in Amazon S3. This is one of the ways we can guarantee that services are idempotent.
- Date - the date and time that the input file was originally stored in Amazon S3
- OriginalFileName - the original name of the input file
- Size - the size in bytes of the input file
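Because the messages use RFC-822 style headers, Python's standard email parser can read them directly. The message text below is hypothetical, assembled from the field descriptions above rather than copied from a real queue:

```python
from email.parser import Parser

# Hypothetical input message built from the documented fields.
raw = (
    "Bucket: myvideos\n"
    "InputKey: d41d8cd98f00b204e9800998ecf8427e\n"
    "Date: Wed, 21 Feb 2007 01:28:14 GMT\n"
    "OriginalFileName: MVI_3110.avi\n"
    "Size: 1048576\n"
)
msg = Parser().parsestr(raw)
print(msg['Bucket'], msg['OriginalFileName'])   # myvideos MVI_3110.avi
```

Reusing a well-understood header format means both producers and consumers can lean on existing parsers in whatever language they are written in.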
Once our ConvertVideo service has completed processing a particular input message, it writes a message to the status queue describing the work that it just completed. That output message is shown below:
OutputKey: e69e376be5af6f88f81d3e31adf27988;type=video/quicktime
Server: ConvertVideo
Host: domU-12-31-34-00-02-82
Service-Read: Wed, 21 Feb 2007 01:28:14 GMT
Service-Write: Wed, 21 Feb 2007 01:28:27 GMT
As you can see, this status message contains all of the fields from the original input message plus some additional fields added by the service, described below.
- OutputKey - the Amazon S3 key and mime type of the outputs of the service. This field could contain multiple entries, separated by commas.
- Server - the name of the service that processed this message.
- Host - the DNS name of the EC2 instance that performed the actual conversion
- Service-Read - the date and time that the service read the input message
- Service-Write - the date and time that the service wrote the status message
In a production environment, these output messages would be read from the output queue and persisted in log files or a database.
Enough Muck! Let's Convert Some Video!
First of all, if you are still with me; Congratulations! We are almost ready to put our service into action. As I mentioned earlier, I've already bundled the conversion service into a publicly available AMI (see the links at the end of this article). In addition to installing the necessary software to perform the video conversion, I also needed to modify the rc.local file in that instance so that it would automatically start up our conversion service as soon as the instance boots up.
But wait! How does our instance know what service to start up? Or where to read messages from? Or whose AWS credentials to use? Well, that's where instance user data comes in. A relatively new feature added to EC2 allows us to pass arbitrary data to an instance when we launch it. This provides a great way to create very general purpose images with little or no hardcoded data. The boto Service class takes advantage of the instance user data feature in EC2 to allow a variety of parameters to be passed to the service at instance creation time.
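The article doesn't show the exact wire format boto uses for this UserData string, but the idea is simply to serialize the service parameters at launch time so the instance can read them back when it boots. A hypothetical packing (the keys and the separator here are illustrative, not boto's actual format) might look like:

```python
# Hypothetical packing of service parameters into an EC2 user-data string.
params = {
    'module_name': 'boto.services.convertvideo',
    'class_name': 'ConvertVideo',
    'input_queue': 'vc-input',
    'output_queue': 'vc-status',
}
user_data = '|'.join('%s=%s' % (k, v) for k, v in sorted(params.items()))
print(user_data)
```

On the other end, the booting instance would split the string back into a dict and use it to decide which Python class to load and which queues to talk to.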
Making it Work
Okay, now we are finally ready to use our super-scalable video conversion service. The first thing we need to do is submit some video files to be converted. We could provide a simple web upload page to submit the files but for now, we want to be able to get a bunch of files up there as efficiently as possible so we will use a command-line utility provided in the boto library. Let's assume that we have a bunch of AVI format video files sitting in a directory called movies in our home directory. Here's how we would submit those files to the video conversion service.
$ cd ~/boto
$ boto/services/submit_files.py -b myvideos -q vc-input ~/movies
...
50 files successfully submitted.
$
This command does a lot of work behind the scenes. Let's step through it. The -b option is used to specify the Amazon S3 bucket in which to store the input video files. The -q option specifies the SQS queue that will be used as the input queue for our service. The final argument is either a fully qualified path to a single file to submit to our service or a fully qualified path to a directory. If we pass a directory, the submit_files command will submit all of the files in the directory.
For each file processed by the submit_files command, the file will be stored in the specified bucket in Amazon S3 using the file's MD5 hash as its key in Amazon S3. In addition, for each file stored, a message will be written to the specified queue. This message represents the work that needs to be performed on the file. In our case, that means the video conversion.
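The MD5-as-key trick is easy to see in isolation: the same bytes always map to the same Amazon S3 key, so resubmitting a file (or reprocessing a message) can never create a duplicate object. A local sketch, with no AWS calls; the function name is mine, not boto's:

```python
import hashlib

def s3_key_for(file_bytes):
    """Return the key the submit step would use: the MD5 hex digest of the file."""
    return hashlib.md5(file_bytes).hexdigest()

key1 = s3_key_for(b'fake avi bytes')
key2 = s3_key_for(b'fake avi bytes')
# Identical content always yields an identical key, which is what makes
# resubmission and reprocessing idempotent.
```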
Now that we have stored our original video files in Amazon S3 and created messages in our video conversion service's input queue we can fire up our conversion service. Again, we will leverage a command-line utility in boto to make this as easy as possible. Remember, previously in this article we went through the steps to create and register our video conversion service's AMI in EC2 so all we need to do is create one or more instances of our service to process the messages in the input queue. To start with, let's just create a single service instance to process our files.
$ cd ~/boto
$ boto/services/start_service.py -r -m boto.services.convertvideo \
-c ConvertVideo -a ami-2eba5f47 -i vc-input -o vc-output \
-e mitch@garnaat.com
$
Again, there's a lot happening here behind the scenes. First let's go through the arguments passed to the command.
- -r: this option means that we are starting a remote service. This same script is used to start up the service on the remote EC2 instance so this option is needed to tell the command whether it should be firing up an EC2 instance (as in our case) or starting the service software on the EC2 instance.
- -m: this option is used to specify the python module that contains our server class. The ConvertVideo class we created earlier resides in the module boto.services.convertvideo. You can create your services in any module you like, as long as you have configured your EC2 server instance so it can access the module.
- -c: this option specifies the name of our Python server class. We called our class ConvertVideo.
- -a: this option specifies the EC2 AMI id for our service. This is the value returned when you register the image with EC2.
- -i: this option specifies the name of the SQS queue that will be read for input messages for our service. This should match the name given when we called submit_files.py.
- -o: this option specifies the name of the SQS queue that will be used to store status messages for our service.
- -e: this option is used to provide an e-mail address that will be notified when a service is started or stopped. This just provides an easy way to tell when your service has been instantiated and is ready to start processing input messages and also when it has completed all processing and is shutting down.
When we run this command, the following happens behind the scenes:
- Process all of the command line arguments and construct a string that will be passed to the instances as UserData. This UserData contains everything the instance will need when it starts up.
- Start up a new instance of the AMI specified on the command line
- Once the new instances starts up, it will read the UserData passed to it
- Based on the UserData, it will load the appropriate Python class representing our service and create a new instance of that class. If the -e option was used on the command line the service will send an e-mail indicating that the service has started.
- The new instance of our service class will begin reading messages from the input queue specified in the UserData and will process the messages until the queue is empty
- The service will then terminate the EC2 instance in which it is running. Before doing so, if the -e option was used on the command line starting the service the service will send an e-mail indicating that the service is shutting down.
Once the service has completed its work, we can grab the results. Here again we will leverage a command-line utility provided in boto to simplify this task.
python boto/services/get_results.py -q vc-status ~/movies
retrieving file: MVI_3110.mov
...
50 results successfully retrieved.
Minimum Processing Time: 2
Maximum Processing Time: 58
Average Processing Time: 17.820000
Elapsed Time: 896
Throughput: 3.348214 transactions / minute
This shows the kind of throughput we can expect from a single instance of our conversion service. But how about that scalability we talked about earlier? Well, let's make things a little more interesting. In our next test we will queue up 500 videos for conversion. Since we queued up 10 times more work let's create 10 times more servers and see how things go.
$ python boto/services/start_service.py -m boto.services.convertvideo \
-c ConvertVideo -r -a ami-2eba5f47 -e mitch@garnaat.com\
-i vidconv-input -o test-status -n 10
Server: boto.services.convertvideo.ConvertVideo - ami-2eba5f47 (Started)
Reservation r-b4bf5bdd contains the following instances:
i-10c32479
i-13c3247a
i-12c3247b
i-15c3247c
i-14c3247d
i-17c3247e
i-16c3247f
i-e9c32480
i-e8c32481
i-ebc32482
Now we will have 10 video conversion servers all reading messages from the same queue and processing the same set of work. In theory, that means the elapsed time to complete all of this processing should be about the same as it took a single server to process 50 files. Let's check on the results.
$ python boto/services/get_results.py -q test-status ~/movies
retrieving file: MVI_3110.mov
500 results successfully retrieved.
...
Minimum Processing Time: 2
Maximum Processing Time: 60
Average Processing Time: 17.794000
Elapsed Time: 928
Throughput: 32.327586 transactions / minute
Sure enough, the average processing time and elapsed time are almost exactly the same but our overall throughput is roughly 10 times higher than in our previous example which is exactly the sort of behavior we would expect and hope for.
Check Please!
We've created a framework for providing scalable services and shown some examples of how that framework can easily be ramped up to handle increasing demands. We've also shown that our approach scales in a very linear and predictable manner, exactly what we want to see. One important question remaining, however, is "How much does it cost?". We can answer that question pretty easily because the get_results.py command, in addition to retrieving and summarizing the results found in a status queue, also creates a CSV file called log.csv in the directory specified on the command line. By bringing that file into a spreadsheet program like Excel (or by loading it into a database) we can get all kinds of stats about our services. Let's use that information to total up our bill for converting the 500 videos.
A total of about $1.78 for converting 500 videos means a per-video cost of less than $0.004. Pretty impressive. And, unlike traditional computing infrastructure that is a fixed cost no matter what your actual demand looks like, this infrastructure cost can track your demand exactly.
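The per-video figure is a one-line check, assuming the $1.78 batch total quoted above:

```python
total_cost_usd = 1.78    # total AWS bill for the 500-video batch, as quoted above
videos = 500
per_video = total_cost_usd / videos
print(round(per_video, 4))   # 0.0036, i.e. less than $0.004 per video
```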
Wrapping It Up
We've covered a lot of ground in this article. We've discussed the different Amazon Web Services involved, described a high-level architecture for combining those services into a scalable services framework and shown the performance and cost metrics of a video conversion service built with that architecture.
But that's really only the beginning. There's a lot more we could do to make this services framework even more useful, such as:
- Provide a browser interface for submitting videos for conversion. This would have to accept the POST'ed file submissions (and parameters) and then transfer the file to Amazon S3 and queue up a message to describe the work to be performed.
- Extend the video conversion service itself to handle a wider range of input formats and conversions. The ffmpeg program is very powerful and we should take better advantage of it.
- Load the status messages into a database so we can query about previous jobs and better track our service usage.
- Come up with a strategy for dynamically managing the EC2 instances rather than starting them up manually.
- Develop service support code in different languages. This article focused on my favorite language, Python, but since the main interface between consumers and producers in our architecture is via RFC822-style message headers we could easily write services in any language and have them interoperate.
- Lots, lots, more...
So get out there and produce your own scalable, reliable web services! AWS makes it easy.
Additional Resources
- Amazon Web Services:
- Amazon S3:
- Amazon SQS:
- Amazon EC2:
- boto: (also listed in the Resource Center; see below)
- Python:
- The Monster Muck Mashup public AMI (see below)
Mitch Garnaat is an independent software consultant living in Upstate New York. He has been designing and developing software for 20 years. For the past year, his focus has been on leveraging Amazon Web Services. He is the author of the open source boto library which provides a Python interface for an expanding set of Amazon Web Services and has been developing AWS-based applications for a variety of customers. | https://aws.amazon.com/articles/Python/691 | CC-MAIN-2014-10 | refinedweb | 4,085 | 61.26 |
For the past couple of days I have been playing with SubSonic – The Zero Code DAL, which is a really sweet code generation tool by Rob Conery that generates ActiveRecord Classes, a Stored Procedures Wrapper, a Query Object, etc. for the data access layer of your web applications. You can go the build provider route which creates the classes and collections in the background automagically, or run the provided code generation templates that will make the class files for you to import into your application.
I wrote a brief tutorial on it that you can read for more information:
ActiveRecord and Code Generation – Data Access Layer RAD Tools – SubSonic: The Zero Code DAL
but I suggest watching the webcast, which is excellent. I haven't listened to it yet, but I noticed that Rob talked about SubSonic earlier this month on .NET Rocks! as well (Oct 4th show). Kudos to Rob for his generosity in providing this great learning and productivity tool to the .NET community.
Table Schemas and Getting Database Schema Information
As you play with the code generated by SubSonic as well as dive into the source code, you will notice it has to go out and get schema information about each table in the database that you choose for inclusion into your data access layer. SubSonic queries the INFORMATION_SCHEMA Tables that hold the metadata for your database, allowing it to get the list of tables in the database as well as the column, primary key, and foreign key information for each table. Cool stuff. Here is an example of querying the INFORMATION_SCHEMA tables to get a list of tables in a SQL Server Database:
Get List of Tables in a Database – Query INFORMATION_SCHEMA.Tables – ADO.NET
You can also do this kind of thing using GetSchema in ADO.NET 2.0 as well as SQL Server Management Objects. The source code has a number of interesting queries to get database metadata which is worth checking out if you are interested.
Reading Table Schema Information from XML File Instead of Querying Database
For kicks, late last night I wanted to have SubSonic read the database schema from an XML file instead of going to the database. No real reason for doing this other than wondering how quickly I could pull it off. Although I could see some performance and security benefits from having it in a local repository as opposed to querying the database in real-time.
I envisioned separating the current data providers into 2 different providers: 1) DataProvider and 2) SchemaProvider. By treating them separately, one could get a little more flexibility about where to get the schema information as well as offer DDL services to create the database and tables based on the schema.
I decided to take a bit of a shortcut and piggyback on the current DataProvider because I needed to accomplish this in an hour.
Extracting Our New ISchemaProvider Interface
We are going to extract a new SchemaProvider Interface from the current data providers by grabbing the interface that only has to do with Schema Related Stuff…
interface ISchemaProvider
{
    string GetForeignKeyTableName(string fkColumnName, string tableName);
    string GetForeignKeyTableName(string fkColumnName);
    string[] GetSPList();
    IDataReader GetSPParams(string spName);
    string[] GetTableList();
    TableSchema.Table GetTableSchema(string tableName);
    string[] GetViewList();
    string ScriptData(string tableName);
    string ScriptSchema();
}
Now, because the current DataProviders implement this interface, we can use them as Schema Providers. Note we could also make an IDataProvider interface out of the leftover methods, etc., but I am not concerned with that now.
public class SqlDataProvider : DataProvider, ISchemaProvider
Modifying DataService Class to Use ISchemaProvider
The DataService Class handles all data services, handing them off to the proper Data Provider. This is beautiful, because here is where we are going to delegate schema related activities to our schema provider.
public class DataService
{
    public static ISchemaProvider _schemaProvider = null;

    // ...

    static ISchemaProvider SchemaProviderInstance
    {
        get
        {
            LoadProviders();
            return _schemaProvider;
        }
    }

    internal static void LoadProviders()
    {
        // ...
        SubSonicConfig.SchemaFile = section.SchemaFile;
        // ...
        // If no XML specified, use DataProvider,
        // else use XmlSchemaProvider.
        if (string.IsNullOrEmpty(SubSonicConfig.SchemaFile))
            _schemaProvider = _provider as ISchemaProvider;
        else
            _schemaProvider = new XmlSchemaProvider(SubSonicConfig.SchemaFile);
    }

    // ...

    public static TableSchema.Table GetTableSchema(string tableName)
    {
        return SchemaProviderInstance.GetTableSchema(tableName);
    }
}
I have set this all up using the provider model as used by SubSonic.
Specifying the XML File and the XMLSchemaProvider Class
I specify the path to the XML File in web.config using schemaFile="BlogTable.xml". Obviously this is just a shortcut as opposed to a complete IConfigurationSource type of idea.
<SubSonicService defaultProvider="SqlDataProvider" schemaFile="BlogTable.xml">
  <providers>
    <add name="SqlDataProvider" ... />
  </providers>
</SubSonicService>
I added the property to SubSonicConfig for completeness:
public static class SubSonicConfig
{
    // ...

    private static string _schemaFile = string.Empty;

    public static string SchemaFile
    {
        get { return _schemaFile; }
        set { _schemaFile = value; }
    }
}
And here is just a quick XmlSchemaProvider Class I created to test the idea that only handles the single method, GetTableSchema, by passing back the same schema for all tables as read from the XML file.
public class XmlSchemaProvider : ISchemaProvider
{
    TableSchema.Table _blogsTable;

    public XmlSchemaProvider(string xmlFile)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(TableSchema.Table));
        using (Stream fs = new FileStream(xmlFile, FileMode.Open))
        {
            _blogsTable = (TableSchema.Table)serializer.Deserialize(fs);
            fs.Close();
        }
    }

    // ...

    // Just for test. Returns a single table based on any name.
    public TableSchema.Table GetTableSchema(string tableName)
    {
        return _blogsTable;
    }

    // ...
}
Obviously my XmlSchemaProvider is lacking :), but it worked. I was able to read the schema from an XML File as opposed to the database based on information provided in web.config and the simple addition of a Schema Provider Service which is separate than a data service.
Conclusion
This showed a little about the ease of extending SubSonic to include a separate Schema Provider Service from the Data Provider Service. Although a real solution would be different, this provided some architectural value in how that might look and what you can do in an hour.
by David Hayden | http://codebetter.com/davidhayden/2006/10/28/subsonic-extending-the-zero-code-dal-with-a-schema-provider-service-for-kicks/ | CC-MAIN-2018-30 | refinedweb | 967 | 53.1 |
Some more great news from Microsoft. Popfly is now in beta mode and can be downloaded for free. They've also announced a Mashup contest where you can win a Zune or XBox 360 Halo 3 Special Edition.
Pretty cool stuff. Link: Microsoft Announces Popfly Beta and Mashup and Win contest.
I meant to post this a while ago. This is a presentation that I did for the Wisconsin .NET User's Group in Sept, focusing on Microsoft's implementation of Ruby (IronRuby). The session went through the features of Ruby, Ruby on Rails, the DLR architecture, IronRuby, and several different tools to get you up and running.
One of my main thrusts of the presentation is that I believe Ruby is going to be a truly cross-platform and cross-runtime language in the near future. Ruby itself already runs on almost every OS to include Windows, Linux and Mac and via Ruby on Rails can work with almost any database out there (MySql, Postgres, Firebird, Oracle, DB2, Sql Server, Sybase, etc). JRuby is providing a Ruby-runtime that integrates with the Java framework, similar to what IronRuby is doing for Ruby and .NET. Shortly you'll be able to develop a Ruby application and deploy it on any OS...and on any major run-time, be it by itself, on a Java stack or on a .NET stack. On top of that, you're already seeing significant support from the industry at large, to include ThoughtWorks (one of the first commercial JRuby implementations via Mingle), Borland (3rdRail Ruby IDE), JetBrains (Ruby IDE), Microsoft (IronRuby and the DLR), several Google Summer of Code projects, and industry icons such as Martin Fowler and Pragmatic Dave. Pretty powerful...in my humble view, Ruby has reached the tipping point.
DirectSupply hosted the meeting. It was an awesome facility and perfect for our audience (~150 in attendance).
One cool new development since my presentation is that Microsoft has decided to host IronRuby on RubyForge...making a truly open-source implementation. That's great news!
Download IronRuby.ppt
A minor release is out for Adapdev.NET, in support of the new Codus 1.4 release. Changes are primarily around the Adapdev.Data.Schema namespace to include adding support for native Oracle drivers, MySql foreign key retrieval, Sql Server Express 2005, and some small bug fixes.
Download is here.
This is a small update to address several minor bugs, the most important one being around sql generation for composite keys.
Latest binaries are here.
I'm pleased to announce the final release of Codus 1.4!
If you aren't familiar with Codus, it's a comprehensive code generation tool for object-relational mapping. It takes an existing database and automatically generates all of the code for updating, deleting, inserting and selecting records. In addition, it creates web services for distributed programming, strongly-typed collections, and a full set of unit tests.
What's New
This new release brings tons of new features, to include:
Thank Yous
This is an exciting new release that's been a long time coming. Many thanks to everyone that helped make it possible through extensive beta testing and input. In particular I'd like to thank Bhaskar Sharma of HCL Technologies for his code donations around VS2005 generation and n-n mappings. He also had several other improvements that hopefully will make it into a later release, to include support for optimistic locking and better code retention. Other kudos go to the following community members for their bug reports and feedback:
Codus Has Gone Commercial!
Codus is now being released as a commercial product. The support demands have grown substantially over the past year - Codus now averages almost 8,000 downloads per month - so a commercial model is the best option for continuing to grow and improve Codus and meet the increasing support demands. It will also open the doors for the completion of several other super secret products that are currently being worked on.
So, if you've used Codus in the past, I'd encourage you to purchase the latest version - details are here. All of the previous versions are still free and available for download. The commercial version is available in a Single Developer and Site License option. Purchase includes full source code, all minor updates, and priority support. In the near future you'll also get access to a member portal with nightly builds and access to the source code repository.
What's Next?
Two quick releases are planned over the next few months to support generation of Castle ActiveRecord mappings and our emerging Elementary framework. Those will comprise 1.5 and 1.6 respectively (and we may even sneak a few extras in!). After that, the focus is on version 2.0 which will be a ground up rewrite. Focus for 2.0 is:
Current direction is WPF for the interface and ClickOnce for deployment - but it's up for debate and we're definitely open to suggestions. Something else that's being floated is a model designer... Beyond that, we're looking at generation of ASP.NET websites and WinUI apps for database administration, along with support for several other ORM frameworks. Current roadmap is here. Lots of opportunities!
Thanks again to everyone that helped get this out the door!
Attached are the slides and videos from the Wisconsin .NET User's Group launch of .NET 3.0 and Vista.
Download Net30Launch.zip
Also, here are links to the 2 videos I showed:
German Coastguard
Interview in Northern Iraq
A new beta has been released following on the heels of 031307. Most important in this release is a bug fix for an issue that popped up in 031307. Codus wasn't copying the Oracle.DataAccess.dll to the generated output folder, so depending on your environment, if you try running the compiled code you'll get an error stating that it can't find the Oracle.DataAccess.dll. (Thanks to Darren Sellner for the screen shot and bug report)
To solve this, simply copy the Oracle.DataAccess.dll from your Codus install folder to the generated output folder and you should be good to go. Or, you can download the new beta. :)
There are also two major additions in this beta, making it feature complete:
Throw on top of that a few minor bug fixes, and it's pretty close to gold! I'll be addressing bugs over the next 1-2 weeks, with the goal of a final release at the end of the month.
Latest Beta (under Current Development Release):
The latest release of Elementary, an advanced ORM framework, is available. Elementary is currently in the early stages, but is being used in several commercial environments with great success. All feedback so far has been very positive and the framework is quite stable, so a 1.0 release isn't very far off.
This release addresses several minor bugs. The three major bugs that were fixed:
The download is available here:
If you're using Elementary, shoot me an email and let me know what you think and what you'd like to see added!
The latest 1.4 beta build of Codus is now available. This build addresses the following:
What's new in 1.4?
What's left:
IMPORTANT:
The naming for the Sql Server database connection options have changed. When you click on a saved database connection that's using Sql Server, you'll get an error saying the key can't be found. That's because it's looking for "Sql Server" and there are now two options "Sql Server 2000" and "Sql Server 2005". Simply select the new Sql Server option that you want to use and you're good to go.
Latest download is available here (under Current Development Release):
I'm currently on track for releasing the final 1.4 version by the end of this month. Please try out the beta and provide feedback! Thanks to everyone that identified the items above and provided suggestions so far.
Here are the slides from last week's presentation to the Wisconsin Fox Valley .NET User's Group. The session covered SubSonic and MonoRail, with a brief mention of the Patterns and Practices Web Client Software Factory.
Download 022107.ppt | http://feeds.feedburner.com/AdapdevTechnologies | crawl-001 | refinedweb | 1,385 | 65.32 |
It’s that time again — SpaceNet raised the bar in their third challenge to detect road-networks in overhead imagery around the world. Today, map features such as roads, building footprints, and points of interest are primarily created through manual techniques. In the third SpaceNet challenge, competitors were tasked with finding automated methods for extracting map-ready road networks from high-resolution satellite imagery. This move towards automated extraction of road networks will help bring innovation to computer vision methodologies applied to high-resolution satellite imagery and ultimately help create better maps where they are needed most such as humanitarian efforts, disaster response, and operations. For more details, check out the SpaceNet data repository on AWS and see our previous NVIDIA Developer Blog post on past SpaceNet challenges to extract building footprints.
In this post, we approach the current SpaceNet challenge from distinct perspectives. The first part of this blog describes how to directly leverage the full 8-band imagery and manipulate ground truth labels to obtain excellent road networks with relative ease and excellent performance. We next look at how we might exploit the material properties of the road surface itself by using the spectral aspect of the data to create a deep learning solution tailored for a specific spectral signature. Finally, we take creative liberties to think about how we might apply these types of deep learning solutions in a broader operational sense using conditional random fields, percolation theory, and reinforcement learning. Think of this like Bohemian Rhapsody for deep learning (minus the Grammy Hall of Fame).
Section 1 – Tiramisu and Manipulating the Truth
In this section we use the Tiramisu network, which is readily available in many forms on GitHub. Here we use the Semantic Segmentation Suite, written by George Seif, which incorporates many different segmentation methods. This network was chosen because it can provide highly accurate semantic segmentation masks for a variety of segmentation tasks and easily extends to the full 8-channel data (in contrast to just 3-channel RGB data). From a design perspective, Tiramisu extends the U-Net architecture by incorporating Densely Connected Convolutional Networks, converting the dense blocks into fully convolutional layers to enable upsampling. The advantages of these modifications include deeper supervision between layers. Given that the feature maps are shared throughout the dense blocks, this aids multi-scale supervision and introduces skip connections within and outside of each block, a feature shown to be extremely successful in Residual Networks.
To infer road networks using the SpaceNet data a number of preprocessing steps are required to create segmentation masks for training and evaluation. The tool sets provided by Cosmiq Works provide useful methods to convert from the line string graph formats into a segmentation mask allowing the user to specify the width of the segmented road. It is possible to generate a segmentation model with these tools, however it is evident upon visual inspection that these segmentation masks do not label all pixels which are associated with roads, highways and alleyways. For example a segment may either be too wide or too narrow to describe a specific road. In the case of a label being too wide this results in buildings, trees and other objects to be labelled incorrectly as road. Conversely, a too narrow label will result in potentially large areas of the road network being labelled as the background class. In some cases areas which clearly contain roads are not labelled as such. This label mixing can adversely affect accuracy, particularly at large intersections which, as described in the Cosmiq Works blog, are extremely important to label correctly to maximize the Average Path Length Similarity (APLS) metric.
Label Pre-Processing
To reduce this label misassignment we can use relatively straightforward image processing approaches. Whilst these approaches will not completely rectify the problem they will increase the number of pixels correctly labelled as a drivable surface and background classes. To achieve this the OpenCV floodFill function is employed. From the OpenCV website:
The functions floodFill fill a connected component starting from the seed point with the specified color. The connectivity is determined by the color/brightness closeness of the neighbor pixels. The pixel at (x,y) is considered to belong to the repainted domain if:

src(x',y') - loDiff <= src(x,y) <= src(x',y') + upDiff

where src(x',y') is the value of one of the pixel's neighbors that is already known to belong to the component (for a grayscale image with a floating range).
The floodFill function effectively generates a mask for all connected points which fulfil the above statement. Using this method it is possible to use a thinned version of the segmentation mask created by the Cosmiq Works functions to specify seed points for the floodFill function. Examples of the Las Vegas dataset processed using the Cosmiq Works and the floodFill methods are presented below in figure 2 (left: image, middle: Cosmiq Works boundary method, right: floodFill segmentation). It is clear that many more pixels are correctly labelled, particularly in parking lots, cul-de-sacs, and major trunk roads. In addition objects on or overhanging roads such as vehicles and vegetation are no longer labelled incorrectly.
The Tiramisu network is trained using Tensorflow on the full 8 band Vegas AOI dataset. The Cosmiq Works mask tool and the floodFill method are used for label generation and model accuracy is calculated using mean Intersection over Union (IoU) for both road and background classes and using the APLS metric. In all cases the networks were trained for 14 epochs taking approximately 6 hours per model per dataset per GPU when using NVIDIA Quadro GP100 GPUs. When using 8 band data the number of initial convolutional filters in the Tiramisu network is increased from 64 to 96 to increase the network capacity to learn multispectral features. The proportion of training, validation and test data is 70%, 15% and 15% respectively.
Results
Figure 3 shows the resulting masks from this process using the Tiramisu network on hold-out test data. The left image presents the ground truth mask and graph, the middle image the predicted mask and graph using the Cosmiq Works boundary function, and the right image the predicted mask and graph using the floodFill approach. The floodFill result is clearly superior to the boundary fill method. The wide highway and its junctions are detected, which is advantageous when calculating APLS. However, care must be taken in areas where floodFill selects areas which are clearly drivable, such as parking lots and side roads, but which are not labelled in the ground truth images.
The Tiramisu network trained on all 8 bands achieved a very respectable mean IoU of 0.89 and mean APLS of 0.60. This was achieved through simple image preprocessing and postprocessing steps and by training a standard open source model against the full dimensionality of the dataset without any major changes or additions.
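For reference, mean IoU over the two classes can be computed directly from the binary masks. The helper below is a sketch of the metric as described here, not the challenge's official scoring code.

```python
import numpy as np

def mean_iou(pred, truth):
    """Mean intersection-over-union for a binary segmentation, averaged
    over the road (1) and background (0) classes."""
    ious = []
    for cls in (0, 1):
        p = (pred == cls)
        t = (truth == cls)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both masks; skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```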
Wrapping Up
In this section we have discussed some image processing methods to aid data preparation which can significantly increase accuracy in some cases. Data labelling and formatting is at the heart of data science and artificial intelligence. Depending on the overall goals of your requirements the correct selection of data preprocessing methods, and selecting the most relevant set of evaluation metrics, can help you achieve them.
In the next section we focus on determining the best spectral content to use when training a deep learning network for a specific spectral signature.
Section 2: Focusing the Spectrum
In this section we want to think about the physical properties of the road materials we are trying to isolate as well as the surrounding observable environment and how the sensor’s characteristics might be able to help in the road extraction task. That is, we want to exploit material-specific spectral signatures within the imagery for efficient asset detection and localization. The SpaceNet road challenge data were collected by the DigitalGlobe Worldview-3 satellite. The 8-band multispectral images contain spectral bands for coastal blue, blue, green, yellow, red, red edge, near infrared 1 (NIR1) and near infrared 2 (NIR2) (with corresponding center wavelengths of 427, 478, 546, 608, 659, 724, 833 and 949 nm, respectively). This extended spectral range allows Worldview-3 imagery to be used to classify the actual material that is being imaged as shown in Figure 4 .
Spectral features for a given material can be thought of physical characteristics of the material that manifest as observed reflectance (or absorbance) changes at particular wavelengths. Traditional spectral analysis leverages these diagnostic features to perform material identification. For example, organic materials such as vegetation exhibit very distinct spectral features in the red and near-infrared (NIR) portion of the spectrum. These features are driven largely by the chlorophyll content of vegetation. Chlorophyll will strongly absorb energy in the visible portion of the spectrum and strongly reflect energy beginning in the red/near-infrared (wavelengths ~700 nm and longer).
Given that we are interested in roads, why do we care about vegetation? As you can see from Figure 5, asphalt tends to be a poor reflector in the red and near-infrared portions of the spectrum. In fact, man-made objects in general will exhibit this poor reflectance characteristic. These differences will manifest as a strong contrast between vegetation and manmade objects in the last three spectral bands of the Worldview 3 sensor which is easily exploited with deep learning.
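One standard way to quantify this red/NIR contrast is the Normalized Difference Vegetation Index (NDVI). It is not part of the challenge pipeline described here, but it shows how cheaply vegetation can be separated from asphalt with just two bands.

```python
import numpy as np

def ndvi(red, nir, eps=1e-6):
    """Normalized Difference Vegetation Index: chlorophyll absorbs red and
    reflects NIR, so vegetation scores near +1 while asphalt and other
    man-made surfaces sit near 0 or below."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + eps)
```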
Of course, this is not an exhaustive spectral analysis of all environments observed in the SpaceNet data but it does motivate the investigation of a pure R/NIR deep learning model. The additional caveat here is that this data has not been compensated for atmospheric effects and that has the potential to affect results. For the sake of simplicity, we will assume that the atmospheric properties are constant over the entire Las Vegas AOI. That is, the atmosphere distorts all AOI-2 images in the same way. And, since Las Vegas is a dry desert-like climate, it is reasonable to assume minimal atmospheric distortion.
Something else to keep in mind is that the bit depth of the imagery from satellite sensors is often higher than 8-bit. However, the majority of deep learning frameworks and generic image manipulation libraries cannot handle data with higher bit depths. In combination with tools such as numpy, one can convert the data down to the required 8-bit depth and use it in subsequent steps. The sample code below provides two methods for extracting band information and using numpy and scipy’s bytescale function to perform a contrast stretch on the raw data that clips the top 10% and bottom 2% of values.
import numpy as np
from scipy.misc import bytescale
from skimage import exposure

def retrieve_bands(ds, x_size, y_size, bands):
    stack = np.zeros([x_size, y_size, len(bands)])
    for i, band in enumerate(bands):
        src_band = ds.GetRasterBand(band)
        band_arr = src_band.ReadAsArray()
        stack[:, :, i] = band_arr
    return stack

def contrast_stretch(np_img, p1_clip=2, p2_clip=90):
    x, y, bands = np_img.shape
    return_stack = np.zeros([x, y, bands], dtype=np.uint8)
    for b in range(bands):
        cur_b = np_img[:, :, b]
        p1_pix, p2_pix = np.percentile(cur_b, (p1_clip, p2_clip))
        return_stack[:, :, b] = bytescale(exposure.rescale_intensity(cur_b, out_range=(p1_pix, p2_pix)))
    return return_stack
Segmenting road networks from R/NIR data
The last 3 bands of the MSI pan-sharpened imagery were extracted from the original imagery for AOI-2 in Las Vegas (i.e. red edge, NIR1, and NIR2). Each image was resized to 1024×1024 and scaled to 8-bits with a contrast stretch clips of 2% and 90%. Ground truth masks did not make use of any additional pre-processing (i.e. flood fill). Training and validation datasets were created with a typical 20% split. A sample R/NIR training image is show below. It is interesting to note the pan-sharpening artifacts manifesting as blur in the vegetation, namely trees.
The same Tiramisu architecture (described in section 1) was used again for this exercise. Since we are using a fully convolutional network (FCN) we are not restricted by input size at inference, however it does need to be a multiple of the original training dimensions. For this effort a training size of 512x512 was chosen. Conveniently, the Geospatial Data Abstraction Library (GDAL) includes method calls to resize geospatial datasets while preserving the geographic metadata via a warping function. For added functionality, this can be done either to a file or in memory, depending on where you choose to insert this process in your workflow. The sample code below shows the GDAL API call used to do a resize in memory.
warp_ds = gdal.Warp('', img_fn, format='MEM', width=resize[0], height=resize[1], resampleAlg=gdal.GRIORA_NearestNeighbour, outputType=gdal.GDT_Byte)
Results
The model was trained for ~50 epochs with a batch size of 1 on the R/NIR data for Vegas. Shown below are some samples from the segmentation results generated by the model.
The segmentation results were processed using some custom tools and the provided APIs and tools to extract a road network (represented by a graph) and calculate the APLS score per image. Below are the companion road network predictions for the presented samples.
The model trained on only the R/NIR data produced a mean IoU of 0.71 and a mean APLS score of 0.56.
Wrapping Up
Above we discussed using a subset of the spectral data to train a deep learning model for road extraction. While there are a few implicit assumptions in this approach (e.g. atmospheric corrections), this method significantly reduces the amount of data needed for model training by focusing on a specific spectral signature. Compared to the full 8-band model, these R/NIR model results are slightly degraded but potentially more desirable given the data reduction and elimination of pre-processing overhead (i.e. flood-fill). Although, the minor performance reduction could simply be due to the fact that this approach uses a lot less data!
In the next section we are going to go off the beaten path and think about things slightly differently.
Section 3: Thinking Outside the Box
In this section we leverage the NVIDIA GPU Cloud resources together with the next-generation of AWS EC2 compute-optimized GPU instances. The AWS P3 instances are powered by up to 8 of the latest-generation NVIDIA Tesla V100 GPUs and are ideal for computationally advanced workloads such as machine learning (ML), high performance computing (HPC), data compression, and cryptography. Based on NVIDIA’s latest Volta architecture, each Tesla V100 GPU provides up to 125 TOPS of mixed-precision performance, 15.7 TFLOPS of single precision (FP32) performance and 7.8 TFLOPS of double precision (FP64) performance. For machine learning and deep learning applications, the P3 instance offers up to 14x performance improvement over previous P2 instances based on the much older NVIDIA Tesla K80 GPUs, allowing developers to train their machine learning models in hours rather than days.
For road segmentation we utilize the awesome Mask R-CNN deep learning network architecture implemented by Matterport available on GitHub. Mask R-CNN is a flexible framework for object instance segmentation which efficiently detects objects in an image while concurrently generating high-quality segmentation masks for each instance. To do this, Mask R-CNN extends Faster R-CNN by adding an additional branch for predicting an object mask together with the original branch for bounding boxes.
For training Mask R-CNN we start with the model sufficiently trained in advance on MS COCO data. The network is configured with 128 training ROIs per image, RPN anchor scales of (8, 16, 32, 64, 128), and an RGB pan-sharpened input image size of 512x512x3. Over all four areas of interest in the SpaceNet dataset there are 2731 total training images with associated ground truth road labels. These data were divided into a 90/10 split of 2458 images used to further train the network and 273 holdout images for validation. A training image label consists of an arbitrary number of road lines, each specified as a list of points in pixel space. Bresenham's line algorithm was used to convert these lines to binary segmentation mask labels for network training. It is important to note that Mask R-CNN is an object detection network at heart and therefore each road line must be treated as a separate object mask rather than combining all road lines into a single binary mask of size 512x512. That is, if a training image specified N road lines then the associated training label for that image would be a binary mask of size 512x512xN where label[:,:,i] is the binary mask for the i'th road line. The network used here was trained with a batch size of 8 images for 40 total epochs with 300 steps per epoch.
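The rasterization step is compact. Below is a sketch of the Bresenham integer line walk used to turn each road line's endpoints into mask pixels; coordinates here are (row, col) and the function name is illustrative.

```python
def bresenham(y0, x0, y1, x1):
    """Integer pixel coordinates along the segment (y0,x0)-(y1,x1);
    setting these pixels to 1 rasterizes a road line into a binary mask."""
    pts = []
    dy, dx = abs(y1 - y0), abs(x1 - x0)
    sy = 1 if y0 < y1 else -1
    sx = 1 if x0 < x1 else -1
    err = dx - dy
    y, x = y0, x0
    while True:
        pts.append((y, x))
        if (y, x) == (y1, x1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return pts
```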
A traditional approach for working with object detectors is to threshold detections scores (confidence metric from 0 to 1) at 0.5 so that detections scoring less than 0.5 are discarded. Similarly, the segmentation mask produced by the Mask R-CNN network provides a softmax probability for each image pixel as “road” or “not-road”. Those pixels scoring, say, 0.87 have high probability of being road while a pixel score of 0.21 would be a weak indication of “road”. Therefore, a common post-processing approach is to binarize the resulting segmentation mask by converting all softmax probabilities below 0.5 to 0 and all probabilities 0.5 and above to 1.
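Since Mask R-CNN emits one soft mask per detected road line plus a confidence score, the 0.5 threshold applies twice: once to detection scores and once to per-pixel probabilities. A minimal sketch (the function name and the H x W x N array layout follow the label convention above; both are assumptions, not the post's exact code):

```python
import numpy as np

def fuse_instance_masks(masks, scores, score_thresh=0.5, pix_thresh=0.5):
    """Fuse per-instance soft masks (H x W x N) into one binary road mask:
    drop low-scoring detections, binarize the survivors' per-pixel
    probabilities, and take their union."""
    keep = scores >= score_thresh
    if not keep.any():
        return np.zeros(masks.shape[:2], dtype=np.uint8)
    hard = masks[:, :, keep] >= pix_thresh
    return hard.any(axis=2).astype(np.uint8)
```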
Another popular approach to aid in the pixel classification decision processes is a technique called conditional random fields (CRFs) which is often employed for “structured prediction” in pattern recognition and hence readily applied in semantic segmentation post-processing. Essentially, a CRF is a probabilistic graphical model that encodes known relationships between observations and produces interpretations consistent with those relationships. A discrete classifier such as threshold classifier described above, on the other hand, predicts a label for a single sample without considering “neighboring” samples while a CRF can take context into account. It is crucially important to view the deep learning network as a technique for producing information that informs a decision process rather than a device that makes a decision.
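A full dense-CRF inference step is beyond the scope of this post, but the core intuition that neighboring pixels should agree can be illustrated with a much cruder ICM-style majority filter. This is a stand-in for a CRF, not an implementation of one.

```python
import numpy as np

def smooth_labels(hard_mask, iters=1):
    """Relabel each pixel to the majority vote of its 3x3 neighborhood.
    A dense CRF does this far more rigorously via pairwise potentials and
    approximate inference; this pass only conveys the context idea."""
    m = hard_mask.astype(np.int32)
    for _ in range(iters):
        padded = np.pad(m, 1, mode='edge')
        votes = np.zeros_like(m)
        for dy in (0, 1, 2):
            for dx in (0, 1, 2):
                votes += padded[dy:dy + m.shape[0], dx:dx + m.shape[1]]
        m = (votes >= 5).astype(np.int32)  # majority of the 9 votes
    return m
```

Isolated "road" speckles get erased while solid road bodies survive, which is the qualitative effect CRF post-processing has on the segmentation masks.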
Here are a few Mask R-CNN results with the aforementioned post-processing approaches:
Applications
In this section we want to highlight how we might work with and apply this data for real operations. At this point, given some overhead image we can produce an "educated" guess for where roads might be within that image. So, now what? For starters, given a map of interconnected roads we can start to ask basic questions like "can we get from A to B?" or similarly "what are all the traversable locations from location X?" Just because two points are "connected" does not tell us exactly how we might actually go about getting from point A to B. Notice that creating a simple binary test (yes/no) for
bool = is_connected(mask,A,B)
is quite computationally efficient compared to something like “find the optimal path from A to B”. This raises another question, given some road segmentation mask, what is meant by “optimal path” between two points? In routing applications we often think of “optimal” as “shortest” and/or “fastest” but here we also have to consider that we’re not entirely certain where the road actually is. We can therefore start to think about things like the maximum likelihood route between two points (hint: it is not likely going to be the shortest route). Again, this underpins the notion that the deep learning network as a technique for producing information that informs a decision process.
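Such a binary test can be sketched as a plain BFS over the mask; mask is assumed to be a binary grid and A, B are (row, col) tuples.

```python
from collections import deque

def is_connected(mask, a, b):
    """True if road pixels a and b lie in the same 4-connected component
    of the binary road mask. O(pixels) worst case; no path is returned."""
    if not (mask[a[0]][a[1]] and mask[b[0]][b[1]]):
        return False
    h, w = len(mask), len(mask[0])
    seen = {a}
    q = deque([a])
    while q:
        y, x = q.popleft()
        if (y, x) == b:
            return True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] \
                    and (ny, nx) not in seen:
                seen.add((ny, nx))
                q.append((ny, nx))
    return False
```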
Percolation
If we think of an image as a grid of vertices with edges between vertices (i.e. pixels) with probability p, the description is strikingly similar to a domain of statistical physics called percolation theory. In general, percolation refers to the movement of fluids through porous materials and over the last decades the theory of percolation has brought new understanding and techniques to a broad range of topics in physics, materials science, complex networks, epidemiology, and other fields. A famous problem in percolation theory involves systems comprised of randomly distributed insulating and metallic materials: what fraction of the materials need to be metallic so that the composite system is an electrical conductor? If a current passes through the system from top to bottom then the system is said to percolate. In such a system a site (i.e. grid point) is “open” or “vacant” with probability p and “blocked” with probability q = (1-p).
What makes this problem famous is the system phase change that occurs above a critical vacancy probability Pc. That is, systems with site vacancy below the critical threshold, p < Pc, almost never percolate and systems with p > Pc almost always percolate.
Since the segmentation mask provides a probability for each pixel we can think of each pixel as a Bernoulli random variable. That is, think of each pixel as a biased coin that when flipped produces heads with probability p and talis with the “leftover” probability of q = (1-p) — just like the site percolation problem above. In this way we create a binary (0 or 1) mask where each pixel is assigned heads = “road” = 1 or tails = “not-road” = 0. Once we have a random road network realization we can grow (i.e. percolate) connected components. Below shows a few examples of using the segmentation softmax probabilities to produce random realization of the road network and their associated connected components. For any two points in a connected component, there exists a path between those points.
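Drawing one such realization is a single vectorized Bernoulli sample per pixel; repeating the draw many times and checking connectivity each time yields a Monte Carlo estimate of the probability that two points are connected. A sketch, assuming NumPy's Generator API:

```python
import numpy as np

def sample_realization(prob_mask, rng):
    """Draw one random road-network realization: each pixel is 'road'
    with its softmax probability (independent coin flip per pixel)."""
    return (rng.random(prob_mask.shape) < prob_mask).astype(np.uint8)
```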
Routing versus Navigation
Percolation was a useful way to investigate connected components and build a deeper understanding of connectivity from a probabilistic perspective, providing a simple yet robust formalism for testing whether two points are connected. However, from an operations standpoint, given some objective or goal we really need to use the information provided for routing purposes. That is, how do we move beyond probabilistic connectivity to more useful capabilities such as optimal navigation of the environment? Given some graph of vertices and edges we can, no doubt, compute the shortest path between points. Although, if we consider an image with dimensions 512x512 pixels that translates to a graph with 262,144 vertices, which means shortest path computations will scale poorly as the environment size increases. For example, Dijkstra's algorithm has worst-case performance of O(|E| + |V|log|V|) where |E| is the number of edges and |V| is the number of vertices, and for a grid graph of size n x m we have |V| = nm and |E| has a maximum possible value of (n-1)m + (m-1)n. With the native image resolution of 1300x1300, the SpaceNet roads challenge data produces graphs with 1,690,000 vertices and 3,377,400 possible edges. These numbers quickly become intractable for 4k and 8k overhead imagery. Operationally, given some goal location, it can be quite computationally expensive to determine the shortest path from an arbitrary location on-the-fly. Furthermore, we still need to incorporate the environmental uncertainty of where the road actually is! Another option to consider is the A* algorithm (pronounced "A star") which generally achieves better performance by using heuristics (i.e. environmental information in this case) to guide the search process. The A* algorithm is an informed best-first search process where the search considers the vertices that appear to lead most quickly to the goal.
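A minimal grid A* sketch is below; per-cell costs could come from the road probabilities (e.g. a -log p style penalty), and the Manhattan heuristic stays admissible as long as each step cost is at least 1. Names and the cost scheme are illustrative.

```python
import heapq

def astar(cost, start, goal):
    """A* over a grid: cost[y][x] is the cost of stepping onto a cell.
    Returns the total cost of a cheapest path, or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(cost), len(cost[0])
    best = {start: 0.0}
    heap = [(h(start), 0.0, start)]
    while heap:
        f, g, (y, x) = heapq.heappop(heap)
        if (y, x) == goal:
            return g
        if g > best.get((y, x), float('inf')):
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols:
                ng = g + cost[ny][nx]
                if ng < best.get((ny, nx), float('inf')):
                    best[(ny, nx)] = ng
                    heapq.heappush(heap, (ng + h((ny, nx)), ng, (ny, nx)))
    return None
```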
While A* can incorporate environmental information to the search process and thus typically converge faster to a path between two points, there are still some things to think about. For example, these routing approaches work well if we know exactly the starting point to the goal. However, if for some reason we find ourselves in a totally different location on the map we then need a new route computed to the goal. One might naively think to precompute routes to the goal from all possible starting locations but that is a lot of routes to take in your back pocket as you parachute out of an airplane. Perhaps rather than explicit routes what we need is a way to navigate the uncertain environment from wherever our starting point ends up to be.
Gridworld 2049
Rather than worry about computing explicit routes between all possible starting location and the goal (an intractable solution at any rate) we can simply leverage reinforcement learning to generate an optimal action from each location that leads to the goal. This type of solution allows for goal navigation from any location on the map which accounts for all the environmental uncertainty up front. At the core, reinforcement learning (RL) is a dressed up Markov decision process (MDP) which is itself a form of discrete-time stochastic control process. Here MDPs provide a mathematical framework for modeling decision making in environments where outcomes are partly random and loosely under the control of a decision maker. MDPs are widely used in optimization problems and solved using dynamic programing (DP) and Bellman’s equation. For more RL content and discussion see previous NVIDIA Developer Blog posts (RL nutshell and OpenAI Gym). Just as MNIST is the iconic deep learning exercise, Gridworld is the classic RL example.
For our application of RL, we will assign some goal state on the road and use the softmax probabilities produced by Mask R-CNN to define the transition probabilities. In a traditional gridworld, there are only four possible actions {up, down, left, right} so that is what we will use for our navigation example here. Notice that even if the neighboring grid location has high value, we discount that value if it leads to a non-traversable location (i.e. low probability of being road). Using simple policy iteration approach (Sutton and Barto, Ch 4) we can determine high-value actions at each grid point which lead to the goal state. One can almost think of this as a coarse vector field that illuminates a force pulling agents in the environment towards to the goal. As a simple example, in Figure 20 below, we consider image 442 from AOI-5 in Khartoum with a goal state in an intersection near the top of the image marked with a large red dot. After a few hundred iterations the optimal policy begins to emerge with clear direction to the goal state (Figure 21 below). Notice that we now have an optimal action for each pixel. Even if the pixel is not a road surface, the policy prescribes the action which leads to the nearest road (i.e. high value areas) and from there to the goal state (obtain maximum reward).
MDPs and RL techniques provide a powerful framework for decision making under uncertainty. Furthermore, these methods can easily accommodate additional environmental information such as buildings (not traversable, must navigate around) and utility functions (i.e. how painful are type I and type II errors), so forth and so on. Additionally, simple policy iteration methods exhibit massive parallelism and can be quickly updated on-the-fly as new information becomes available in an operations or situational awareness type scenario. And finally, this type of decision framework extends naturally to more complex state and reward descriptions to methods such as DeepQ learning (deepRL) and Monte Carlo search trees which led to the historic AlphaGo championship win.
Summary and Closing Comments
In this blog we have presented various methods to model and infer road networks to predict routes between locations within the SpaceNet road detection dataset. This includes experimenting with preprocessing methods to increase model accuracy, using domain knowledge to select the most appropriate spectral bands for a given target of interest, and combining percolation theory and reinforcement learning techniques to calculate navigable and efficient paths directly from imagery.
To enable this we utilized high performance GPU compute resources provided by the NVIDIA GPU Cloud (NGC) and AWS. Using pre-configured NGC framework containers and AMIs eliminated many hours (or days) of setup and provided optimized performance-tuned deep learning frameworks. The Amazon P3 instances provided low-cost easy access to state-of-the-art GPUs for fast network training in just a few hours using the latest NVIDIA Tesla V100 GPUs. Just think, a few years ago this same work would have taken days to configure, tune, debug, and train while costing thousands of dollars for enterprise hardware access. Today with NGC and AWS, we can knock out proof of concept work like this in just hours for around $50. Join us at our GPU Technology Conference 2018 for more great content and the latest in deep learning and artificial intelligence! Use code CMGOV to receive a 25% discount on your GTC registration. See you there! | https://devblogs.nvidia.com/solving-spacenet-road-detection-challenge-deep-learning/ | CC-MAIN-2020-24 | refinedweb | 4,744 | 50.16 |
[
]
Konstantin Shvachko updated HADOOP-2585:
----------------------------------------
Attachment: SecondaryStorage.patch
This is a new patch that
# does not contain code that was fixed in HADOOP-3069;
# fixes findbugs from the previous run;
# fixes TestCheckpoint failure.
The letter was tricky. TestCheckpoint failed on Hudson but not on any of other machines I
tested it. The failure is related to that when the name-node started it did not get an exclusive
lock for its storage directory as required. I initially suspected that this is a Solaris problem,
but later realized that it is a NFS problem, which may not support exclusive locks consistently.
IMO we should enforce exclusivity of locks only if it is supported by a local file system.
So the test now checks whether exclusive locks are supported before failing.
> Automatic namespace recovery from the secondary image.
> ------------------------------------------------------
>
> Key: HADOOP-2585
> URL:
> Project: Hadoop Core
> Issue Type: New Feature
> Components: dfs
> Affects Versions: 0.16.0
> Reporter: Konstantin Shvachko
> Assignee: Konstantin Shvachko
> Attachments: SecondaryStorage.patch, SecondaryStorage.patch
>
>
> Hadoop has a three way (configuration controlled) protection from loosing the namespace
image.
> # image can be replicated on different hard-drives of the same node;
> # image can be replicated on a nfs mounted drive on an independent node;
> # a stale replica of the image is created during periodic checkpointing and stored on
the secondary name-node.
> Currently during startup the name-node examines all configured storage directories, selects
the
> most up to date image, reads it, merges with the corresponding edits, and writes to the
new image back
> into all storage directories. Everything is done automatically.
> If due to multiple hardware failures none of those images on mounted hard drives (local
or remote)
> are available the secondary image although stale (up to one hour old by default) can
be still
> used in order to recover the majority of the file system data.
> Currently one can reconstruct a valid name-node image from the secondary one manually.
> It would be nice to support an automatic recovery.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online. | http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/200804.mbox/%3C1713167131.1207882084841.JavaMail.jira@brutus%3E | CC-MAIN-2018-05 | refinedweb | 351 | 54.32 |
[gforth]
/
gforth
/ tags.fs
gforth: gforth/tags.fs
1 :
pazsan
1.1
\ VI tags support for GNU Forth.
2 : :
45 :
require search.fs
46 :
require extend.fs
47 :
79 :
sourcefilename r@ write-file throw
80 :
#tab r> emit-file throw ;
81 :
82 :
: put-tags-entry ( -- )
83 :
\ write the entry for the last name to the TAGS file
84 :
\ if the input is from a file and it is not a local name
85 :
source-id dup 0<> swap -1 <> and \ input from a file
86 :
current @ locals-list <> and \ not a local name
87 :
last @ 0<> and \ not an anonymous (i.e. noname) header
88 :
if
89 :
tags-file-id >r
90 :
r@ put-load-file-name
91 :
last @ name>string r@ write-file throw
92 :
#tab r@ emit-file throw
93 :
s" /^" r@ write-file throw
94 :
source drop >in @ r@ write-file throw
95 :
s" $/" r@ write-line throw
96 :
rdrop
97 :
endif ;
98 :
99 :
: (tags-header) ( -- )
100 :
defers header
101 :
put-tags-entry ;
102 :
103 :
' (tags-header) IS header
CVS Admin
ViewCVS 1.0-dev
ViewCVS and CVS Help | http://www.complang.tuwien.ac.at/viewcvs/cgi-bin/viewcvs.cgi/gforth/tags.fs?annotate=1.1&sortby=rev&only_with_tag=MAIN | CC-MAIN-2013-20 | refinedweb | 182 | 79.7 |
I am writing a program that is supposed to take an amount of change the user imputs and then the program is supposed to output the number of quarters, dimes, nickles, and pennies needed to make up the amount given prior.
I get the program to run and able to input the change. When the output of coins is given you get the right number of qurters multiplied by 100. Ex. If you put in 75 for change you will get 300 for quarters, 3000 for dimes, etc, etc. The first digit is correct but the rest isn't.
It doesn't make sense to me. Are my equations written wrong or did I go about this the wrong way.
Thank you in advance and helping me work through this.Thank you in advance and helping me work through this.Code:#include <iostream> int change; int quarter; int dime; int nickle; int pennie; int main() { std::cout << "Amount of change: " ; std::cin >> change ; quarter = ( change / .25 ) ; dime = ( change / .25 ) / .1 ; nickle = ( ( change / .25 ) / .1 ) / .05 ; pennie = ( ( ( change / .25 ) / .1 ) / .5 ) / .01 ; std::cout << "Anount due:" << '\n' ; std::cout << " quarters: " << quarter ; std::cout << " dimes: " << dime ; std::cout << " nickles: " << nickle ; std::cout << " pennies: " << pennie ; return (0); } | http://cboard.cprogramming.com/cplusplus-programming/122219-problem-change-return-problem.html | CC-MAIN-2014-49 | refinedweb | 204 | 82.95 |
I am currently working on a small piece of code that I want to go through a user-inputted folder and rename all the files in there depending on certain criteria.
At the moment, the user enters the filename using this code:
src = input("Please enter the folder path where the files are located")
for f in glob.glob("*reference*" + "*letter*"):
new_filename = "203 Reference Letter" + " " + name
os.rename(f,new_filename)
You can use os.path.join to join the user input to the desired pattern:
import os.path src = input('Please enter the folder path where the files are located: ') if not os.path.isdir(src): print('Invalid given path.') exit(1) path = os.path.join(src, '*reference*letter*') for f in glob.glob(path): new_filename = '203 Reference Letter {}'.format(name) os.rename(f, new_filename)
I do not know what is the pattern used in
glob, but basically you join the user input folder to any pattern. | https://codedump.io/share/cddvT84jiltz/1/how-to-use-glob-in-python-to-search-through-a-specific-folder | CC-MAIN-2017-39 | refinedweb | 157 | 59.3 |
- NAME
- DESCRIPTION
- INCLUDED BUNDLES
- INCLUDED TOOLS
- INCLUDED PLUGINS
- INCLUDED REQUIREMENT CHECKERS
- SEE ALSO
- CONTACTING US
- SOURCE
- MAINTAINERS
- AUTHORS
NAME
Test2::Suite - Distribution with a rich set of tools built upon the Test2 framework.
DESCRIPTION
Rich set of tools, plugins, bundles, etc built upon the Test2 testing library. If you are interested in writing tests, this is the distribution for you.
WHAT ARE TOOLS, PLUGINS, AND BUNDLES?
- TOOLS
Tools are packages that export functions for use in test files. These functions typically generate events. Tools SHOULD NEVER alter behavior of other tools, or the system in general.
- PLUGINS
Plugins are packages that produce effects, or alter behavior of tools. An example would be a plugin that causes the test to bail out after the first failure. Plugins SHOULD NOT export anything.
- BUNDLES
Bundles are collections of tools and plugins. A bundle should load and re-export functions from Tool packages. A bundle may also load and configure any number of plugins.
If you want to write something that both exports new functions, and effects behavior, you should write both a Tools distribution, and a Plugin distribution, then a Bundle that loads them both. This is important as it helps avoid the problem where a package exports much-desired tools, but also produces undesirable side effects.
INCLUDED BUNDLES
- Test2::V#
These do not live in the bundle namespace as they are the primary ways to use Test2::Suite.
The current latest is Test2::V0.
use Test2::V0; #::V0 for complete documentation.
- Extended
** Deprecated ** See Test2::V0
use Test2::Bundle::Extended; #::Bundle::Extended for complete documentation.
- More
use Test2::Bundle::More; use strict; use warnings; plan 3; # Or you can use done_testing at the end ok(...); is(...); # Note: String compare is_deeply(...); ... done_testing; # Use instead of plan
This bundle is meant to be a mostly drop-in replacement for Test::More. There are some notable differences to be aware of however. Some exports are missing:
eq_array,
eq_hash,
eq_set,
$TODO,
explain,
use_ok,
require_ok. As well it is no longer possible to set the plan at import:
use .. tests => 5.
$TODOhas been replaced by the
todo()function. Planning is done using
plan,
skip_all, or
done_testing.
See Test2::Bundle::More for complete documentation.
- Simple
use Test2::Bundle::Simple; use strict; use warnings; plan 1; ok(...);
This bundle is meant to be a mostly drop-in replacement for Test::Simple. See Test2::Bundle::Simple for complete documentation.
INCLUDED TOOLS
- Basic
Basic provides most of the essential tools previously found in Test::More. However it does not export any tools used for comparison. The basic
pass,
fail,
okfunctions are present, as are functions for planning.
See Test2::Tools::Basic for complete documentation.
- Compare
This provides
is,
like,
isnt,
unlike, and several additional helpers. Note: These are all deep comparison tools and work like a combination of Test::More's
isand
is_deeply.
See Test2::Tools::Compare for complete documentation.
- ClassicCompare
This provides Test::More flavored
is,
like,
isnt,
unlike, and
is_deeply. It also provides
cmp_ok.
See Test2::Tools::ClassicCompare for complete documentation.
- Class
This provides functions for testing objects and classes, things like
isa_ok.
See Test2::Tools::Class for complete documentation.
- Defer
This provides functions for writing test functions in one place, but running them later. This is useful for testing things that run in an altered state.
See Test2::Tools::Defer for complete documentation.
- Encoding
This exports a single function that can be used to change the encoding of all your test output.
See Test2::Tools::Encoding for complete documentation.
- Exports
This provides tools for verifying exports. You can verify that functions have been imported, or that they have not been imported.
See Test2::Tools::Exports for complete documentation.
- Mock
This provides tools for mocking objects and classes. This is based largely on Mock::Quick, but several interface improvements have been added that cannot be added to Mock::Quick itself without breaking backwards compatibility.
See Test2::Tools::Mock for complete documentation.
- Ref
This exports tools for validating and comparing references.
See Test2::Tools::Ref for complete documentation.
- Spec
This is an RSPEC implementation with concurrency support.
See Test2::Tools::Spec for more details.
- Subtest
This exports tools for running subtests.
See Test2::Tools::Subtest for complete documentation.
- Target
This lets you load the package(s) you intend to test, and alias them into constants/package variables.
See Test2::Tools::Target for complete documentation.
INCLUDED PLUGINS
- BailOnFail
The much requested "bail-out on first failure" plugin. When this plugin is loaded, any failure will cause the test to bail out immediately.
See Test2::Plugin::BailOnFail for complete documentation.
- DieOnFail
The much requested "die on first failure" plugin. When this plugin is loaded, any failure will cause the test to die immediately.
See Test2::Plugin::DieOnFail for complete documentation.
- ExitSummary
This plugin gives you statistics and diagnostics at the end of your test in the event of a failure.
See Test2::Plugin::ExitSummary for complete documentation.
- SRand
Use this to set the random seed to a specific seed, or to the current date.
See Test2::Plugin::SRand for complete documentation.
- UTF8
Turn on utf8 for your testing. This sets the current file to be utf8, it also sets STDERR, STDOUT, and your formatter to all output utf8.
See Test2::Plugin::UTF8 for complete documentation.
INCLUDED REQUIREMENT CHECKERS
- AuthorTesting
Using this package will cause the test file to be skipped unless the AUTHOR_TESTING environment variable is set.
See Test2::Require::AuthorTesting for complete documentation.
- EnvVar
Using this package will cause the test file to be skipped unless a custom environment variable is set.
See Test2::Require::EnvVar for complete documentation.
- Fork
Using this package will cause the test file to be skipped unless the system is capable of forking (including emulated forking).
See Test2::Require::Fork for complete documentation.
- RealFork
Using this package will cause the test file to be skipped unless the system is capable of true forking.
See Test2::Require::RealFork for complete documentation.
- Module
Using this package will cause the test file to be skipped unless the specified module is installed (and optionally at a minimum version).
See Test2::Require::Module for complete documentation.
- Perl
Using this package will cause the test file to be skipped unless the specified minimum perl version is met.
See Test2::Require::Perl for complete documentation.
- Threads
Using this package will cause the test file to be skipped unless the system has threading enabled.
Note: This will not turn threading on for you.
See Test2::Require::Threads for complete documentation.
SEE ALSO
See the Test2 documentation for a namespace map. Everything in this distribution uses Test2.
Test2::Manual is the Test2 Manual.
CONTACTING US
Many Test2 developers and users lurk on irc://irc.perl.org/#perl.-Suite can be found at.
MAINTAINERS
AUTHORS
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See | http://web-stage.metacpan.org/pod/Test2::Suite | CC-MAIN-2021-21 | refinedweb | 1,140 | 51.85 |
Backport of Python 2.7's collections module
Project description
backport_collections is a backport of Python 2.7’s collections module for Python 2.6.
What is backported?
Counter, deque, OrderedDict and namedtuple are backported. The rest of the members of the collections module are still exposed. Note though that some ABC classes are slighlty different (see known issues below).
Usage
To use it just import the desired classes from the module backport_collections. Example:
from backport_collections import Counter from backport_collections import deque from backport_collections import OrderedDict from backport_collections import namedtuple
Known Issues
- In Python 2.6 Issue 9137 is not fixed as it complains if it gets a keyword argument named self. The error is TypeError: update() got multiple values for keyword argument 'self'. Additionally the keyword argument cannot be called other either as it will think it is the full dict. No error is raised in this case.
- In Python 2.6 Issue 8743 is not fully fixed: Set interoperability with real sets
License
The Python Software Foundation License.
Changes
- v0.1 (15/08/2014): Synced to revision
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/backport_collections/ | CC-MAIN-2018-26 | refinedweb | 206 | 59.8 |
The.
My previous post was mainly about the overloaded algorithms. In case you are curious, read the post Parallel Algorithm of the Standard Template Library.
Today, I'm writing about the seven new algorithms. Here are they.
std::for_each_n
std::exclusive_scan
std::inclusive_scan
std::transform_exclusive_scan
std::transform_inclusive_scan
std::parallel::reduce
std::parallel::transform_reduce
Beside of std::for_each_n this names are quite unusual. So let me make a short detour and write a little bit about Haskell.
To make the long story short. All new functions have a pendant in the pure functional language Haskell.
Before I show you Haskell in action, let me say a few words to the different functions.
Now comes the action. Here is Haskell's interpreter shell.
(1) and (2) define a list of integers and a list of strings. In (3), I apply the lambda function (\a -> a * a) to the list of ints. (4) and (5) are more sophisticated. The expression (4) multiplies (*) all pairs of integers starting with the 1 as neutral element of the multiplication. Expression (5) does the corresponding for the addition. The expressions (6), (7), and (9) are quite challenging to read for the imperative eye. You have to read them from right to left. scanl1 (+) . map(\a -> length a (7) is a function composition. The dot (.) symbol compose the two functions. The first function maps each element to its length, the second function adds the list of lengths together. (9) is similar to 7. The difference is, that foldl produces one value and requires an initial element. This is 0. Now, the expression (8) should be readable. The expression successively joins two strings with the ":" character.
I think you wonder why I write in a C++ blog so much challenging stuff about Haskell. That is for two good reasons. At first, you know the history of the C++ functions. And at second, it's a lot easier to understand the C++ function if you compare them with the Haskell pendants.
So, let's finally start with C++.
I promised, it may become a little bit difficult to read.
// newAlgorithm.cpp
#include <hpx/hpx_init.hpp>
#include <hpx/hpx.hpp>
#include <hpx/include/parallel_numeric.hpp>
#include <hpx/include/parallel_algorithm.hpp>
#include <hpx/include/iostreams.hpp>
#include <string>
#include <vector>
int hpx_main(){
hpx::cout << hpx::endl;
// for_each_n
std::vector<int> intVec{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}; // 1
hpx::parallel::for_each_n(hpx::parallel::execution::par, // 2
intVec.begin(), 5, [](int& arg){ arg *= arg; });
hpx::cout << "for_each_n: ";
for (auto v: intVec) hpx::cout << v << " ";
hpx::cout << "\n\n";
// exclusive_scan and inclusive_scan
std::vector<int> resVec{1, 2, 3, 4, 5, 6, 7, 8, 9};
hpx::parallel::exclusive_scan(hpx::parallel::execution::par, // 3
resVec.begin(), resVec.end(), resVec.begin(), 1,
[](int fir, int sec){ return fir * sec; });
hpx::cout << "exclusive_scan: ";
for (auto v: resVec) hpx::cout << v << " ";
hpx::cout << hpx::endl;
std::vector<int> resVec2{1, 2, 3, 4, 5, 6, 7, 8, 9};
hpx::parallel::inclusive_scan(hpx::parallel::execution::par, // 5
resVec2.begin(), resVec2.end(), resVec2.begin(),
[](int fir, int sec){ return fir * sec; }, 1);
hpx::cout << "inclusive_scan: ";
for (auto v: resVec2) hpx::cout << v << " ";
hpx::cout << "\n\n";
// transform_exclusive_scan and transform_inclusive_scan
std::vector<int> resVec3{1, 2, 3, 4, 5, 6, 7, 8, 9};
std::vector<int> resVec4(resVec3.size());
hpx::parallel::transform_exclusive_scan(hpx::parallel::execution::par, // 6
resVec3.begin(), resVec3.end(),
resVec4.begin(), 0,
[](int fir, int sec){ return fir + sec; },
[](int arg){ return arg *= arg; });
hpx::cout << "transform_exclusive_scan: ";
for (auto v: resVec4) hpx::cout << v << " ";
hpx::cout << hpx::endl;
std::vector<std::string> strVec{"Only","for","testing","purpose"}; // 7
std::vector<int> resVec5(strVec.size());
hpx::parallel::transform_inclusive_scan(hpx::parallel::execution::par, // 8
strVec.begin(), strVec.end(),
resVec5.begin(), 0,
[](auto fir, auto sec){ return fir + sec; },
[](auto s){ return s.length(); });
hpx::cout << "transform_inclusive_scan: ";
for (auto v: resVec5) hpx::cout << v << " ";
hpx::cout << "\n\n";
// reduce and transform_reduce
std::vector<std::string> strVec2{"Only","for","testing","purpose"};
std::string res = hpx::parallel::reduce(hpx::parallel::execution::par, // 9
strVec2.begin() + 1, strVec2.end(), strVec2[0],
[](auto fir, auto sec){ return fir + ":" + sec; });
hpx::cout << "reduce: " << res << hpx::endl;
// 11
std::size_t res7 = hpx::parallel::parallel::transform_reduce(hpx::parallel::execution::par,
strVec2.begin(), strVec2.end(),
[](std::string s){ return s.length(); },
0, [](std::size_t a, std::size_t b){ return a + b; });
hpx::cout << "transform_reduce: " << res7 << hpx::endl;
hpx::cout << hpx::endl;
return hpx::finalize();
}
int main(int argc, char* argv[]){
// By default this should run on all available cores
std::vector<std::string> const cfg = {"hpx.os_threads=all"};
// Initialize and run HPX
return hpx::init(argc, argv, cfg);
}
Before I show you the output of the program and explain you the source code, I have to make a general remark. As far as I know, there is no implementation of the parallel STL available. Therefore, I used the HPX implementation that uses the namespace hpx. So, if you replace the namespace hpx with std and write the code in the hpx_main function you know, how the STL algorithm will look like.
In correspondence to Haskell, I use a std::vector of ints (1) and strings (7).
The for_each_n algorithm in (2) maps the first n ints of the vector to it's power of 2.
exclusive_scan (3) and inclusive_scan (5) are quite similar. Both apply a binary operation to its elements. The difference is, that exclusive_scan excludes in each iteration the last element. Here you have the corresponding Haskell expression: scanl (*) 1 ints.
The transform_exclusive_scan (6) is quite challenging to read. Let me try it. Apply in the first step the lambda function [](int arg){ return arg *= arg; } to each element of the range from resVec3.begin() to resVec3.end(). Then apply in the second step the binary operation [](int fir, int sec){ return fir + sec; } to the intermediate vector. That means, sum up all elements by using the 0 as initial element. The result goes to resVec4.begin(). To make the long story short. Here is Haskell: scanl (+) 0 . map(\a -> a * a) $ ints.
The transform_inclusive_scan function in (8) is similar. This function maps each element to its length. Once more in Haskell: scanl1 (+) . map(\a -> length a) $ strings.
Now, the reduce function should be quite simple to read. It puts ":" characters between each element of the input vector. The resulting string should not start with a ":" character. Therefore, the range starts at the second element (strVec2.begin() + 1) and the initial element is the first element of the vector: strVec2[0]. Here is Haskell: foldl1 (\l r -> l ++ ":" ++ r) strings.
If you want to understand the transform_reduce expression in (11), please read my post Parallel Algorithm of the Standard Template Library. I have more to say about the function. For the impatient readers. The concise expression in Haskell: foldl (+) 0 . map (\a -> length a) $ strings.
Studying the output of the program should help you.
Each of the seven new algorithms exist in different flavours. You can invoke them with and without an initial element, with and without specifying the execution policy. You can invoke the function that requires a binary operator such as std::scan and std::parallel::reduce even without a binary operator. In this case, the addition is used as default. In order to execute the algorithm in parallel or in parallel and vectorised, the binary operator has to be associative. That makes a lot of sense because the algorithm can quite easily run on many cores. For the details, read the Wikipedia article on prefix_sum. Here are further details to the new algorithms: extensions for parallelism.
Sorry, that was a long post. But making two posts out of it makes no sense. In the next post, I write about the performance-improved interface of the associative containers (sets and maps) and the unified interface of the STL containers at all.
Thanks a lot to my Patreon Supporter: Eric Pederson.
Go to Leanpub/cpplibrary "What every professional C++ programmer should know about the C++ standard library". Get your e-book. Support my blog.
Hey Rainer, just as a small addition: in HPX we have implemented both, the old and the new parameter sequences for those algorithms where the sequences differ between the Parallelism TS and C++17. We have also adapted the names (which differ slightly as well). We will phase out the old signatures over time. Thanks!
Name (required)
Website
Notify me of follow-up comments
Hunting
Today 2934
All 1580670
Currently are 120 guests and no members online
Kubik-Rubik Joomla! Extensions
Read more...
Thanks Hartmut,
I will change the post.
Extremely useful info particularly the closing part :) I deal with such info much.
I was looking for this particular info for a long time.
Thank you and best of luck.
to my friends. I'm sure they will be benefited from this web site.
I'm having some small security issues with my latest website and I'd like to find something more safe.
Do you have any solutions?
with it!
Ι'll bookmark your blog аnd take the feeds alѕo?
I'm һappy tto find a ⅼot of uѕeful info right
һere ѡithin thhe ρut up, we neeɗ develop morе techniques ᧐n thijs regard, thanks for sharing.
. . . . . | http://www.modernescpp.com/index.php/c-17-new-algorithm-of-the-standard-template-library | CC-MAIN-2019-09 | refinedweb | 1,539 | 59.7 |
When you import photos and video into Lightroom,.
In the Library module, do one of the following:
Click the Import button.
From the main menu, choose File > Import Photos
And Video.
Drag a folder with files or drag individual files into the
Grid view.
Click Select A Source or From in the upper-left corner of
the import window, or use the Source panel on the left side of the
import window to navigate to the files you want to import.
In the top center of the import window, specify how you want
to add the photos to the catalog:
Select the photos that you want to import from the preview
area in the center of the import window. A check mark in the upper-left
corner of the thumbnail indicates that the photo is selected for
import.
To filter photos in the preview, select one
of the following:, Checked State, File Name, or Media Type (image
or video file). upper-right corner of the window, click
To and choose a location for the photos. Specify further options
in the Destination panel:
Specify other options for the imported files using the panels
on the right side of the window. See Specify import options.
Click Import.
Twitter™ and Facebook posts are not covered under the terms of Creative Commons. | http://help.adobe.com/en_US/lightroom/using/WSEBFDC1BF-1D0A-46c4-A453-C29279EB078C.html | CC-MAIN-2014-15 | refinedweb | 218 | 61.77 |
In this article, we will be learning about modules in Python. Let us begin by defining the term module.
A module is a file that consists of constants, variables, functions, and classes. In Python, modules have the extension .py. A module can be built-in or user-defined.
A module provides code reusability. We can import modules into a program (as we will learn in the next section) and use their functions, classes, and variables so that we do not have to write them repeatedly, hence reducing the length of the code.
For example, let’s start by creating a module,
factorial.py.
File: factorial.py
SPEED_1 = 5 SPEED_2 = 10 def fact(number): if number in (0, 1): return 1 else: return number * fact(number - 1)
Here, we have defined constants
SPEED_1 and
SPEED_2 with the values
5 and
10 and defined a function
fact() that takes a number as an input and returns its factorial.
In the next section, we will learn how to import modules and use their variables and functions in our code.
To import a complete module, Python provides us with a keyword
import. In the following example, we will import the
factorial.py module to calculate the factorial of a number.
File: test_factorial.py
import factorial num = 5 factorial_5 = factorial.fact(num) print("Factorial of %d is %d" %(num, factorial_5))
Here, we have accessed
fact() method of the
factorial module using the dot (
.) operator.
Output:
Factorial of 5 is 120
Now let’s say we want to use the
factorial module multiple times. To make our code compact, we can give alias name to the imported module by writing,
import factorial as f
Now, we can use
SPEED_1 and
fact() method as
print("Factorial of %d is %d" %(f.SPEED_1, f.fact(f.SPEED_1))) print("Value of SPEED_2 = %d" %(f.SPEED_2))
Which also gives the same output
Output:
Factorial of 5 is 120 Value of SPEED_2 = 10
Note: If we give alias
f to the module, the module name
factorial will not be recognized anymore.
What if we only want to import the variable
SPEED_1 and the
fact() method? For this, we will only import
SPEED_1 and
fact() as done below.
from factorial import fact, SPEED_1
In this case, we can directly write variable and method names to use them.
print(fact(5)) print(SPEED_1)
And we get the following output
Output:
120 5
We can also import all the functions, variables, and classes defined in the module using an asterisk (
*).
from factorial import *
Note: Let’s say we create a variable, a function, or a class in our code with the same name as defined inside the module. It can lead to the definition duplicacy. Therefore, this method is not recommended.
There can be cases where we would like to import from subdirectories. To achieve that, we can use the absolute path of the module.
Consider the following folder structure
my_project # root folder(level 0) |-- test.py |-- util |-- printer.py |-- source # folder at level 1 |-- code # folder at level 2 |-- factorial.py
If we want to use the
fact() method of the
factorial module, we will have to go all the way up to folder level 2 and import
fact() as shown.
File: test.py
from source.code.factorial import fact print(fact(5))
Output:
120
But what if we have a deep or complex folder structure? In that case, it will be impractical to use absolute paths.
Here,
__init__.py comes into the picture.
We create an
__init__.py file inside a directory to be considered as a module. Also, inside the
__init__.py file, we can write the following code to reduce the path complexity for the above folder structure.
New folder structure
my_project # root folder(level 0) |-- test.py |-- util |-- printer.py |-- source # folder at level 1 |-- __init__.py |-- code # folder at level 2 |-- factorial.py
Add the following code inside the
__init__.py file.
File: __init__.py
from .code.factorial import fact
Now, in our test.py file, we only have to write
from source import fact print(fact(5))
Output:
120
Until now, we have learned about modules and how to import them. But how does the python interpreter find the modules we import?
First, the python interpreter looks for a built-in module. If the required module is not found, it looks into the directories mentioned in
sys.path.
The following code will print a list of values inside
sys.path.
from sys import path print(path)
Output
[ 'C:\\Users\\user\\PycharmProjects\\my_project', 'C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python37-32\\python37.zip', 'C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python37-32\\DLLs', 'C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python37-32\\lib', 'C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python37-32','C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python37-32\\lib\\site-packages' ] Process finished with exit code 0
The search order is,
PYTHONPATH, which is an environment variable that contains a list of directories
A module in a python script is imported only once by the interpreter for efficiency.
Consider the following module,
printer.py
number = 5 print(number)
Now if we import this module in our script file as
>>> import printer 5
The value of
number is printed.
Now let’s modify the
printer.py module by changing the value of
number to
6. But if we import the module again, we don’t see any output.
>>> import printer 5 >>> import printer
Why is it so?
If we import a module and make changes to it, those changes will not be reflected in the current script. For this, we need to either restart the interpreter or reload the module by using the
reload() function.
The
reload() function is defined inside the
importlib module which we need to import.
>>> import printer 5 >>> import printer >>> from importlib import reload >>> reload(printer) 6
Python provides us with a function
dir() which can be used to find the names that are defined in a module. It returns a sorted list with the names of the functions, variables, and classes of the module. For example
import factorial print(dir(factorial))
We get,
Output:
[ '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'fact', 'SPEED_1', 'SPEED_2' ]
Here, apart from the names defined inside the module
factorial, we get names with underscores. These are the default built-in python attributes provided to the module.
We already know that a module in Python is simply a Python script file with extension
.py. So, like any script file, the module can also be executed.
We can execute a module (
printer.py in our case) using
cmd on Windows or
terminal on Linux/Mac and run the following command.
python printer.py
And we get the output,
6
You may have noticed that we got the same output when we imported the
printer.py module in the
Reloading a module section. So, how do we differentiate between when we run the module as a script and when the file is imported as a module?
If the module is run as a script file, the python interpreter sets the special variable
__name__to the value
__main__. If this file is imported as a module,
__name__will be set to the name of the module.
So, we add the following code where we check if the value of
__name__ is equal to
__main__. If true, it will mean that the module is run as a script. Otherwise, it is imported as a module.
File: printer.py
SPEED_1 = 5 SPEED_2 = 10 def fact(number): if number in (0,1): return 1 else: return number * fact(number - 1) if __name__ == '__main__': print(fact(5))
As imported
>> import printer >> printer.fact(5) 120
As a script file
C:\Users\user\PycharmProjects\my_project> printer.py 5 120
In this section, we learned
Help us improve this content by editing this page on GitHub | https://tutswiki.com/python/modules/ | CC-MAIN-2021-17 | refinedweb | 1,310 | 65.12 |
Now it's time to create a program that uses everything we've learned so far. This program will generate a list of house prices and find their average (mean, mode, and median) and range. The following sections will describe each function in turn.
getRange() iterates through a set of numbers passed to it and calculates the minimum and maximum values. It then returns these values and their range in a tuple.
def getRange (nums): min = 300000000 max = -300000000 for item in nums: if (item > max): max = item if (item < min): min = item return (min, max, max-min)
First getRange() declares two variables, min and max. The min variable refers to a very large number, so the first item extracted from the list is less ...
No credit card required | https://www.safaribooksonline.com/library/view/python-programming-with/0201616165/0201616165_ch04lev1sec4.html | CC-MAIN-2018-34 | refinedweb | 129 | 80.82 |
# Deployment: Dask
How can you run a Prefect flow in a distributed Dask cluster?
# The Dask Executor
Prefect exposes a suite of "Executors" that represent the logic for how and where a task should run (e.g., should it run in a subprocess? on a different computer?).
In our case, we want to use Prefect's
DaskExecutor to submit task runs to a known Dask cluster. This provides a few key benefits out of the box:
- Dask manages all "intra-flow scheduling" for a single run, such as determining when upstream tasks are complete before attempting to run a downstream task. This enables users to deploy flows with many bite-sized tasks in a way that doesn't overload any central scheduler.
- Dask handles many resource decisions such as what worker to submit a job to
- Dask handles worker/scheduler communication, like serializing data between workers
# An Example Flow
If you'd like to kick the tires on Dask locally, you can install Dask distributed and spin up a local "cluster" with two Dask workers via the following CLI commands:
> dask-scheduler # Scheduler at: tcp://10.0.0.41:8786 # in new terminal windows > dask-worker tcp://10.0.0.41:8786 > dask-worker tcp://10.0.0.41:8786
Once you have a cluster up and running, let's deploy a very basic flow that runs on this cluster. This example makes the classic "diamond shape" of a flow, where many tasks run in parallel and are bottlenecked by a final task that depends on their upstream states. This type of flow benefits greatly from the parallelism supported by an executor like Dask.
from prefect import task, Flow import datetime import random from time import sleep @task def inc(x): sleep(random.random() / 10) return x + 1 @task def dec(x): sleep(random.random() / 10) return x - 1 @task def add(x, y): sleep(random.random() / 10) return x + y @task(name="sum") def list_sum(arr): return sum(arr) with Flow("dask-example") as flow: incs = inc.map(x=range(100)) decs = dec.map(x=range(100)) adds = add.map(x=incs, y=decs) total = list_sum(adds)
So far, all we have done is define a flow that contains all the necessary information for how to run these tasks -- none of our custom task code has been executed yet.
To have this flow run on our Dask cluster, all we need to do is provide an appropriately configured
DaskExecutor to the
flow.run() method:
from prefect.engine.executors import DaskExecutor executor = DaskExecutor(address="tcp://10.0.0.41:8786") flow.run(executor=executor)
If you happen to have
bokeh installed, you can visit the Dask Web UI and see your tasks being processed when the flow run begins!
Advanced Dask Configuration
To interface with a secure, production-hardened Dask cluster via Dask Gateway you may need to provide TLS details to the
DaskExecutor. These details can be found on the GatewayCluster object on creation:
from dask_gateway import Gateway from prefect.engine.executors import DaskExecutor # ...flow definition... gateway = Gateway() cluster = gateway.new_cluster() executor = DaskExecutor( address=cluster.scheduler_address, client_kwargs={"security": cluster.security} ) flow.run(executor=executor)
Alternatively, TLS details can be provided manually:
from dask_gateway.client import GatewaySecurity from prefect.engine.executors import DaskExecutor # ...flow definition... security = GatewaySecurity(tls_cert="path-to-cert", tls_key="path-to-key") executor = DaskExecutor( address="a-scheduler-address", client_kwargs={"security": security} ) flow.run(executor=executor)
# Next Steps
Let's take this one step further: let's attach a schedule to this flow, and package it up so that we can point it to any Dask cluster we choose, without editing the code which defines the flow. To do this, we will first add a main method to our script above so that it can be executed via CLI:
def main(): from prefect.schedules import IntervalSchedule every_minute = IntervalSchedule(start_date=datetime.datetime.utcnow(), interval=datetime.timedelta(minutes=1)) flow.schedule = every_minute flow.run() # runs this flow on its schedule if __name__ == "__main__": main()
Notice that we didn't specify an executor in our call to
flow.run(). This is because the default executor can be set via environment variable (for more information on how this works, see Prefect's documentation). Supposing we save this in a file called
dask_flow.py, we can now specify the executor and the Dask scheduler address as follows:
> export export python dask_flow.py
This flow will now run every minute on your local Dask cluster until you kill this process.
# Further steps
Dask is a fully featured tool all on its own, including many different ways to deploy it. For the latest in how to deploy Dask, check out the Dask setup docs. There is also this great blog post on the Dask blog describing the current state of all the ways to deploy distributed Dask clusters.
Often at some point users become interested in optimizing their Dask cluster for their workload. Usually this comes down to a tweaking the resource utilization of your dask cluster through settings such as
- how many workers
- the machine type / size the workers are on
- how many threads each worker uses to schedule work
There are also some best practices in terms of splitting up your work to make the dask scheduler as efficient as possible, particularly when it comes to data transfer. Another common gotcha when deploying to a distributed Dask cluster is making sure dependencies match across all of your Dask workers.
For more details on what to look out for while optimizing these aspects of your Dask cluster and workload, check out this blog co-written by Prefect and Saturn Cloud. | https://docs.prefect.io/core/advanced_tutorials/dask-cluster.html | CC-MAIN-2020-45 | refinedweb | 936 | 54.83 |
“Simple.”
Zero-Config Object Persistence with Simple Persistence for Java
Submitted by BlueVoodoo 2007-02-19 Java 3 Comments
I didn’t like the article, but the library it describes looks very interesting. Make sure to have a look at the tutorial on the project’s website, it was much more useful to me than that article with its overcomplicated example:
– Very Nice looking…
But, why these obscure dependencies.
I’d be very nervous to implement this code in production.
Are these open source library api’s set in stone or will they be redesigned in the next release? Why no use DERBY which is already in the Java namespace…
This is part of the problem of “open sourcing” java.
This code could stop working in 3 to 12 months.
Very good, I am impressed. That’s real simplicity and abstraction. Up until now, I have been avoiding using persistence layers in Java due to their complexities (I developed my own, which is a light API over JBDC with limited functionality).
This product, coupled with ThinWire (), can make programmers really productive…
Edited 2007-02-20 11:58 | https://www.osnews.com/story/17303/zero-config-object-persistence-with-simple-persistence-for-java/ | CC-MAIN-2019-35 | refinedweb | 187 | 62.07 |
In this next example, we will implement a quicksort-like algorithm for sorting lists and compare an F# implementation to a C# implementation.
Here is the logic for a simplified quicksort-like algorithm:
If the list is empty, there is nothing to do. Otherwise: 1. Take the first element of the list 2. Find all elements in the rest of the list that are less than the first element, and sort them. 3. Find all elements in the rest of the list that are >= than the first element, and sort them 4. Combine the three parts together to get the final result: (sorted smaller elements + firstElement + sorted larger elements)
Note that this is a simplified algorithm and is not optimized (and it does not sort in place, like a true quicksort); we want to focus on clarity for now.
Here is the code in F#:
let rec quicksort list = match list with | [] -> // If the list is empty [] // return an empty list | firstElem::otherElements -> // If the list is not empty let smallerElements = // extract the smaller ones otherElements |> List.filter (fun e -> e < firstElem) |> quicksort // and sort them let largerElements = // extract the large ones otherElements |> List.filter (fun e -> e >= firstElem) |> quicksort // and sort them // Combine the 3 parts into a new list and return it List.concat [smallerElements; [firstElem]; largerElements] //test printfn "%A" (quicksort [1;5;23;18;9;1;3])
Again note that this is not an optimized implementation, but is designed to mirror the algorithm closely.
Let’s go through this code:
reckeyword in “
let rec quicksort list =”.
match..withis sort of like a switch/case statement. Each branch to test is signaled with a vertical bar, like so:
match x with | caseA -> something | caseB -> somethingElse
match” with
[]matches an empty list, and returns an empty list.
match” with
firstElem::otherElementsdoes two things.
firstElem”, and one for the rest of the list, called “
otherElements”. In C# terms, this is like having a “switch” statement that not only branches, but does variable declaration and assignment at the same time.
->is sort of like a lambda (
=>) in C#. The equivalent C# lambda would look something like
(firstElem, otherElements) => do something.
smallerElements” section takes the rest of the list, filters it against the first element using an inline lambda expression with the “
<” operator and then pipes the result into the quicksort function recursively.
largerElements” line does the same thing, except using the “
>=” operator
List.concat”. For this to work, the first element needs to be put into a list, which is what the square brackets are for.
[]” branch the return value is the empty list, and in the main branch, it is the newly constructed list.
For comparison here is an old-style C# implementation (without using LINQ).
public class QuickSortHelper { public static List<T> QuickSort<T>(List<T> values) where T : IComparable { if (values.Count == 0) { return new List<T>(); } //get the first element T firstElement = values[0]; //get the smaller and larger elements var smallerElements = new List<T>(); var largerElements = new List<T>(); for (int i = 1; i < values.Count; i++) // i starts at 1 { // not 0! var elem = values[i]; if (elem.CompareTo(firstElement) < 0) { smallerElements.Add(elem); } else { largerElements.Add(elem); } } //return the result var result = new List<T>(); result.AddRange(QuickSort(smallerElements.ToList())); result.Add(firstElement); result.AddRange(QuickSort(largerElements.ToList())); return result; } }
Comparing the two sets of code, again we can see that the F# code is much more compact, with less noise and no need for type declarations.
Furthermore, the F# code reads almost exactly like the actual algorithm, unlike the C# code. This is another key advantage of F# – the code is generally more declarative (“what to do”) and less imperative (“how to do it”) than C#, and is therefore much more self-documenting.
Here’s a more modern “functional-style” implementation using LINQ and an extension method:
public static class QuickSortExtension { /// <summary> /// Implement as an extension method for IEnumerable /// </summary> public static IEnumerable<T> QuickSort<T>( this IEnumerable<T> values) where T : IComparable { if (values == null || !values.Any()) { return new List<T>(); } //split the list into the first element and the rest var firstElement = values.First(); var rest = values.Skip(1); //get the smaller and larger elements var smallerElements = rest .Where(i => i.CompareTo(firstElement) < 0) .QuickSort(); var largerElements = rest .Where(i => i.CompareTo(firstElement) >= 0) .QuickSort(); //return the result return smallerElements .Concat(new List<T>{firstElement}) .Concat(largerElements); } }
This is much cleaner, and reads almost the same as the F# version. But unfortunately there is no way of avoiding the extra noise in the function signature.
Finally, a beneficial side-effect of this compactness is that F# code often works the first time, while the C# code may require more debugging.
Indeed, when coding these samples, the old-style C# code was incorrect initially, and required some debugging to get it right. Particularly tricky areas were the
for loop (starting at 1 not zero) and the
CompareTo comparison (which I got the wrong way round), and it would also be very easy to accidentally modify the inbound list. The functional style in the second C# example is not only cleaner but was easier to code correctly.
But even the functional C# version has drawbacks compared to the F# version. For example, because F# uses pattern matching, it is not possible to branch to the “non-empty list” case with an empty list. On the other hand, in the C# code, if we forgot the test:
if (values == null || !values.Any()) ...
then the extraction of the first element:
var firstElement = values.First();
would fail with an exception. The compiler cannot enforce this for you. In your own code, how often have you used
FirstOrDefault rather than
First because you are writing “defensive” code. Here is an example of a code pattern that is very common in C# but is rare in F#:
var item = values.FirstOrDefault(); // instead of .First() if (item != null) { // do something if item is valid }
The one-step “pattern match and branch” in F# allows you to avoid this in many cases.
The example implementation in F# above is actually pretty verbose by F# standards!
For fun, here is what a more typically concise version would look like:
let rec quicksort2 = function | [] -> [] | first::rest -> let smaller,larger = List.partition ((>=) first) rest List.concat [quicksort2 smaller; [first]; quicksort2 larger] // test code printfn "%A" (quicksort2 [1;5;23;18;9;1;3])
Not bad for 4 lines of code, and when you get used to the syntax, still quite readable. | https://fsharpforfunandprofit.com/posts/fvsc-quicksort/ | CC-MAIN-2018-13 | refinedweb | 1,088 | 62.98 |
Write a C program to find the sum of ‘N’ natural numbers. Natural numbers are the whole numbers except zero, usually we used for counting.
Example:1, 2, 3, 4,———– Read more about C Programming Language .
Like to get updates right inside your feed reader? Grab our feed!
Example:1, 2, 3, 4,———– Read more about C Programming Language .
/***********************************************************
* You can use all the programs on
* for personal and learning purposes. For permissions to use the
* programs for commercial purposes,
* contact [email protected]
* To find more C programs, do visit
* and browse!
*
* Happy Coding
***********************************************************/
#include <stdio.h>
#include <conio.h>
void main()
{
int i, N, sum = 0;
clrscr();
printf("Enter an integer numbern");
scanf ("%d", &N);
for (i=1; i <= N; i++)
{
sum = sum + i;
}
printf ("Sum of first %d natural numbers = %dn", N, sum);
}
Read more Similar C Programs
Searching in C
Number Theory) | https://c-program-example.com/2011/09/c-program-to-find-the-sum-of-n-natural-numbers.html | CC-MAIN-2020-40 | refinedweb | 145 | 67.15 |
Elastic Agent uses data streams to store time series data across multiple indices while giving you a single named resource for requests. Data streams are well-suited for logs, metrics, traces, and other continuously generated data. They offer a host of benefits over other indexing strategies:
- Reduced number of fields per index: Indices only need to store a specific subset of your data–meaning no more indices with hundreds of thousands of fields. This leads to better space efficiency and faster queries. As an added bonus, only relevant fields are shown in Discover.
- More granular data control: For example, filesystem, load, cpu, network, and process metrics are sent to different indices–each potentially with its own rollover, retention, and security permissions.
- Flexible: Use the custom namespace component to divide and organize data in a way that makes sense to your use case or company.
- Fewer ingest permissions required: Data ingestion only requires permissions to append data.
Data stream naming schemeedit
Elastic Agent uses the Elastic data stream naming scheme to name data streams. The naming scheme splits data into different streams based on the following components:
type
- A generic
typedescribing the data, such as
logs,
metrics,
traces, or
synthetics.
dataset
- The
datasetis defined by the integration and describes the ingested data and its structure for each index. For example, you might have a dataset for process metrics with a field describing whether the process is running or not, and another dataset for disk I/O metrics with a field describing the number of bytes read.
namespace
- A user-configurable arbitrary grouping, such as an environment (
dev,
prod, or
qa), a team, or a strategic business unit. A
namespacecan be up to 100 bytes in length (multibyte characters will count toward this limit faster). Using a namespace makes it easier to search the data from a given source by using index patterns, or to give users permissions to data by assigning an index pattern to user roles.
The naming scheme separates each components with a
- character:
<type>-<dataset>-<namespace>
For example, if you’ve set up the Nginx integration with a namespace of
prod,
Elastic Agent uses the
logs type,
nginx.access dataset, and
prod namespace to store data in the following data stream:
logs-nginx.access-prod
Alternatively, if you use the APM integration with a namespace of
dev,
Elastic Agent stores data in the following data stream:
traces-apm-dev
All data streams, and the pre-built dashboards that they ship with, are viewable on the Fleet Data Streams page:
If you’re familiar with the concept of indices, you can think of each data stream as a separate index in Elasticsearch. Under the hood though, things are a bit more complex. All of the juicy details are available in Elasticsearch Data streams.
Index patternsedit
When searching your data in Kibana, you can use an index pattern to search across all or some of your data streams.
Index templatesedit
An index template is a way to tell Elasticsearch how to configure an index when it is created. For data streams, the index template configures the stream’s backing indices as they are created.
Elasticsearch provides the following built-in, ECS based templates:
logs-*-*,
metrics-*-*, and
synthetics-*-*.
Elastic Agent integrations can also provide dataset-specific index templates, like
logs-nginx.access-*.
These templates are loaded when the integration is installed, and are used to configure the integration’s data streams.
Configure an index lifecycle management (ILM) policyedit
Use the index lifecycle management (ILM) feature in Elasticsearch to manage your Elastic Agent data stream indices as they age. For example, create a new index after a certain period of time, or delete stale indices to enforce data retention standards.
Elastic Agent uses ILM policies built-in to Elasticsearch to manage backing indices for its data streams. See the Customize built-in ILM policies tutorial to learn how to customize these policies based on your performance, resilience, and retention requirements.
To instead create a new ILM policy, in Kibana, go to Stack Management > Index Lifecycle Policies. Click Create policy. Define data tiers for your data, and any associated actions, like a rollover, freeze, or shrink. See configure a lifecycle policy for more information. | https://www.elastic.co/guide/en/fleet/7.15/data-streams.html | CC-MAIN-2021-49 | refinedweb | 701 | 51.28 |
Summary: Guest Blogger Trevor Sullivan shows how to monitor and to respond to Windows Power events using Windows PowerShell.
Microsoft Scripting Guy Ed Wilson here. Today’s guest blogger is Trevor Sullivan, and he has a fascinating article about responding to power management events. First, a little bit about Trevor.
Trevor Sullivan things.
Take it away, Trevor!
Introduction
Oftentimes, people want to be able to respond to events automatically on their computers: “When <X> happens, I want <Y> to happen in response.” An example of this might be: “If SomeProcess.exe exceeds 50 percent processor utilization for 60 seconds, kill it.” Usually this would require some custom systems monitoring software, but what if I told you that your computer had this functionality built into it already? That’s right, little known to most people is the WMI background service, which provides a robust eventing and event response model.
Although power management hasn’t always been a highlight of the Microsoft Windows operating system (OS), it’s certainly come a long way in Windows 7 and is now quite robust. Sleeping and hibernating in Windows 7 are both quite fast, and resuming from both states is likewise very quick. But what if you want to do something when your computer wakes up? Though this may not be a terribly common scenario, sometimes people have the need to subscribe to this event and perform an action in response to it.
In the remainder of this article, we will take a look at how to subscribe to system-level power management events, and how to respond to them. We will be working with the following technologies:

- WMI power management events (the Win32_PowerManagementEvent class)
- Windows PowerShell
- The PowerEvents module for Windows PowerShell, which wraps WMI permanent event registrations
WMI Power Management events
Microsoft has built a robust power management provider into Windows 7 and, thankfully for us, has exposed its functionality through the WMI service. WMI provides a standards-based interface to the operating system and to the applications that extend it. Although WMI suffered from reliability and performance problems in the past (primarily on Windows XP), modern hardware combined with the Windows 7 operating system is quite reliable; Microsoft has resolved enough WMI bugs that it is now a very dependable service.
Power Management WMI Provider
All WMI providers (extensions to WMI) are registered in a particular WMI namespace as instances of the __Win32Provider class. We can verify that the Windows Power Management provider is registered by running this WMI query from Windows PowerShell:
@(Get-WmiObject -Namespace root\cimv2 -Query "select * from __Win32Provider where Name = 'MS_Power_Management_Event_Provider'").Count
If this query returns a result of “1,” we know that the provider is registered.
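As a related sanity check, you can list the power-related classes in the same namespace; Win32_PowerManagementEvent should appear in the output. (The wildcard pattern below is just one convenient choice; widen it if you want to see everything the provider contributes.)

$PowerClasses = Get-WmiObject -Namespace root\cimv2 -List "Win32_Power*"
$PowerClasses | Select-Object -ExpandProperty Name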
Win32_PowerManagementEvent class
The power management provider exposes a single WMI class called Win32_PowerManagementEvent, which is an extrinsic event class. Extrinsic event classes differ from intrinsic event classes in that their events are raised by an external provider (in this case, the power management provider), rather than representing a change to a WMI object, such as an instance being created, modified, or deleted.
The Win32_PowerManagementEvent class only has one property that we really care about, which is the EventType property. The possible values for this property are:
Value   Meaning
-----   -------
4       Entering suspend
7       Resume from suspend
10      Power status change
11      OEM event
18      Resume automatic
As you might gather, we are interested in events that have a value of "7," which represents a system resume.
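If you want to see the event fire before building anything permanent, you can create a temporary, session-scoped subscription with the built-in Register-WmiEvent cmdlet. This is just a quick sketch (the log path and source identifier are arbitrary choices I made for illustration); the subscription disappears when you close the PowerShell session.

# Temporary subscription: append a line to a log file on each resume
Register-WmiEvent -Query "select * from Win32_PowerManagementEvent where EventType = 7" `
    -SourceIdentifier SystemResumeTest `
    -Action { Add-Content -Path "$env:TEMP\ResumeTest.log" -Value "Resumed at $(Get-Date)" }

# Sleep and wake the computer, then inspect the log file.
# When you are finished testing, remove the subscription:
Unregister-Event -SourceIdentifier SystemResumeTest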
Example Scenario
In this example scenario, we are going to take a look at how to restart a Windows service when the system resumes. Specifically, I recently noticed that the PS3 Media Server software has an issue with power management in that it does not listen for connections upon system resume from Standby/Hibernate. This has reportedly been a problem with Windows 7 Ultimate Edition 64-bit.
To work around this issue, we’ll look at how to restart the PS3 Media Server service each time the computer resumes from a low power state.
Using PowerEvents
The use of the PowerEvents module follows a three-step process:

1. Create a WMI event filter that describes the events we want to catch.
2. Create a WMI event consumer that describes the action to take when a matching event fires.
3. Bind the filter to the consumer, which activates the permanent event registration.

We will cover these three steps individually below.
WQL event filter
First, we need to build a WMI event filter using the WMI Query Language (WQL). WQL is similar to Structured Query Language (SQL), but is much more limited in scope: WQL does not support INSERT, UPDATE, or DELETE statements; it only supports SELECT queries. We’re going to follow the event query template:
select * from <WmiClass> WITHIN <PollInterval> where <Criteria>
In this case, we’re going to use the following values for our event query:

- WmiClass: Win32_PowerManagementEvent
- PollInterval: 5 (seconds)
- Criteria: EventType = 7
Our resulting query will look like this:
select * from Win32_PowerManagementEvent WITHIN 5 where EventType = 7
The command we’ll use to create our WMI event filter using the PowerEvents module for Windows PowerShell looks like this:
$Filter = New-WmiEventFilter -Name SystemResumed -Query "select * from Win32_PowerManagementEvent where EventType = 7"
We store the filter object in a Windows PowerShell variable for later use in the event binding.
More information about WMI event queries can be found in the PowerEvents documentation. The PDF is located in the Documentation folder of the PowerEvents download. This document includes information about how to test your WMI event query using the wbemtest.exe utility, before creating the permanent event registration to reduce troubleshooting hassle.
Event consumer
Now that we have created (and tested, right?) the event filter, we need to create an event consumer. In this example, we’ll use a Windows PowerShell script to stop and start the PS3 Media Server service (short name: PS3 Media Server). The script itself contains this code:
$ServiceName = $args[0]
Add-Content -Path 'c:\Restart Service.log' -Value "Service name is: $ServiceName"
$Service = @(Get-WmiObject -Namespace root\cimv2 -Class Win32_Service -Filter "Name = '$ServiceName'")
Add-Content -Path 'C:\Restart Service.log' -Value "Found $($Service.Count) instances of '$ServiceName' service"
$Result = $Service[0].StopService()
Add-Content -Path 'c:\Restart Service.log' -Value "Stopped service with result: $($Result.ReturnValue)"
Start-Sleep 4
$Result = $Service[0].StartService()
Add-Content -Path 'c:\Restart Service.log' -Value "Started service with result: $($Result.ReturnValue)"
Add-Content -Path 'c:\Restart Service.log' -Value "Exiting restart service script"
Save this code in a file called c:\windows\temp\Restart Windows Service.ps1.
To create the event consumer object in WMI, we’ll use the following command:
$Consumer = New-WmiEventConsumer -Verbose -Name SystemResumedRestartService -ConsumerType CommandLine -CommandLineTemplate "powershell.exe -command `". '$($env:WinDir)\temp\Restart Windows Service.ps1' 'PS3 Media Server'`""
This command creates a command-line consumer — that is to say, we want to call a command-line utility in response to the event that occurs. We give it a friendly name so that we know what it runs in response to, and what it does in response to the event: SystemResumedRestartService. Then we use the CommandLineTemplate parameter to specify the command line we want to execute in response to the event. In this case, we’re calling Windows PowerShell and passing it our script file via the -command switch along with an argument to the script file. We use script arguments to make our script dynamic. All we have to do to change the service that gets restarted is change the parameter that we’re passing to it. We don’t have to touch the script itself at all.
Important Make sure you have configured your Windows PowerShell execution policy to allow execution of script files; otherwise, the event consumer will fail. Run Windows PowerShell with your administrative token and use this command: Set-ExecutionPolicy Unrestricted.
WMI event binding
Finally, now that we have created our event filter and event consumer, all we have to do to initiate the flow of events is bind them together. We’ve got the filter and consumer stored in variables called $Filter and $Consumer, so all we have to do is call this command:
New-WmiFilterToConsumerBinding -Filter $Filter -Consumer $Consumer
Testing
And that’s it! We’re done. Now that all the WMI objects have been created, all we have to do is suspend and resume our workstation to test the process. After the system is restarted, we should see a c:\Restart Service.log file created. Check this log to ensure that the service you specified in the event consumer command-line was properly stopped and started.
Conclusion
This article has demonstrated the use of the PowerEvents module for Windows PowerShell to create an event listener (filter)/responder (consumer) for wake-from-low-power-state events. Although this particular example restarts a Windows service in response to such an event, you can use your creativity to come up with other tasks you might need to fire off at the same occurrence.
Note For more information about working with permanent and temporary WMI events, see this collection of Hey, Scripting Guy! Blog posts. This collection includes a post about using VBScript to create permanent WMI events. This post is important because it discusses the basics of permanent WMI events. Next, I talk about using Windows PowerShell to monitor and to respond to events on the server. This post continues the discussion about permanent WMI events. This is followed by the first of two articles from Trevor that talk about his Windows PowerShell module to work with WMI permanent events. The second Trevor article in the series talks about using the Windows PowerShell WMI event module to quickly monitor events.
Thanks Trevor for an interesting article, and for writing your Windows PowerShell module for working with WMI Permanent Trevor,
thanks for this introduction into permanent WMI events and your PowerEvents module which makes working with them easier!
One question: Your module on CodePlex is still a 10 months old alpha 0.2 release as it is stated on the download page. Will you develop it further on or is it more or less the final version?
I'm just asking because I might use it in a productive environment ...
Thanks, Klaus
Hi Klaus,
I don't currently have any plans on making major changes to the module. I hope you enjoy using it!
Cheers,
Trevor Sullivan
Can you provide an example on how I could monitor Active Directory events?
API correction. As of PowerEvents 0.2 Alpha, the New-WmiFilter API accepts the query string in parameter name -WQLQuery not -Query as documented above. The correction to above example is:
$Filter = New-WmiEventFilter -Name SystemResumed -WQLQuery "select * from Win32_PowerManagementEvent where EventType = 7" | http://blogs.technet.com/b/heyscriptingguy/archive/2011/08/16/monitor-and-respond-to-windows-power-events-with-powershell.aspx | CC-MAIN-2015-11 | refinedweb | 1,706 | 53.41 |
OK, I've now hacked together a proposal for a general SAX ParserFilter API, with implementations of two filters: 'keep character data together' and namespaces. (The latter is just a rough sketch riddled with 'FIXME' comments.) The whole thing is just a proposal, and consists of readable source with two simple demos with sample documents. You can download it as a 5k zip file from: <URL:> Comments, anyone? Is this the way to do the SAX side of this? And, Geir Ove, what do you think? Could xmlarch be fitted into this as a ParserFilter? (Didn't have time to look at it.) --Lars M. | https://mail.python.org/pipermail/xml-sig/1998-November/000494.html | CC-MAIN-2014-10 | refinedweb | 105 | 75.3 |
I'm looking forward to the next lecture from STL, anyone know when it might be.
Charles do you know?
I'm looking forward to the next lecture from STL, anyone know when it might be.
Happy Christmas to you Stephen,
Bring on more C++ videos.
Keep up the good work, an excellent Christmas present from C9 team.
Thanks again
Hey STL,
Any Idea when User Defined Literals will be implemented in VC2012, i'm looking forward to trying something like,
int operator "" _MB (unsigned long long) {return 1024;} int main(void) { int mySize = 34_MB; printf("%d",mySize); return 0; }
Tom
Hey STL,
I managed to write the Linked List using unique_ptrs finally, it was me who emailed you!
Took me a fair while though.
Here's my code.
#include <memory> #include <utility> using namespace std; template <typename T> struct Node{ T data; unique_ptr<Node<T>> next; public: Node(T val):data(val),next(nullptr){} Node(T val,unique_ptr<Node>&& next){ this->next = move(next); } Node(T val,unique_ptr<Node>& next){ this->data = val; this->next.swap(new Node<T>(next->data)); } void insertAfter(T& data){ next = unique_ptr<Node<T>>(new Node<T>(data,next)); } }; template <typename T> class SList{ private: unique_ptr<Node<T>> head; int size; public: SList(){ size = 0; head = nullptr; } void insertFront(T val){ head = unique_ptr<Node<T>>(new Node<T>(val,head)); size++; } };
and the main program
#include <iostream> #include "Node.h" using namespace std; int main(){ SList<int> myList; int f = 3; int s = 4; myList.insertFront(f); myList.insertFront(s); return 0; }
Keep up the good work
Tom
I know, i saw them
I meant an Advanced lecture on Linked Lists and Maps.
How they work etc, but like I said Stephen, it doesn't matter anymore :->
Hi,
One topic I'd like to see is see an Advanced Series on Data Structures.
For Example, Linked Lists and Maps.
[edit] Forget that
I can look up data structures on you tube
As always a great series
Keep up the good work
Tom
Great
another C++ video from Stephen.
Template Specialization,
ive seen a factorial specialization like that.
template <int N> struct Factorial { enum { value = N * Factorial<N - 1>::value }; }; template <> struct Factorial<0> { enum { value = 1 }; }; // Factorial<4>::value == 24 // Factorial<0>::value == 1 const int x = Factorial<4>::value; // == 24 const int y = Factorial<0>::value; // == 1
Cant wait for part 5, and a meta programming tutorial will be fantastic]));
Thanks in advance for any assistance
Tom
Great Video again!
Charles, any idea when the next video will be uploaded?
Wow, i must be honoured to get a reply from the man himself.
Thanks for the heads up Bjarne, C++ is really moving up.
So I can just pass a object by value by using rvalue references.
That is soo cool.
Tom | http://channel9.msdn.com/Niners/Tominator2005/Comments | CC-MAIN-2014-41 | refinedweb | 469 | 61.26 |
In this lesson, you’re going to create a template for your Django app so you won’t have to paste all your HTML directly into your views. Django has already provided you with the import statement you’re going to need for this:
from django.shortcuts import render
Now that you have
render(), use it in your function with the template name
'projects/index.html':
# Create your views here. def project_list(request): return render(request, 'projects/index.html')
Gascowin on Oct. 19, 2019
May i ask; why is it that in views.py you have
instead of
? My understanding here is that we are passing in the path to the file index.html and since views.py and the templates folder are in the same directory we would need the path ‘templates/projects/…’ instead of ‘projects/…’ Thanks in advance. | https://realpython.com/lessons/create-template/ | CC-MAIN-2021-17 | refinedweb | 140 | 76.72 |
# WAL in PostgreSQL: 2. Write-Ahead Log
[Last time](https://habr.com/ru/company/postgrespro/blog/491730/) we got acquainted with the structure of an important component of the shared memory — the buffer cache. A risk of losing information from RAM is the main reason why we need techniques to recover data after failure. Now we will discuss these techniques.
The log
=======
Sadly, there's no such thing as miracles: to survive the loss of information in RAM, everything needed must be duly saved to disk (or other nonvolatile media).
Therefore, the following was done. Along with changing data, the *log* of these changes is maintained. When we change something on a page in the buffer cache, we create a record of this change in the log. The record contains the minimum information sufficient to redo the change if the need arises.
For this to work, the log record must obligatory get to disk *before* the changed page gets there. And this explains the name: *write-ahead log (WAL)*.
In case of failure, the data on disk appear to be inconsistent: some pages were written earlier, and others later. But WAL remains, which we can read and redo the operations that were performed before the failure but their result was late to reach the disk.
> Why not force writing to disk the data pages themselves, why duplicate the work instead? It appears to be more efficient.
>
>
>
> First, WAL is a sequential stream of append-only data. Even HDD disks do the job of sequential writing fine. However, the data themselves are written in a random fashion since pages are spread across the disk more or less in disorder.
>
>
>
> Second, a WAL record can be way smaller than the page.
>
>
>
> Third, when writing to disk we do not need to take care of maintaining the consistency of data on disk at every point in time (this requirement makes life really difficult).
>
>
>
> And fourth, as we will see later, WAL (once it is available) can be used not only for recovery, but also for backup and replication.
>
>
All the operations must be WAL-logged that can result in inconsistent data on disk in case of failure. Specifically, the following operations are WAL-logged:
* Changes to pages in the buffer cache (mostly table and index pages) — since it takes some time for the page changed to get to disk.
* Transactions' commits and aborts — since a change of the status is done in XACT buffers and it also takes some time for the change to get to disk.
* File operations (creation and deletion of files and directories, such as creation of files during creation of a table) — since these operations must be synchronous with the changes to data.
The following is not WAL-logged:
* Operations with unlogged tables — their name is self-explanatory.
* Operations with temporary tables — logging makes no sense since the lifetime of such tables does not exceed the lifetime of the session that created them.
Before PostgreSQL 10, [hash indexes](https://habr.com/ru/company/postgrespro/blog/442776/) were not WAL-logged (they served only to associate hash functions with different data types), but this has been corrected.
Logical structure
=================

We can logically envisage WAL as a sequence of records of different lengths. Each record contains *data* on a certain operation, which are prefixed by a standard *header*. In the header, among the rest, the following is specified:
* The ID of the transaction that the record relates to.
* The resource manager — the system component responsible for the record.
* The checksum (CRC) — permits to detect data corruption.
* The length of the record and link to the preceding record.
As for the data, they can have different formats and meaning. For example: they can be represented by a page fragment that needs to be written on top of the page contents at a certain offset. The resource manager specified «understands» how to interpret the data in its record. There are separate managers for tables, each type of indexes, transaction statuses and so on. You can get the full list of them using the command
```
pg_waldump -r list
```
Physical structure
==================
WAL is stored on disk as files in the `$PGDATA/pg_wal` directory. By default, each file is 16 MB. You can increase this size to avoid having many files in one catalog. Before PostgreSQL 11, you could do this only when compiling source codes, but now you can specify the size when initializing the cluster (use the `--wal-segsize` option).
WAL records get into the currently used file, and once it is over, the next one will be used.
In the shared memory of the server, special buffers are allocated for WAL. The *wal\_buffers* parameter specifies the size of the WAL cache (the default value implies automatic setting: 1/32 of the buffer cache is allocated).
The WAL cache is structured similarly to the buffer cache, but works mainly in the circular buffer mode: records are added to the «head», but get written to disk starting with the «tail».
The `pg_current_wal_lsn` and `pg_current_wal_insert_lsn` functions return the write («tail») and insert («head») locations, respectively:
```
=> SELECT pg_current_wal_lsn(), pg_current_wal_insert_lsn();
```
```
pg_current_wal_lsn | pg_current_wal_insert_lsn
--------------------+---------------------------
0/331E4E64 | 0/331E4EA0
(1 row)
```
To reference a certain record, the `pg_lsn` data type is used: it is a 64-bit integer that represents the byte offset of the beginning of the record with respect to the beginning of WAL. LSN (log sequence number) is output as two 32-bit hexadecimal numbers separated by a slash.
We can get to know in what file we will find the location needed and at what offset from the beginning of the file:
```
=> SELECT file_name, upper(to_hex(file_offset)) file_offset
FROM pg_walfile_name_offset('0/331E4E64');
```
```
file_name | file_offset
--------------------------+-------------
000000010000000000000033 | 1E4E64
\ /\ /
time 0/331E4E64
line
```
The filename consists of two parts. 8 high-order hexadecimal digits show the number of the time line (it is used in restoring from backup) and the remainder corresponds to the high-order LSN digits (and the rest low-order LSN digits show the offset).
In the file system, you can see WAL files in the `$PGDATA/pg_wal/` directory, but starting with PostgreSQL 10, you can also see them using a specialized function:
```
=> SELECT * FROM pg_ls_waldir() WHERE name = '000000010000000000000033';
```
```
name | size | modification
--------------------------+----------+------------------------
000000010000000000000033 | 16777216 | 2019-07-08 20:24:13+03
(1 row)
```
Write-ahead logging
===================
Let's see how WAL-logging is done and how writing ahead is ensured. Let's create a table:
```
=> CREATE TABLE wal(id integer);
=> INSERT INTO wal VALUES (1);
```
We will be looking into the header of the table page. To do this, we will need a well-known extension:
```
=> CREATE EXTENSION pageinspect;
```
Let's start a transaction and remember the location of insertion into WAL:
```
=> BEGIN;
=> SELECT pg_current_wal_insert_lsn();
```
```
pg_current_wal_insert_lsn
---------------------------
0/331F377C
(1 row)
```
Now we will perform some operation, for example, update a row:
```
=> UPDATE wal set id = id + 1;
```
This change was WAL-logged, and the insert location changed:
```
=> SELECT pg_current_wal_insert_lsn();
```
```
pg_current_wal_insert_lsn
---------------------------
0/331F37C4
(1 row)
```
To ensure the changed data page not to be flushed to disk prior to the WAL record, LSN of the last WAL record related to this page is stored in the page header:
```
=> SELECT lsn FROM page_header(get_raw_page('wal',0));
```
```
lsn
------------
0/331F37C4
(1 row)
```
Note that WAL is one for the entire cluster, and new records get there all the time. Therefore, LSN on the page can be less than the value just returned by the `pg_current_wal_insert_lsn` function. But since nothing is happening in our system, the numbers are the same.
Now let's commit the transaction.
```
=> COMMIT;
```
Commits are also WAL-logged, and the location changes again:
```
=> SELECT pg_current_wal_insert_lsn();
```
```
pg_current_wal_insert_lsn
---------------------------
0/331F37E8
(1 row)
```
Each commit changes the transaction status in the structure called XACT (we've [already discussed it](https://habr.com/ru/company/postgrespro/blog/477648/)). Statuses are stored in files, but they also use their own cache, which occupies 128 pages in the shared memory. Therefore, for XACT pages, LSN of the last WAL record also has to be tracked. But this information is stored in RAM rather than in the page itself.
WAL records created will once be written to disk. We will discuss sometime later when exactly this happens, but in the above situation, it has already happened:
```
=> SELECT pg_current_wal_lsn(), pg_current_wal_insert_lsn();
```
```
pg_current_wal_lsn | pg_current_wal_insert_lsn
--------------------+---------------------------
0/331F37E8 | 0/331F37E8
(1 row)
```
Hereafter the data and XACT pages can be flushed to disk. But if we had to flush them earlier, it would be detected and the WAL records would be forced to get to disk first.
Having two LSN locations, we can get the amount of WAL records between them (in bytes) by simply subtracting one from the other. We only need to cast the locations to the `pg_lsn` type:
```
=> SELECT '0/331F37E8'::pg_lsn - '0/331F377C'::pg_lsn;
```
```
?column?
----------
108
(1 row)
```
In this case, the update of the row and commit required 108 bytes in WAL.
The same way we can evaluate the amount of WAL records that the server generates per unit of time at a certain load. This is important information, which will be needed for tuning (which we will discuss next time).
Now let's use the `pg_waldump` utility to look at the WAL records created.
The utility can also work with a range of LSNs (as in this example) and select the records for a transaction specified. You should run the utility as `postgres` OS user since it will need access to WAL files on disk.
```
postgres$ /usr/lib/postgresql/11/bin/pg_waldump -p /var/lib/postgresql/11/main/pg_wal -s 0/331F377C -e 0/331F37E8 000000010000000000000033
```
```
rmgr: Heap len (rec/tot): 69/ 69, tx: 101085, lsn: 0/331F377C, prev 0/331F3014, desc: HOT_UPDATE off 1 xmax 101085 ; new off 2 xmax 0, blkref #0: rel 1663/16386/33081 blk 0
```
```
rmgr: Transaction len (rec/tot): 34/ 34, tx: 101085, lsn: 0/331F37C4, prev 0/331F377C, desc: COMMIT 2019-07-08 20:24:13.945435 MSK
```
Here we see the headers of two records.
The first one is the [HOT\_UPDATE](https://habr.com/ru/company/postgrespro/blog/483768/) operation, related to the *Heap* resource manager. The filename and page number are specified in the `blkref` field and are the same as of the updated table page:
```
=> SELECT pg_relation_filepath('wal');
```
```
pg_relation_filepath
----------------------
base/16386/33081
(1 row)
```
The second record is COMMIT, related to the *Transaction* resource manager.
This format is hardly easy-to-read, but allows us to clarify the things if needed.
Recovery
========
When we start the server, the **postmaster** process is launched first, which, in turn, launches the **startup** process, whose task is to ensure the recovery in case of failure.
To figure out a need for the recovery, **startup** looks at the cluster state in the specialized control file `$PGDATA/global/pg_control`. But we can also check the state on our own by means of the `pg_controldata` utility:
```
postgres$ /usr/lib/postgresql/11/bin/pg_controldata -D /var/lib/postgresql/11/main | grep state
```
```
Database cluster state: in production
```
A server that was shut down in a regular way will have the «shut down» state. If a server is not working, but the state is still «in production», it means that the DBMS is down and the recovery will be done automatically.
For the recovery, the **startup** process will sequentially read WAL and apply records to the pages if needed. The need can be determined by comparing LSN of the page on disk with LSN of the WAL record. If LSN of the page appears to be greater, the record does not need to be applied. Actually, it even cannot be applied since the records are meant to be applied in a strictly sequential order.
> But there are exceptions. Certain records are created as FPI (full page image), which overrides page contents and can therefore be applied to the page regardless of its state. A change to the transaction status can be applied to any version of a XACT page, so there is no need to store LSN inside such pages.
>
>
During a recovery, pages are changed in the buffer cache, as during regular work. To this end, `postmaster` launches the background processes needed.
WAL records are applied to files in a similar way: for example, if it is clear from a record that the file must exist, but it does not, the file is created.
And at the very end of the recovery process, respective [initialization forks](https://habr.com/ru/company/postgrespro/blog/469087/) overwrite all unlogged tables to make them empty.
This is a very simplified description of the algorithm. Specifically, we haven't said a word so far on where to start reading WAL records (we have to put a talk on this off until we discuss a checkpoint).
And the last thing to clarify. «Classically», a recovery process consists of two phases. At the first (roll forward) phase, log records are applied and the server redoes all the work lost due to failure. At the second (roll back) phase, transactions that were not committed by the moment of failure are rolled back. But PostgreSQL does not need the second phase. As we [discussed earlier](https://habr.com/ru/company/postgrespro/blog/477648/), thanks to the implementation features of the multiversion concurrency control, transactions do not need to be physically rolled back — it is sufficient that the commit bit is not set in XACT.
[Read on](https://habr.com/en/company/postgrespro/blog/494464/). | https://habr.com/ru/post/494246/ | null | null | 2,248 | 59.64 |
Bitcoin and Blockchain Security
For a complete listing of titles in the Artech House Information Security and Privacy Series, turn to the back of this book.
Bitcoin and Blockchain Security

Ghassan Karame
Elli Androulaki
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S. Library of Congress.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.

Cover design by John Gomes
ISBN 13: 978-1-63081-013-9
© 2016 ARTECH HOUSE
685 Canton Street
Norwood, MA 02062
Contents

Preface
Acknowledgments

Chapter 1 Introduction
  1.1 Book Structure
    1.1.1 Chapter 2
    1.1.2 Chapter 3
    1.1.3 Chapter 4
    1.1.4 Chapter 5
    1.1.5 Chapter 6
    1.1.6 Chapter 7
    1.1.7 Chapter 8
    1.1.8 Chapter 9
    1.1.9 Chapter 10

Chapter 2 Background on Digital Payments
  2.1 Payment Systems Architecture
  2.2 Security and Privacy in Payments
    2.2.1 Security
    2.2.2 Privacy
    2.2.3 Combining Security and Privacy
  2.3 Security in Payment Systems prior to Bitcoin
    2.3.1 Common Payment System Characteristics
    2.3.2 Privacy-preserving Payments Due to the Research Community
    2.3.3 Deployed Payment Systems
  2.4 Summary

Chapter 3 Bitcoin Protocol Specification
  3.1 Overview of Bitcoin
  3.2 Building Blocks and Cryptographic Tools
    3.2.1 Cryptographic Hash Functions
    3.2.2 Merkle Trees
    3.2.3 ECDSA
  3.3 Bitcoin Data Types
    3.3.1 Scripts
    3.3.2 Addresses
    3.3.3 Transactions
    3.3.4 Blocks
  3.4 Bitcoin Architecture
    3.4.1 Node Types
    3.4.2 Peer-to-Peer Overlay Network
  3.5 Scalability Measures in Bitcoin
    3.5.1 Request Management System
    3.5.2 Static Time-outs
    3.5.3 Recording Transaction Advertisements
    3.5.4 Internal Reputation Management System

Chapter 4 Security of Transactions in Bitcoin
  4.1 Security of Confirmed Transactions
    4.1.1 Transaction Verification
    4.1.2 Eclipse Attacks in Bitcoin
    4.1.3 Denying the Delivery of Transactions
    4.1.4 Transaction Confirmation
  4.2 Security of Zero-Confirmation Transactions
    4.2.1 (In-)Security of Zero-Confirmation Transactions
    4.2.2 Possible Countermeasures
  4.3 Bitcoin Forks
    4.3.1 Exploiting Forks to Double-Spend
    4.3.2 Fork Resolution

Chapter 5 Privacy in Bitcoin
  5.1 User Privacy in Bitcoin
    5.1.1 Protocol-Based Privacy Quantification in Bitcoin
    5.1.2 Exploiting Existing Bitcoin Client Implementations
    5.1.3 Summing Up: Behavior-Based Analysis
    5.1.4 Coin Tainting
    5.1.5 Risks of Holding Tainted Bitcoins
  5.2 Network-Layer Attacks
    5.2.1 Refresher on Bitcoin P2P Network Setup
    5.2.2 Privacy Leakage over the Bitcoin Network
  5.3 Enhancing Privacy in Bitcoin
    5.3.1 Mixing Services
    5.3.2 CoinJoin
    5.3.3 Privacy-Preserving Bitcoin Protocol Enhancements
    5.3.4 Extending ZeroCoin: EZC and ZeroCash
  5.4 Summary

Chapter 6 Security and Privacy of Lightweight Clients
  6.1 Simple Payment Verification
    6.1.1 Overview
    6.1.2 Specification of SPV Mode
    6.1.3 Security Provisions of SPV Mode
  6.2 Privacy Provisions of Lightweight Clients
    6.2.1 Bloom Filters
    6.2.2 Privacy Provisions
    6.2.3 Leakage Due to the Network Layer
    6.2.4 Leakage Due to the Insertion of Both Public Keys and Addresses in the Bloom Filter
    6.2.5 Leakage under a Single Bloom Filter
    6.2.6 Leakage under Multiple Bloom Filters
    6.2.7 Summary
    6.2.8 Countermeasure of Gervais et al.

Chapter 7 Bitcoin's Ecosystem
  7.1 Payment Processors
  7.2 Bitcoin Exchanges
  7.3 Bitcoin Wallets
    7.3.1 Securing Bitcoin Wallets
  7.4 Mining Pools
    7.4.1 Impact of Mining Pools on De-centralization
  7.5 Betting Platforms
  7.6 Protocol Maintenance and Modifications
    7.6.1 Bitcoin Improvement Proposals
    7.6.2 The Need for Transparent Decision Making
  7.7 Concluding Remarks

Chapter 8 Applications and Extensions of Bitcoin
  8.1 Extensions of Bitcoin
    8.1.1 Litecoin
    8.1.2 Dogecoin
    8.1.3 Namecoin
    8.1.4 Digital Assets
  8.2 Applications of Bitcoin's Blockchain
    8.2.1 Robust Decentralized Storage
    8.2.2 Permacoin
    8.2.3 Decentralized Identity Management
    8.2.4 Time-Dependent Source of Randomness
    8.2.5 Smart Contracts
  8.3 Concluding Remarks

Chapter 9 Blockchain Beyond Bitcoin
  9.1 Sidechains
  9.2 Ethereum
    9.2.1 Accounts
    9.2.2 Transactions and Messages
    9.2.3 State and Transaction Execution
    9.2.4 Blocks
    9.2.5 Mining and Blockchain
  9.3 Open Blockchain
    9.3.1 Membership Services
    9.3.2 Transactions Life-cycle
    9.3.3 Possible Extensions
  9.4 Ripple
    9.4.1 Overview of Ripple
  9.5 Comparison between Bitcoin, Ripple, Ethereum, and Open Blockchain
    9.5.1 Security
    9.5.2 Consensus Speed
    9.5.3 Privacy and Anonymity
    9.5.4 Clients, Protocol Update, and Maintenance
    9.5.5 Decentralized Deployment

Chapter 10 Concluding Remarks

About the Authors

Index
Preface

It is true that we did not really believe in Bitcoin at that time. We believed that Bitcoin was an interesting protocol allowing computer geeks to make money by running a program on their PC. Our surprise was mainly that Bitcoin—in which a transaction takes almost an hour to be confirmed—was used to handle fast payments! We decided to immediately write a paper to warn the community against such usage of Bitcoin; in our paper, we showed analytically and experimentally that double-spending in Bitcoin can be easily realized in the network on unconfirmed transactions. At that time, we bought 10 Bitcoins for 5 Swiss Francs, and I remember thinking: "These Bitcoins are really expensive" (I wish I knew better.) Our paper was published at ACM CCS 2012, which is one of the most prestigious computer security conferences in the world. We additionally proposed a countermeasure to allow fast payments with minimal risk of double-spending; our countermeasure was eventually integrated in Bitcoin XT. From that point on, we delved into researching Bitcoin. This resulted in a number of papers that appeared at top security and privacy conferences; the first few lines of our introductions would evolve from "Bitcoin is receiving considerable attention in the community" to something that turned out to be a big surprise to us as well: "Bitcoin has received more adoption than any other digital currency proposed to date."
Five years after our first research paper on Bitcoin (during which we published eight research papers on Bitcoin at top security venues), we decided that it was time to share our Bitcoin experience, and the various lessons that we learned, with a broader audience. This book is mostly intended for computer scientists/engineers and security experts. If you are interested in Bitcoin, and you have general computer science knowledge, this book will teach you all that you need to know about the security and privacy provisions of Bitcoin.
Dr. Ghassan Karame
Acknowledgments

First and foremost, we would like to express our deep gratitude to Srdjan Capkun, Arthur Gervais, and Hubert Ritzdorf for many of the interesting research collaborations and discussions related to the book contents. Special thanks are also due to Arthur Gervais and Angelo De Caro for coauthoring some of the chapters in this book. The authors would also like to thank Wenting Li, Damian Gruber, and David Froelicher for their invaluable support and comments on the book contents. We are also grateful for all the help that we received from family members, friends, and colleagues who helped us in writing one of the most comprehensive security books on Bitcoin and the blockchain.
Chapter 1
Introduction

With the publication of the Bitcoin white paper in 2008, and the subsequent delivery of a first prototype implementation of Bitcoin 2 months later, the individual or group behind the alias "Satoshi Nakamoto" was able to forge a new class of decentralized currency..
• Open-sourcing Bitcoin's code solicits the participation of skilled developers who are interested in attaining immediate impact in the community. Their contribution to the Bitcoin code will be reflected in official Bitcoin client releases, which will impact the experience of all Bitcoin users.
• Users were asked to collaboratively contribute in confirming financial transactions; besides involving active user participation in regulating the Bitcoin ecosystem, several users saw in Bitcoin a novel way to invest their computing power and collect immediate financial returns.
•
fine-tune the consensus (i.e., the block generation time and the hash function), and the network parameters (e.g., the size of blocks):

• What are the actual assumptions governing the security of Bitcoin? Is Bitcoin truly secure if 50% of the mining computing power is honest?
• To which extent do the scalability measures adopted in Bitcoin threaten the underlying security of the system?
• To which extent does Bitcoin offer privacy to its users? How can one quantify the user privacy offered by Bitcoin?
• Are lightweight clients secure? To what extent do lightweight clients threaten the privacy of users?
• What are the proper means to secure Bitcoin wallets?
• Who effectively controls Bitcoin?
• How do the security and privacy provisions of other blockchain technologies compare to Bitcoin?
•
1.1.2 Chapter 3
motivate a careful assessment of the current implementation of SPV clients prior to any large-scale deployment.

1.1.6 Chapter 7
In Chapter 7, we analyze blockchain technologies beyond Bitcoin's blockchain. Namely, we describe Namecoin, the first clone of Bitcoin, which implements a decentralized Domain Name Service for registering Web addresses that end in ".bit."
Chapter 2
Background on Digital Payments

In this chapter, we provide an overview of the predecessors of Bitcoin and their associated crypto-based payment schemes. In particular, we define the notions of payment security and privacy as established in already existing payment systems. Next, we provide an overview of alternatives to banking-based payment technologies that preceded Bitcoin, with a particular focus on their security, privacy provisions, and implementation deficiencies (if any). More specifically, in Sections 2.1 and 2.2, we present a generic payment model, detailing its architecture and security and privacy requirements. In Section 2.3, we list a number of desirable properties of payment systems and their impact on security and performance. We also investigate prominent deployed payment schemes prior to Bitcoin that seek to achieve those properties.
2.1 PAYMENT SYSTEMS ARCHITECTURE
As the name suggests, payment systems facilitate the exchange of money between two entities—a payer and a payee. Apart from the payer and payee, a payment system traditionally involves two more entities: one entity that manages assets and/or funds on behalf of the payer, known as the issuing bank (or issuer), and another entity that maintains an account for the payee, known as the acquiring bank, or acquirer.1 For simplicity, we will use the terms payer/payee interchangeably to refer to the buyer/merchant, and we refer to all other parties as users.

1. In practice, the acquirer and issuer can represent the same physical entity (e.g., bank).
In what follows, we adapt the classification of payment systems from [1]. Namely, we distinguish between cash-like payments, where payers need to withdraw their funds before using them in payments, and check-like payments, in which the payers do not need to engage in a withdrawal operation prior to committing to a payment (the money withdrawal takes place later in time). Figures 2.1 and 2.2 depict the respective architectures of cash-like and check-like systems.

The operations of a typical cash-like system are depicted in Figure 2.1. In a cash-like system, the payer's account is charged before the actual payment takes place. That is, the payer first contacts the issuer to withdraw some funds from his or her account. The payer can obtain his or her funds in various forms (e.g., in a credited smart-card, electronic cash). The payer and payee subsequently interact for the requested payment amount to be deducted from the payer's funds. The acquirer is made aware of the payment through a special deposit operation, where the payee deposits the payments that he or she has received.

The interactions of users and banks within a check-like system are depicted in Figure 2.2. As opposed to cash-like systems, in check-like payments, the account of the payer is charged after the payment actually takes place (or concurrently with the payment). The latter case captures a credit card payment. Typically, in a check-like system, a payment request is initiated by the payer, who sends the payee a check paying the latter. The payee forwards the request to the acquirer, which notifies the issuer. The issuer evaluates the payment request and, if it deems it valid, settles the payment with the acquirer. Depending on the protocol, the issuing bank may send a message to the payer requesting a final approval of the payment or a notification that the payment was successfully processed (if the payment request already contains enough information).
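The cash-like flow just described (withdraw, then pay, then deposit) can be sketched in a few lines; the entity names mirror Figure 2.1, while the balances, method names, and the representation of withdrawn funds are purely illustrative.

```python
from dataclasses import dataclass, field

# Toy model of the cash-like flow: the payer is charged at withdrawal time,
# and the acquirer only learns of the payment at deposit time.
@dataclass
class Bank:
    accounts: dict = field(default_factory=dict)

    def withdraw(self, holder: str, amount: int) -> int:
        assert self.accounts[holder] >= amount, "insufficient funds"
        self.accounts[holder] -= amount
        return amount  # funds now circulate outside the bank (e.g., e-cash)

    def deposit(self, holder: str, amount: int) -> None:
        self.accounts[holder] += amount

issuer = Bank({"payer": 100})
acquirer = Bank({"payee": 0})

funds = issuer.withdraw("payer", 30)  # 1. payer charged *before* the payment
payment = funds                       # 2. payer hands the funds to the payee
acquirer.deposit("payee", payment)    # 3. payee deposits with the acquirer

print(issuer.accounts, acquirer.accounts)  # {'payer': 70} {'payee': 30}
```

A check-like flow would invert steps 1 and 3: the payment reaches the payee first, and the payer's account is only debited when the issuer settles with the acquirer.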
Another popular means of classifying payment schemes is to categorize them into interactive and noninteractive, based on whether they require the active participation of both parties. Extensions of such architectures incorporate mediators that perform the payments on behalf of the users following the user requests. Naturally, in mediator-based payment systems, payers do not directly communicate with their bank account. Instead, they manage their funds through accounts opened with a third party that is responsible for sending user-authenticated payment requests as defined by the protocol. Mobile phone-enabled payments are a prominent example of such payments. Another variant of such architectures involves systems like PayPal [2], where users open an account to which they transfer money from their bank account. Payments can be executed in this way by any user who owns a PayPal account.
Figure 2.1 Cash-like payment system architecture.
Depending on the type of interaction, such mediators can also play the role of payment escrows; these are entities that monitor a particular transaction and ensure the proper exchange of money and goods before the payments are confirmed. Though such functionality has become popular during the last few years, older systems such as PayPal can be considered pioneering examples of this category.
2.2 SECURITY AND PRIVACY IN PAYMENTS
In this section, we refer to well-established security and privacy requirements among payment systems. In particular, we define these properties in the context of payment systems and overview the challenges associated with each of them.

Strictly speaking, security in information systems is defined by the combination of information integrity, availability, and confidentiality. In some other contexts, security can be obtained using the combination of authentication, authorization, and identification. As we explain in Section 2.2.1, payment systems require security properties that belong to both security descriptions.

Privacy defines the right of each individual to determine the degree to which it will interact and share information with its environment. Privacy is a fundamental right of individuals and strongly relates to data confidentiality. Namely, providing privacy guarantees in payment systems is of crucial importance, and has been the clear focus of researchers and system developers. Given this, we discuss privacy/confidentiality of payment systems in Section 2.2.2. In Section 2.2.1, we focus on other security properties of such systems, such as integrity and availability.

Figure 2.2 Check-like payment system architecture.

2.2.1 Security
We start our quest by considering the breakdown of security into integrity, availability, and confidentiality, and continue with other security concepts that are specific to payment systems, such as fairness and resistance to impersonation attacks, among others.

Integrity is defined as the assurance that information is not altered or modified except by properly authorized individuals. Thus, for integrity purposes, maintaining the consistency, accuracy, and trustworthiness of data over its entire life cycle becomes crucial. In the context of payment systems, integrity is important when it comes to the integrity of payment records and payment requests. That is, no unauthorized party should be able to alter the contents of a particular payment
request without invalidating the payment request itself or being detected. Common measures to enable data integrity verification against intentional or accidental data modifications include the use of cryptographic techniques, such as cryptographic checksums. Other measures involve controlling the physical environment of networked terminals and servers, restricting access to data, and maintaining rigorous authentication practices. Data integrity can also be threatened by hardware failures or environmental hazards, such as heat, dust, and electrical surges.

Availability ensures that the payment system can serve authorized user requests and fulfill its purpose whenever the service is supposed to be active. Ensuring this property expands to many aspects of the payment system. Moreover, providing adequate communication bandwidth and preventing the occurrence of bottlenecks are equally important when designing secure payment systems.

Apart from the basic security requirements, security in payment systems has been commonly bound to the fairness of the underlying payment mechanism, as well as to the resistance to impersonation attacks and accountability. Fairness requires that a particular individual cannot commit to more payments than its assumed account balance. For example, users should not be able to use their credit card to pay more than their credit limit, or use their debit card to pay more than their debit account balance. This property is also commonly known as balance [3] in the literature. On the other hand, resistance to impersonation attacks requires that no one should be able to impersonate other users or perform payments on behalf of other users without their consent. Nonrepudiation is another aspect of the same property: it requires that a user should not be able to deny a transaction that he or she carried out or authorized.
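To make the checksum idea concrete, the sketch below protects a payment request with a keyed cryptographic checksum (an HMAC); the shared key and message format are hypothetical, and a deployed system would typically use digital signatures to also obtain nonrepudiation.

```python
import hashlib
import hmac

# Illustrative integrity check on a payment request using HMAC-SHA256.
key = b"shared-secret-between-payer-and-issuer"  # hypothetical shared key
request = b"pay 25.00 EUR to merchant-17"

tag = hmac.new(key, request, hashlib.sha256).digest()  # sent with the request

def verify(message: bytes, received_tag: bytes) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, received_tag)  # constant-time compare

assert verify(request, tag)                              # unmodified: accepted
assert not verify(b"pay 99.00 EUR to merchant-17", tag)  # altered: rejected
```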
Finally, accountability requires that a user who misbehaves (i.e., behaves against the regulation of the account holder or the law) can eventually be held accountable for his or her acts. This concept of security is also strongly associated with nonrepudiation.

Common mechanisms and concepts to provide integrity, confidentiality, and (implicitly) availability in payment systems enforce strong authentication and proper authorization. Authentication is the method or mechanism for ascertaining that somebody really is who they claim to be. Authorization, on the other hand, refers to rules that determine who is allowed to do what in a system.

2.2.2 Privacy
As mentioned earlier, privacy is the right of each individual to determine the degree to which they will interact and exchange information with their environment. As
such, privacy is strongly associated with data confidentiality, which mandates that data should not be accessible by unauthorized individuals. Popular privacy concepts in the context of payment systems consist of transaction anonymity and transaction unlinkability. Assuming a set of user identities that make use of a payment system, transaction anonymity requires that one cannot link a particular transaction to a specific identity any more than to any other identity that is part of the system. On the other hand, transaction unlinkability requires that two transactions of the same individual cannot be linked as such. Depending on the system, privacy of clients committing to transactions can be considered from the perspective of the banks and/or the users' transaction partners.

Given that the transactions of a particular individual contain sensitive information about consumers, privacy has been the subject of investigation in payment systems for a considerable amount of time. As we shall see later, cryptographic primitives, such as anonymous credentials and anonymous electronic cash, are means that were proposed almost a decade ago to protect the fundamental rights of individuals.

2.2.3 Combining Security and Privacy
During the design of security and privacy mechanisms for payment systems, banks were assumed to be rational entities aiming to maximize their profit. Rational entities refer to entities that would only deviate from the protocol (i.e., act maliciously) if such a misbehavior would increase their advantage in the system. Thus, banks would behave as the protocol suggests as long as their activity is traceable, but could be tempted to attack the privacy of their clients if such an activity could not be traced back to them. Using such information about their clients, banks can profile their clients, and/or sell their data to third parties, and so forth. This was very accurately reflected by the degree to which security and privacy mechanisms were adopted by banks.

Security emerges as one of the most critical properties for payment systems. Unless users are sure that they are not in danger of losing their funds, they would not leverage any payment service of a particular bank. As a consequence, most banks have invested considerable resources in devising secure mechanisms for performing payments in the off-line and online realms.

However, this model did not work the same way for privacy mechanisms. While banks have been (and still are) investing funds in anonymizing their client data (e.g., when sending them to third parties for processing), they have been neglecting mechanisms to make client payments privacy-preserving with respect to
them. Notice that privacy-preserving payments would prevent banks from profiling their clients in terms of payments and would increase their risk in services such as granting a loan. This is the main reason why privacy-preserving mechanisms associated with bank payments have not yet been adopted.
2.3 SECURITY IN PAYMENT SYSTEMS PRIOR TO BITCOIN
In this section, we elaborate on prominent research and industrial payment systems (and their security properties) that predate Bitcoin. In particular, in Section 2.3.1, we elaborate on the security vulnerabilities featured in these payment systems. By extracting appropriate lessons from these vulnerabilities, we overview research results in the field of digital payment systems and show how these systems resist or deal with the aforementioned threats in Sections 2.3.2 and 2.3.3. In our overview, we focus on the systems that are of historical importance or became popular throughout their deployment.

2.3.1 Common Payment System Characteristics
Bitcoin is a payment system that satisfies the following properties:

• Liveliness of payments
• No need for payment mediator
• Decentralization of trust in payment processing
• Support for micropayments
• No need for (expensive) specialized hardware

We elaborate in this section on each of these properties separately by extracting lessons from the security vulnerabilities witnessed by previously deployed payment systems.

2.3.1.1 Liveliness of the Interaction with the Banks
One can classify payment systems in two broad categories as described in [1]: online payments and off-line payments, depending on whether there is a need to contact a single (trusted) entity for the payment to take place (in addition to the transaction
participants). Clearly, online payment systems require that contact with a third party, such as an authentication or authorization server (e.g., a server of the bank confirming a transaction), is necessary whenever a payment is to take place. In contrast, in off-line systems, payments can be executed by requiring only the active participation of the payer and the payee. Credit card payments are examples of online payments, while prepaid card-based payments are prominent examples of off-line payments.

From a security perspective, off-line systems are vulnerable to double-spending attacks, where an adversary attempts to spend more than the available (off-line) balance (pretending that another off-line payment did not take place). Such attacks against fairness have been identified in the literature and resulted in the incorporation of a number of measures to prevent/detect such threats. Among these measures, hardware-based approaches are the most widely used in order to deter double-spending in off-line payments (e.g., by utilizing smart cards). Another method to offer double-spending-resistant off-line payments is to rely on nominal checks that have already been preapproved by a third party to be received by the specific merchant [1]. Finally, additional methods are based on detection of double-spending acts or on tracing and punishing the double-spenders [3].
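At deposit time, detection of a double-spent coin reduces to checking whether the coin's serial number has been seen before. The sketch below is deliberately simplified and only rejects the duplicate; the ecash schemes discussed later in this chapter additionally reveal the identity of the double-spender.

```python
# Hypothetical deposit-side double-spending check: the bank keeps the list
# of serial numbers of all coins that have already been deposited.
seen_serials: set[str] = set()

def deposit(serial: str) -> bool:
    """Accept a coin unless its serial number was deposited before."""
    if serial in seen_serials:
        return False  # double-spend detected: reject (and trace the payer)
    seen_serials.add(serial)
    return True

assert deposit("coin-1")      # first deposit of coin-1: accepted
assert deposit("coin-2")      # a different coin: accepted
assert not deposit("coin-1")  # coin-1 deposited again: rejected
```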
2.3.1.2 Mediator-Based Services
Mediator-based systems refer to systems where payments are not performed directly by the payer and its counterparties, but through a mediator that offers a payment escrow service to the payer. In such systems, users maintain accounts that they credit/debit directly from their bank accounts. Subsequently, users can make payments to any other user who maintains an account with the same service. PayPal [2] is a representative example of such systems. In many cases, apart from executing the payment itself, these services act as escrows of the fairness of the transaction that takes place.

Though such systems do not tend to suffer from double-spending attacks, they place complete trust in the mediator. That is, if the mediator (service) is compromised or acts maliciously, the security and privacy of transactions can be put at risk. Although it appeared in a period after Bitcoin was introduced, MtGox [4] is a prominent example of such a service. Depending on the service and the trust the user has to put in it (e.g., account details, strong identification information), funds can be withdrawn from the user's account to perform payments on his or her behalf without the latter's actual consent.
2.3.1.3 Decentralization of Trust
Payment systems can be centralized; that is, they can have one designated entity that is authorized to approve or reject payments on behalf of a particular individual. Payments executed through banks belong to this category. A payment system is considered to be decentralized if a limited number of specially designated entities participate in the processing of transactions. Moreover, we say that a system is open if any entity can participate in the procedure of confirming or rejecting a transaction. Bitcoin is a prominent example of this category of payment systems. In the latter two cases, specially crafted consensus protocols are in place to guarantee that transaction processing is atomic (i.e., a transaction is either approved/confirmed or rejected). The same problem is not present in centralized payment systems, where a single entity receives and processes the user payment requests. Though centralized systems do not suffer from double-spending attacks, they put complete trust in the single entity that processes payments. That is, if this service starts acting maliciously (e.g., is compromised), the security of the system and the privacy of transactions are at serious risk.
2.3.1.4 Support for Micropayments
Micropayments, such as MiniPay [5], are low-value payments that in many scenarios occur repeatedly and fast. Due to their low value, micropayment systems suffer from two fundamental problems. First, the payment processing cost in these systems may be higher than the actual payment value if payments are processed similarly to conventional payments. Naturally, this can be against the benefit of the party that processes transactions, or require a transaction fee from the payer that is disproportionate to the payment amount. M-Pesa [6] is a prominent example of this case. Second, for the common case of micropayments that occur frequently, payment processing should be considerably faster than conventional payments. Therefore, such systems may deviate in terms of performance and security requirements from systems serving conventional payments and could require different system designs.
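A common way to address the first problem is probabilistic (lottery-style) processing: only a pseudorandom 2^-K fraction of payments is actually deposited, and each deposited payment is scaled by 2^K, so expected revenue matches the value spent while the bank processes far fewer transactions. The simulation below illustrates the idea; the parameters and payment encoding are ours, not those of any deployed scheme.

```python
import hashlib

K = 4       # a payment is depositable with probability 2**-K
X = 1 << K  # scaling factor: the bank credits X * val per deposited payment
val = 1     # each micropayment is worth one unit

def depositable(payment: bytes) -> bool:
    # A payment is depositable iff its hash falls in a 2**-K fraction of the
    # output space (neither payer nor payee can steer a secure hash there).
    digest = hashlib.sha256(payment).digest()
    return int.from_bytes(digest, "big") % X == 0

payments = [i.to_bytes(8, "big") for i in range(100_000)]  # stand-in payments
deposited = [p for p in payments if depositable(p)]

spent = len(payments) * val          # value actually transferred by payers
credited = len(deposited) * X * val  # value the bank credits to payees
print(spent, credited)               # credited is close to spent on average
```

The bank thus processes roughly one in 2^K payments, which brings the per-payment processing cost in line with the low payment values.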
2.3.1.5 Need for Special Hardware/Software
As mentioned in Section 2.3.1.1, to support certain security properties, tamper-resistant hardware is needed on the payer side, the payee side, or both. Additionally, recent schemes to support privacy-preserving transactions require sophisticated cryptographic operations to be implemented and performed on the payer and payee side. We classify the specifications of existing payment systems in the following categories:

• Specifications requiring asymmetric cryptographic operations (e.g., some public key infrastructure to be in place).
• Specifications requiring only symmetric cryptographic operations, where only symmetric keys are needed and the associated symmetric operations need to be implemented.
• Specifications requiring secure hash functions, where there is only a need for an implementation of a hash function.

Clearly, the more complex the cryptographic operations utilized within a system, the more complicated the hardware needed to accelerate the computations taking place within it, and the more necessary and expensive that hardware becomes. Although the cost of hardware is not a primary issue in payment systems, it tends to considerably influence the adoption of a given payment system.

2.3.2 Privacy-Preserving Payments Due to the Research Community
As mentioned previously, privacy-preserving payments attracted the attention of the research community in the early years of applied cryptography. In the following, we introduce digital or electronic cash, which aimed to replace conventional cash, and proceed with primitives to enable privacy-preserving checks and credit card-based payments. Finally, we provide a look at other use cases related to privacy-preserving payments, such as the stock market and taxation.

2.3.2.1 Digital Cash
In a first attempt to offer privacy-preserving payments that would reflect real-world needs, in their paper "Revokable and Versatile Electronic Money" [7], Jakobsson and Yung first presented the necessity for revokable anonymity in payments. To achieve this, users maintain their funds in bank accounts that carry their name
or identity. To make payments, users withdraw anonymous coins from these accounts using a three-party protocol that takes place between the user, the bank, and a trusted "ombudsman" in charge of revocation. The user chooses a coin sequence number, and using a blind signature system,2 the bank and ombudsman generate a bank signature on information related to the coin. Clearly, the ombudsman can potentially revoke the anonymity of a user. Although satisfying the need for revokable anonymity in funds management, this scheme does not protect account activity information from the bank, since accounts themselves were not anonymous.

Digital cash or electronic cash was introduced by Chaum [8] as a cryptographic primitive to offer privacy-preserving cash-like payments. In its first version, digital cash is offered as a substitute for money on the Internet that cannot be faked, copied, or spent more than once, as long as the online participation of a bank can be guaranteed. In addition, it is known to provide absolute anonymity; namely, no one, not even the bank itself, can relate a particular ecash coin (ecoin) with its owner. Consumers can indeed buy anonymous ecoins from a bank/mint and use them in their online transactions without being traced. Camenisch, Lysyanskaya, and others [9–11] enhanced the work in [8] with accountability features, while offering the possibility of off-line transactions with resistance to double-spending. In [11], an ecash-based electronic payment system was introduced by taking into consideration real-world system threats.

More concretely, an ecash (EC) [9, 12] system consists of three types of players: the bank, users, and merchants. To open a bank account, users engage in a registration process through which they generate cryptographic keys to be identified within the system (EC.GenKey). When they need to perform a payment, the users contact the bank and withdraw funds in the form of ecash coins (EC.Withdraw).
In most schemes, the users spend their coins among other users without the need to contact the bank at the time (EC.Spend). Users are not required to reveal their identities through EC.Spend and may participate in transactions through one-time pseudonyms [3, 13]. However, they need to use the secret keys they used during EC.Withdraw for the payment protocol not to fail. After the payment completes, merchants need to deposit their coins in the bank to allow double-spending detection to take place (EC.Deposit). These keys (and thus the identity of a payer) can be revealed (through EC.Identify) only if the payer attempts to double-spend a coin (confirmed through EC.VerifyGuilt). Depending on the scheme, it is only the identity of the double-spender or the entire set of his or her transactions that is revealed (EC.Trace).

A summary of the input and output specifications of the basic operations is listed below.

• (pk_B, sk_B) ← EC.BGenKey(1^k, params) and (pk_u, sk_u) ← EC.UKeyGen(1^k, params), which are the key generation algorithms for the bank and the users, respectively.
• ⟨W, ⊤⟩ ← EC.Withdraw(pk_B, pk_u, n) [u(sk_u), B(sk_B)]. In this interactive procedure, u withdraws a wallet W of n coins from B.
• ⟨W′, (S, π)⟩ ← EC.Spend(pk_M, pk_B, n) [u(W), M(sk_M)]. In this interactive procedure, u spends a digital coin with serial number S from his or her wallet W to M. When the procedure is over, W is reduced to W′, and M obtains as output a coin (S, π), where π is a proof of a valid coin with serial number S.
• ⟨⊤/⊥, L′⟩ ← EC.Deposit(pk_M, pk_B) [M(sk_M, S, π), B(sk_B, L)]. In this interactive procedure, M deposits a coin (S, π) into its account in the bank. If this procedure is successful, M's output will be ⊤ and the bank's list L of the spent coins will be updated to L′.
• (pk_u, Π_G) ← EC.Identify(params, S, π_1, π_2). When the bank receives two coins with the same serial number S and validity proofs π_1 and π_2, it executes this procedure to reveal the public key of the violator accompanied by a violation proof Π_G.
• ⊤/⊥ ← EC.VerifyGuilt(params, S, pk_u, Π_G). This algorithm, given Π_G, publicly verifies the violation of pk_u.
• {(S_i, Π_i)}_i ← EC.Trace(params, S, pk_u, Π_G, D, n). This algorithm provides a list of serial numbers S_i of the ecoins a violator pk_u has issued, with the corresponding ownership proofs Π_i.
• ⊤/⊥ ← EC.VerifyOwnership(params, S, Π, pk_u, n). This algorithm allows one to publicly verify the proof Π that a coin with serial number S belongs to a user with public key pk_u.

2. Blind signatures refer to a cryptographic primitive that allows an entity to digitally sign a message without knowing or being able to read the message that it signs.

Camenisch et al.
presented in [12] a money-laundering prevention version of [9], where anonymity is revoked when a spender spends more coins with the same merchant than his or her spending limit allows. In this case, ecoins are upgraded to C = (S, V, π), where V is a merchant-related locator, while the EC.Identify and EC.VerifyGuilt procedures are upgraded to DetectViolator and VerifyViolation to support the extended violation definition.
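The blind signature primitive underpinning EC.Withdraw can be illustrated with a toy Chaum-style RSA blind signature: the bank signs a blinded coin serial without ever seeing it, and the unblinded signature still verifies under the bank's public key. All parameters below (the tiny primes, serial, and blinding factor) are demo values with no security whatsoever.

```python
import hashlib

# Toy Chaum-style RSA blind signature. Real ecash uses large moduli and
# proper padding/full-domain hashing; this only shows the algebra.
p, q = 1009, 1013  # demo primes (far too small for real use)
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # bank's private signing exponent

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

serial = b"coin-serial-42"  # the coin's serial number, chosen by the user
r = 123457                  # user's blinding factor, coprime to n

blinded = (h(serial) * pow(r, e, n)) % n  # user blinds h(serial)
blind_sig = pow(blinded, d, n)            # bank signs, learning nothing
sig = (blind_sig * pow(r, -1, n)) % n     # user removes the blinding

# Anyone holding the bank's public key (n, e) can verify the coin.
assert pow(sig, e, n) == h(serial)
print("coin signature valid")
```

During EC.Deposit, the bank sees only (serial, sig); since it signed a blinded value, it cannot link the coin back to the withdrawal, which is exactly what makes the coin anonymous.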
Security Properties

Electronic cash systems are built with the following security properties: correctness, fairness, (conditional) anonymity, and resistance to impersonation attacks.

Correctness requires that the electronic cash protocols work as intended if all players are honest. Fairness requires that no collection of users and merchants can ever spend more coins than they withdraw. On the other hand, anonymity of users requires that no entity, not even the bank itself when colluding with any collection of (malicious) users and/or merchants, can obtain information on a user's spending other than the information intentionally released in the network by the user (and associated side-information).

User anonymity is commonly conditional on honest user behavior. That is, given a violation with respect to a predefined policy, an electronic cash system can output proofs of guilt Π_G and the violators' public keys pk_V such that EC.VerifyViolation accepts. Such a proof of violation enables systems to support the traceability of a violator's behavior. That is, EC.Trace will output the serial numbers of all coins that belong to the violator with public key pk_V along with the corresponding proofs of ownership that VerifyOwnership accepts. Finally, resistance to impersonation attacks requires that an honest user u cannot be accused of conducting a violation that EC.VerifyViolation accepts.

Summary

Although initial schemes offering digital cash were online-based, recent digital cash payment systems are off-line, offering to some extent resistance to double-spending (or misbehavior). In particular, ecash schemes reveal the identity of the double-spender and/or the entirety of their transactions. Except for their initial version, those schemes do not require any trusted third party (mediator). Finally, despite their privacy guarantees, these systems usually incur complicated cryptographic operations (e.g., blind signatures) and are therefore penalizing in terms of performance.

2.3.2.2 Credit Card-Based Payment Protocols
Credit card-based transactions tend to be popular for transactions over the wire, due to their ease of use and their risk management features. These protocols support delayed payment, and provide users with logs of their own transactions, receipts, and the opportunity to challenge and correct erroneous charges. In fact, frequent losses of credit cards, as well as impersonation attacks, justify the fact that banks (that
are no more trusted than the people operating them) maintain a detailed log of user transactions. At the same time, users acting as payees tend to see the identities of the payers when a credit card is used for payments. The latter constitutes a serious violation of the privacy of card holders with regard to banks. To remedy that, anonymous payment cards were introduced as the privacy equivalent of credit or debit cards.

Credit cards providing cardholder anonymity even toward the banks were introduced in 1994 by Low et al. [14] in their paper "Anonymous Credit Cards and its Collusion Analysis" [15]. Here, the authors present a system of anonymous lending accounts whose aim is to allow users to spend credit without revealing their identities to the stores. To achieve this, a user makes use of two banks: a credit bank who knows their identity (because they are extending the credit), and a payment bank who receives funds from the credit bank on behalf of the user and does not need to know the user's identity. The user is known to the payment bank only as an authentication key or pseudonym for signing instructions. The stores participating in this system make use of store banks. To establish credit, the user asks the credit bank to issue funds (up to a given credit limit) to the payment bank. The user can then spend credit at stores by anonymously instructing his or her payment bank to transfer funds to the appropriate store bank.

Although users have anonymous accounts with the payment bank, user privacy is not offered in this system when considering collusion between the payment and store banks. That is, a collaboration between these two banks could link a user's payments to the store. A number of other schemes, such as [16, 17], also proposed to blind credit card information from third parties or merchants, respectively, but not toward banks.

Androulaki and Bellovin [18] recently introduced a different system for managing anonymous lending accounts.
This system eliminates the need for a credit bank that knows the identity of the user. To do this, it makes use of two types of digital cash wallets introduced by Camenisch, Hohenberger, and Lysyanskaya [9, 12]. One allows for anonymous accounts that hold a specified amount of funding and reveal the identity of the owners if that threshold is exceeded. The other allows for similar limits, but reveals an entire transaction history of the corresponding user account if the same threshold is exceeded. For normal transactions, these wallets are filled with an appropriate spending limit and payments are made from both to ensure that those who exceed their limit are detected and dealt with appropriately. To handle monthly payment of debt, only the wallet is accessed to prove that a specific individual has used a specific amount of his or her limit.
Background on Digital Payments
Summary In general, proposed privacy-preserving credit card schemes offer anonymity and/or transaction unlinkability. Security is achieved in all cases by requiring the user to provide his or her secret key at each transaction; however, upon compromise of the user-side software, fraud detection can only take place on the user side (not through the bank). Clearly, these systems enable off-line payments and detect/identify double-spending (e.g., in the case of digital cash). Such systems leverage advanced cryptographic primitives, but do not support micropayments.

2.3.2.3 Micropayments
A number of payment schemes addressing the delicate requirements of micropayments (i.e., low processing fees and fast processing times) have been proposed in the literature [19–21]. Micropayment schemes adopt the principle that not all payments need to be processed, but only a representative sample of those payments (due to their low value). In what follows, we give an example of how MiniPay works. The scheme assumes that there is a one-way hash function H and a PKI in place, and that users of the system, payers and payees, generate and certify their keys upon opening an account with a bank. To make a payment, a user u (with public key pk_u) simply digitally signs a statement moving a certain amount of his or her funds to the account of the payee M (with public key pk_M): σ_{u→M} ← Sign(sk_u; pk_M || val), where sk_u is the secret key that corresponds to pk_u, and val refers to the value of the transaction. Upon receiving the payment, the payee checks if the received payment is depositable; that is, the payee checks if the result of the hash function on the payment satisfies certain conditions, expressed through a predicate Cond: Cond(H(σ_{u→M})) = true. The payee eventually deposits the payment to the bank, which moves x · val from the account of u to the one of M, where x > 1 is a system parameter that accounts for nondepositable payments of u. Privacy-preserving versions of micropayments have been introduced in the literature [19, 22]; here, users leverage cryptographic primitives equivalent to the
ones used in ecash to hide the identities of transaction participants and to make the transactions of the payer unlinkable. However, the privacy of the recipient is not guaranteed. Summary MiniPay is a scheme that addresses micropayment requirements with some privacy offerings. Payments in MiniPay do not require contacting a bank and can therefore be considered off-line. At the same time, double-spending is treated by leveraging the accountability properties of the digital signatures that users use to authorize their transactions. Even in this case, the security of the system rests upon trusted software and hardware that run the payment generation on the client side.
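The probabilistic deposit idea behind MiniPay can be sketched in a few lines of Python. This is an illustrative toy, not the actual protocol: the signature is replaced by an HMAC (a real deployment would use PKI-certified digital signatures), and the depositability condition Cond is instantiated as "the hash of the payment ends in k zero bits," so a payment is depositable with probability 1/x for x = 2^k.

```python
import hashlib
import hmac

def make_payment(sk: bytes, payee_pk: bytes, val: int) -> bytes:
    # Stand-in for sigma <- Sign(sk_u; pk_M || val); a real MiniPay user
    # would produce a PKI-certified digital signature here.
    return hmac.new(sk, payee_pk + val.to_bytes(8, "big"), hashlib.sha256).digest()

def depositable(sigma: bytes, k: int = 3) -> bool:
    # Cond(H(sigma)) = true iff H(sigma) ends in k zero bits, which holds
    # with probability 1/x for x = 2**k.
    digest = int.from_bytes(hashlib.sha256(sigma).digest(), "big")
    return digest % (2 ** k) == 0

# The bank processes only depositable payments, but moves x * val for each
# one, which compensates the payee for the skipped payments on average.
x = 2 ** 3
values = list(range(1, 2001))
payments = {v: make_payment(b"sk-user", b"pk-merchant", v) for v in values}
charged = x * sum(v for v in values if depositable(payments[v]))
```

Over many payments, the total amount charged converges to the total value actually spent, which is exactly why only a sample of payments needs to be processed.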
2.3.2.4 Other

2.3.3 Deployed Payment Systems
In this section, we provide an overview of payment systems that have already been deployed.
2.3.3.1 Zero-Knowledge Systems
Zero-Knowledge Systems (ZKS) was a company founded in 1997 to offer primarily privacy-preserving services. ZKS is well known for the Freedom network, its privacy-preserving network service. The main differentiator of ZKS was the strong privacy provisions offered within its services. ZKS technologies leveraged privacy-preserving cryptographic primitives introduced at the time by Brands, along the lines of the schemes described in Section 2.3.2.

2.3.3.2 PayPal
Established in 1998, PayPal started as a company offering its clients the ability to transfer funds electronically between individuals, organizations, or businesses [2]. Clients of PayPal have to register accounts, and once these accounts are connected to their bank/credit/debit account, they are able to send funds to anyone in possession of a PayPal account. PayPal accounts are traditionally associated with an email address, and knowledge of an email address is the minimum knowledge needed to move money to the account owned by that address. PayPal transactions are pseudonymous, and PayPal allows users to register several accounts, each linked to a given bank account or email address. Therefore, one could argue that PayPal does not offer transactional privacy as defined in the previous sections. Currently, a number of online markets, such as eBay and Amazon, accept PayPal payments. In typical cases, PayPal accounts are automatically refilled from their owners' bank accounts. To secure the PayPal account refill process, PayPal introduced two-factor authentication mechanisms. In particular, users would have to authenticate using their username and password and answer an additional challenge to be able to log in. The challenge was either a secret code sent to the user's mobile phone or a number that the user was supposed to feed to a pre-agreed hardware security module. This two-level authentication would prevent a potential attacker from modifying any transaction: the attacker would have to obtain access to the user's password and would also need to obtain the right answer to the challenge (by compromising another user device). Notice that the PayPal implementation of two-factor authentication does not seem to solve the issue of man-in-the-middle attacks, since a potential attacker could try to impersonate the PayPal website to the user (and vice versa) just by forwarding responses from one end to the other.
Summary PayPal is a mediator-based system that enables payments to users who maintain accounts with PayPal. The system does not offer privacy and requires trust to be placed in the mediator, who has to be online for the payment to complete.

2.3.3.3 IBM Micropayments
IBM Micropayments was developed by IBM Research and aims to efficiently support small-value transaction payments over the Internet. IBM Micropayments essentially implements the techniques introduced in MiniPay [5]. In IBM Micropayments, each entity (e.g., financial institution or Internet service provider) manages its own risk by operating its own billing service. These entities can also offer billing support as a service to consumers and merchants. IBM Micropayments supports interoperability between different types of billing systems, which led to a widespread adoption of this system.

2.3.3.4 Peppercoin
Similar to IBM Micropayments, Peppercoin [24] is another prominent micropayment system based on a research paper [25] that has found its way into the industrial world. Peppercoin shares similar design principles with MiniPay, as discussed in the previous paragraphs.

2.3.3.5 M-Pesa
M-Pesa enables mobile phone-based money transfer and microfinancing. M-Pesa was launched in 2007 by the largest mobile network operators in Kenya and Tanzania and has since expanded to many other countries (such as Afghanistan, South Africa, and India). M-Pesa allows users to withdraw, transfer, and receive funds, as well as perform purchases of goods/services. In particular, the service allows users to deposit money into an account stored on their cell phones. Payments are performed by requiring users to send (PIN-secured) SMS text messages to merchants. To redeem the received payments, these recipients are required to provide the correct PIN. M-Pesa profits from a small fee for each payment/money deposit that takes place through the service [26]. Given that M-Pesa was mainly introduced in countries where the banking network is poorly connected, the system was mainly designed to act as a branchless
banking service. Namely, M-Pesa customers can deposit and withdraw money through a network of agents that includes resellers and retail outlets acting as their banking agents. Summary Both M-Pesa and IBM Micropayments are systems that (functionality-wise) support micropayments. However, despite low operational costs, M-Pesa transaction fees are high when compared to the transaction value. This indicates the need in these countries for cheaper micropayment solutions. Privacy is not considered here, and the system assumes that trusted hardware and software, such as the mobile phone application, are in place.
2.4 SUMMARY
Table 2.1 provides a summary of the payment systems we discuss and their privacy and security guarantees. It is evident that although the theoretical background to build privacy-preserving systems exists, payment systems that have survived throughout the past few years do not support any privacy notion against either the bank or the provider of the payment service. Another remark is that in all the services provided so far (i.e., before the era of decentralized consensus networks and Bitcoin) the payment provider is always a centralized entity that has to be trusted. Micropayments are not sufficiently well supported by these systems but need to be handled by separate and dedicated payment systems. Although digital cash proposals achieve strong privacy guarantees, these proposals rely on complex cryptographic primitives—such as zero-knowledge proofs—which are not easily understood by application developers and system integrators. This was one of the main reasons explaining the slow development of practical solutions based on digital cash. In the following chapters, we start by describing the main operations of Bitcoin.
Table 2.1 Summary of Payment Systems Prior to Bitcoin and Their Classification

System         | Online/Off-line | Double-Spending Countermeasures | Need Mediator | Privacy Mechanism | Centralization | HW Requirements | Accountability | Crypto
Digital Cash   | Both            | Detection                       | No            | Yes               | Yes            | Yes             | Yes            | PKI
Digital Checks | Off-line        | Detection                       | Yes           | w.r.t. users      | Yes            | Yes             | Yes            | PKI
Micropayments  | Off-line        | Detection                       | No            | Yes/No            | Yes            | Yes             | Yes            | PKI
number of nodes. Open sourcing the implementation was also an excellent way for developers to maintain and support the growth of the system.
References

[1] N. Asokan, Philippe A. Janson, Michael Steiner, and Michael Waidner. The state of the art in electronic payment systems. IEEE Computer, 1997.

[2] Ed Grabianowski and Stephanie Crawford. How PayPal works. 2014.

[3] Jan Camenisch and Anna Lysyanskaya. An efficient system for non-transferable anonymous credentials with optional anonymity revocation. In Advances in Cryptology - EUROCRYPT 2001, volume 2045 of Lecture Notes in Computer Science, pages 93–118. Springer-Verlag, 2001.

[4] MtGox, available from.

[5] Amir Herzberg and Hilik Yochai. MiniPay: Charging per click on the web. In Selected Papers from the Sixth International Conference on World Wide Web, 1997.

[6] N. Hughes and S. Lonie. M-PESA: Mobile Money for the Unbanked: Turning Cellphones into 24-Hour Tellers in Kenya, 2007. Innovations: Technology.

[7] Markus Jakobsson and Moti Yung. Revokable and versatile electronic money (extended abstract). In CCS '96: Proceedings of the 3rd ACM Conference on Computer and Communications Security, pages 76–87, New York, 1996. ACM.

[8] David L. Chaum. Untraceable electronic mail, return addresses, and digital pseudonyms. Communications of the ACM, 1981.
[9] Jan Camenisch, Susan Hohenberger, and Anna Lysyanskaya. Compact e-cash. In Advances in Cryptology - EUROCRYPT 2005, Lecture Notes in Computer Science, pages 302–321. Springer-Verlag, 2005.

[10] D. Chaum, A. Fiat, and M. Naor. Untraceable electronic cash. In Proceedings on Advances in Cryptology - CRYPTO, 1990.

[11] S. Brands. Electronic Cash on the Internet. In Proceedings of the Symposium on the Network and Distributed System Security, 1995.

[12] Jan Camenisch, Susan Hohenberger, and Anna Lysyanskaya. Balancing accountability and privacy using e-cash (extended abstract). In Security and Cryptography for Networks, 2006.

[13] J. Camenisch and A. Lysyanskaya. A signature scheme with efficient protocols. In International Conference on Security in Communication Networks – SCN, volume 2576 of Lecture Notes in Computer Science, pages 268–289. Springer-Verlag, 2002.

[14] Steven H. Low, Sanjoy Paul, and Nicholas F. Maxemchuk. Anonymous credit cards. In CCS '94: Proceedings of the 2nd ACM Conference on Computer and Communications Security, pages 108–117, New York, 1994. ACM.

[15] S. Low, N. F. Maxemchuk, and S. Paul. Anonymous credit cards and its collusion analysis. IEEE Transactions on Networking, December 1996.

[16] Hugo Krawczyk. Blinding of credit card numbers in the SET protocol. In FC '99: Proceedings of the Third International Conference on Financial Cryptography, pages 17–28, London, 1999. Springer-Verlag.

[17] M. Bellare, J. Garay, R. Hauser, H. Krawczyk, A. Herzberg, G. Tsudik, E. van Herreweghen, M. Steiner, and M. Waidner. Design, Implementation and Deployment of the iKP Secure Electronic Payment System. IEEE Journal on Selected Areas in Communications, 18:611–627, 2000.

[18] Elli Androulaki and Steven Bellovin. An anonymous credit card system. In TrustBus '09: Proceedings of the 6th International Conference on Trust, Privacy and Security in Digital Business, pages 42–51, Berlin, Heidelberg, 2009. Springer-Verlag.
[19] Elli Androulaki, Mariana Raykova, Shreyas Srivatsan, Angelos Stavrou, and Steven M. Bellovin. PAR: Payment for anonymous routing. In Proceedings of the 8th International Symposium on Privacy Enhancing Technologies, 2008.

[20] Ronald L. Rivest and Adi Shamir. PayWord and MicroMint: Two simple micropayment schemes. In Proceedings of the International Workshop on Security Protocols, 1997.

[21] Ronald L. Rivest. Peppercoin micropayments. In Financial Cryptography, 2004.

[22] Yao Chen, Radu Sion, and Bogdan Carbunar. XPay: Practical anonymous payments for Tor routing and other networked services. In Proceedings of the 8th ACM Workshop on Privacy in the Electronic Society, 2009.

[23] Shouhuai Xu, Moti Yung, and Gendu Zhang. Scalable, tax evasion-free anonymous investing, 2000. Available from.

[24] Peppercoin. Available from.
[25] Ronald L. Rivest. Peppercoin micropayments. In Proceedings of Financial Cryptography, 2004.

[26] M-Pesa tariffs. Available from.
Chapter 3 Bitcoin Protocol Specification by Arthur Gervais and Ghassan Karame
In this chapter, we detail the operation of Bitcoin and summarize the main scalability measures integrated in the system.
3.1 OVERVIEW OF BITCOIN
Bitcoin operates on top of a loosely connected P2P network, where nodes can join and leave the network at will. Bitcoin nodes are connected to the overlay network over TCP/IP. Initially, peers bootstrap to the network by requesting peer address information from Domain Name System (DNS) seeds that provide a list of current Bitcoin node IP addresses. Newly connected nodes advertise peer IP addresses via Bitcoin addr messages. Notice that a default full Bitcoin client establishes a maximum of 125 TCP connections, of which up to 8 are outgoing TCP connections. In Bitcoin, payments are performed by issuing transactions that transfer Bitcoin coins, referred to as BTCs in the sequel, from the payer to the payee. These entities are called “peers,” and are referenced in each transaction by means of pseudonyms denoted by Bitcoin addresses. Each address maps to a unique public/private key pair; these keys are used to transfer the ownership of BTCs among addresses. A Bitcoin address is an identifier of 26 to 35 alphanumeric characters (usually beginning with either 1 or 3). Each Bitcoin address is computed from an Elliptic Curve Digital Signature Algorithm (ECDSA) public key—for which the address owner knows the corresponding private key—using a transformation based on hash functions. Since hashes
are one-way functions, it is possible to compute an address from a public key, but it is infeasible to retrieve the public key solely from the Bitcoin address.1 Recall that, using ECDSA signatures, a peer can sign a transaction using his or her private key; any other peer in the network can check the authenticity of this signature by verifying it using the public key of the signer. A Bitcoin transaction is formed by digitally signing a hash of the previous transaction where this coin was last spent along with the public key of the future owner and incorporating this signature in the coin. Transactions take as input the reference to an output of another transaction that spends the same coins and output the list of addresses that can collect the transferred coins. A transaction output can only be redeemed once, after which the output is no longer available to other transactions. Once ready, the transaction is signed by the user and broadcast in the P2P network. Any peer can verify the authenticity of a BTC by checking the chain of signatures. The difference between the input and output amounts of a transaction is collected in the form of fees by Bitcoin miners. Miners are peers that participate in the generation of Bitcoin blocks. These blocks are generated by solving a hash-based proof-of-work (PoW) scheme; more specifically, miners must find a nonce value that, when hashed with additional fields (e.g., the Merkle hash of all valid transactions, the hash of the previous block), the result is below a given target value. If such a nonce is found, miners then include it in a new block, thus allowing any entity to verify the PoW. Since each block links to the previously generated block, the Bitcoin blockchain grows upon the generation of a new block in the network. A Bitcoin block is mined on average every 10 minutes and currently awards 12.5 BTCs to the generating miner.
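The nonce search just described can be sketched as follows. The header here is an arbitrary byte string and the target is chosen to be very easy so that the search terminates quickly; a real Bitcoin header is a specific 80-byte structure, and the real target is vastly smaller.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin hashes block headers with two nested applications of SHA256.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, target: int, max_nonce: int = 2 ** 32):
    # Try successive nonce values until the double-SHA256 digest, read as an
    # integer, falls below the target.
    for nonce in range(max_nonce):
        digest = double_sha256(header + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
    return None

# A deliberately easy target: roughly 1 in 4,096 hashes succeeds.
easy_target = 1 << 244
```

Anyone can re-hash the header with the published nonce to verify the proof, which is why verification is cheap while block generation is expensive.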
It was shown in [2] that Bitcoin block generation follows a shifted geometric distribution with parameter 0.19. This also suggests that there is considerable variability in the generation times; for example, some blocks were generated after 99 minutes (e.g., block 152,218). During normal operations, miners typically work on extending the longest blockchain in the network. The longest blockchain is the chain featuring the largest total difficulty accumulated from the genesis block (not merely the largest number of blocks). Due to the underlying PoW scheme, different miners can potentially find different blocks nearly at the same time, in which case a fork in the blockchain occurs. Forks are inherently resolved by the Bitcoin system; the longest
1. The actual derivation of a Bitcoin address from a public key entails a series of transformations based on hashes, checksums, etc. For ease of presentation, we omit the details of the actual transformation. More detail on the construction of Bitcoin addresses can be found in [1].
blockchain that is backed by the majority of the computing power in the network will eventually prevail.
3.2 BUILDING BLOCKS AND CRYPTOGRAPHIC TOOLS
The Bitcoin protocol limits its use of cryptographic tools to cryptographic hash functions such as SHA256 and RIPEMD160, Merkle trees, and the Elliptic Curve Digital Signature Algorithm (ECDSA).

3.2.1 Cryptographic Hash Functions
Hash functions map an arbitrarily long input byte sequence to a fixed-size output, commonly referred to as a digest, effectively fingerprinting the input sequence. Cryptographic hash functions refer to hash functions that exhibit two essential properties: one-wayness and collision resistance. Let H : {0, 1}* → {0, 1}^n refer to a cryptographic hash function. Informally, the one-wayness property implies that given H(x), it is (computationally) infeasible to derive x. On the other hand, the collision-resistance property implies that it is computationally infeasible to find x ≠ y such that H(x) = H(y). The collision-resistance property of cryptographic hash functions constitutes an important security pillar in Bitcoin. For example, the PoW in Bitcoin is mainly based on computing hashes, and the id of a transaction corresponds to the hash of the transaction. Hash functions are a base component of different types of data structures used in Bitcoin (e.g., Merkle trees).

3.2.2 Merkle Trees
Merkle trees allow the combination of multiple input sequences in a hash tree converging into the topmost Merkle root hash. This data structure allows the compact representation of a set of transactions, such as when the tree is built up from the transaction hashes (see Figure 3.1). Merkle trees can be used to instantiate cryptographic accumulators, which answer a query whether a given candidate belongs to a set. A Merkle tree is a binary tree in which the data is stored in the leaves. More specifically, given a tree of height ℓ, a Merkle tree accumulates elements of a set X by assigning these to the leaf nodes (starting from position 0). Let a_{i,j} denote a node in the tree located at the ith level and jth position. Here, the level refers to
the distance (in hops) to the leaf nodes; clearly, leaf nodes are located at distance 0. On the other hand, the position within a level is computed incrementally from left to right starting from position 0; for example, the leftmost node of level 1 is denoted by a_{1,0}. In a Merkle tree, the intermediate nodes are computed as the hash of their respective child nodes; namely, a_{i+1,j} = H(a_{i,2j}, a_{i,2j+1}), where H(X) refers to the cryptographic hash of X. Figure 3.1 depicts an example of a Merkle tree accumulating eight elements. Here, a_{3,0} is referred to as the Merkle root and commits to all leaf elements U_0, ..., U_7. To prove the membership of element U_3 (highlighted in Figure 3.1) in the root a_{3,0}, the intermediate nodes a_{0,2}, a_{1,0}, and a_{2,1} (highlighted in ovals in Figure 3.1) are needed. We say that these nodes form the sibling path of U_3. Given n leaves, Merkle trees require O(n) time for constructing the tree and O(log(n)) to prove membership of any element in the tree. Formally, a Merkle tree comprises the following algorithms:

δ ← Acc(X). This algorithm accumulates the elements of a set X into a digest δ. Here, δ corresponds to the root node (i.e., δ = a_{ℓ,0}). This can be used to prove that the exact set X is correctly accumulated in δ.

π_M ← Prove_M(X, x). Given a set X and an element x ∈ X, this algorithm outputs a proof of membership π_M asserting that x ∈ X. π_M consists of the sibling path of x in the Merkle tree and the root a_{ℓ,0}.

Verify_M(δ, x, π_M). Given δ, an element x, its sibling path, and the root a_{ℓ,0}, this algorithm outputs true if and only if δ = a_{ℓ,0}, where ℓ is the length of the sibling path, and the sibling path of x matches the root a_{ℓ,0}.
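A sketch of the Acc, Prove_M, and Verify_M algorithms, assuming (as in Figure 3.1) that the number of leaves is a power of two and instantiating H with SHA256:

```python
import hashlib

def H(left: bytes, right: bytes = b"") -> bytes:
    return hashlib.sha256(left + right).digest()

def acc(leaves):
    # Acc(X): build all tree levels bottom-up; the root is levels[-1][0].
    levels = [list(leaves)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[2 * j], prev[2 * j + 1])
                       for j in range(len(prev) // 2)])
    return levels

def prove_m(levels, index):
    # Prove_M(X, x): collect the sibling path of the leaf at `index`, bottom-up.
    path = []
    for level in levels[:-1]:
        sib = index ^ 1
        path.append((level[sib], sib < index))  # (sibling hash, sibling-is-left)
        index //= 2
    return path

def verify_m(root, leaf, path):
    # Verify_M(delta, x, pi_M): recompute the root from the leaf and its path.
    node = leaf
    for sib, sib_is_left in path:
        node = H(sib, node) if sib_is_left else H(node, sib)
    return node == root
```

The sibling path has length ℓ, which is where the O(log(n)) membership-proof cost comes from.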
3.2.3 ECDSA
Bitcoin currently relies on the Elliptic Curve Digital Signature Algorithm (ECDSA) with the secp256k1 curve [3]. ECDSA is a variant of the Digital Signature Algorithm (DSA) that uses elliptic curve cryptography. The required secp256k1 private keys have a length of 256 bits and can be transformed (deterministically) into the corresponding secp256k1 public keys. Additional details on ECDSA can be found in [3].
3.3 BITCOIN DATA TYPES
In this section, we introduce the main Bitcoin-specific data types.
Figure 3.1 A Merkle tree of depth 3, accumulating eight elements U_0, ..., U_7.
3.3.1 Scripts
Bitcoin introduced a custom non-Turing-complete scripting language in an attempt to support different types of transactions and extend the applicability of transactions beyond the simple transfer of funds. Scripts are stack-based, support a number of functions (commonly referred to as opcodes), and evaluate to either true or false. The language supports dozens of different opcodes, ranging from simple comparison opcodes to cryptographic hash functions and signature verification. Since scripts are supposed to be executed on any Bitcoin node, they could be abused to conduct denial-of-service attacks; therefore, a considerable number of opcodes have been temporarily disabled. This is one of the main reasons why scripts do not provide rich support when compared to standard programming languages. An example script program <signature> <publicKey> OP_CHECKSIG contains two constants (denoted by <...>) and one opcode (execution goes from left to right). Constants are pushed by default onto the stack, and upon execution, the stack would therefore contain <signature> <publicKey>. The next step is to execute OP_CHECKSIG, which verifies the <signature> under the provided <publicKey>. If the signature matches the provided public key, OP_CHECKSIG returns true, and in turn, the script outputs true. Otherwise, the script will output false.
3.3.2 Addresses

3.3.3 Transactions

3.3.3.1 Supported Transaction Types
Bitcoin supports a number of default transaction types. Typically, only supported transaction types are broadcasted and validated within the network. Transactions that do not match the standard transaction type are generally discarded. Note that because transactions can have multiple outputs, different output types can be combined within a single transaction.
Figure 3.2 An example of a transaction with a single input spending (w + x) BTCs to two output addresses (X and Y).
Pay To Public Key Hash (P2PKH) A P2PKH transaction output contains the following opcodes:

OP_DUP OP_HASH160 <PubkeyHash> OP_EQUALVERIFY OP_CHECKSIG

The corresponding input that would be eligible to spend the output specifies the required signature and the full public key as follows:

<Sig> <PubKey>

Pay To Script Hash (P2SH) A P2SH transaction output can only be redeemed by an input that provides a script that matches the hash of the corresponding
Figure 3.3 The input of transaction 2 points to the output of transaction 1.
Table 3.1 Transaction Format Within a Bitcoin Block

Field           | Description                                    | Size
Version number  | Version, currently 1                           | 4 bytes
Input counter   | Positive integer                               | 1–9 bytes
List of inputs  | See Table 3.2                                  | Variable
Output counter  | Positive integer                               | 1–9 bytes
List of outputs | See Table 3.2                                  | Variable
Locktime        | Block height or time when transaction is valid | 4 bytes
Table 3.2 Transaction Input Format

Field                             | Description      | Size
Previous transaction hash         | Dependency       | 32 bytes
Previous transaction output index | Dependency index | 4 bytes
Script length                     | —                | 1–9 bytes
ScriptSig (Signature script)      | Input script     | Variable
Sequence number                   | 0xFFFFFFFF       | 4 bytes
output. For example, a P2SH output contains the following output script:

OP_HASH160 <Hash160(redeemScript)> OP_EQUALVERIFY

The redeeming input consequently needs to provide a redeemScript that hashes to the hash specified in the output. Note that every standard script can be used for this purpose:

<sig> <redeemScript>

P2SH outputs are currently widely used in multisignature (multisig) transactions. Multisig A multisignature (commonly referred to as multisig) transaction requires multiple signatures in order to be redeemable. Multisig transaction outputs are usually denoted as m-of-n, m being the minimum number of signatures required for the transaction output to be redeemable, out of the n possible signatures that correspond to the public keys defined in the transaction output. An example transaction output is:
Table 3.3 Transaction Output Format

Field                        | Description                                    | Size
Value                        | Positive integer of Satoshis to be transferred | 4 bytes
Script length                | —                                              | 1–9 bytes
ScriptPubKey (Pubkey script) | Output script                                  | Variable
<m> <A pubkey> [B pubkey] [C pubkey...] <n> OP_CHECKMULTISIG

while the redeeming input follows this structure:

OP_0 <A signature> [B signature] [C signature...]

Note that P2SH allows an entity to create a transaction so that the responsibility for providing the redeem conditions is pushed from the sender to the redeemer of the funds. Consequently, the sender is not required to pay an excess in transaction fees if the redeem script happens to be complex. Multisignature transactions can be realized with either m-of-n output scripts or P2SH.

3.3.3.2 Script Execution
We now describe the process of script execution in Bitcoin. In order to validate a new transaction, the input (signature script) and the output of the former transaction (pubkey script) are concatenated. Once concatenated, the script is executed according to the scripting language. During execution, constants, denoted by <...>, are pushed onto the stack, and opcodes execute their respective actions by taking into account the topmost stack value. In Figure 3.4, we depict an example of the validation of transaction 2, which spends a former output of transaction 1. The output and input scripts are concatenated (signature script first and then the PubKey script). In a first step, the two constants <Sig> <PubKey> are pushed onto the stack. Subsequently, OP_DUP duplicates the topmost stack value, <PubKey> in this case. The next opcode, OP_HASH160, hashes the <PubKey> and saves it as <PubKeyHash> on the stack. Again, a constant <PubKeyHash> is pushed onto the stack, and OP_EQUALVERIFY verifies whether the two topmost stack elements are equal. If they are equal, they are removed from the stack, and the last opcode, OP_CHECKSIG, verifies whether the public key on the stack (<PubKey>) matches the signature (<Sig>). If the
Figure 3.4 Script execution for a P2PKH transaction.
signature is valid, the script returns true, meaning that the input of transaction 2 is allowed to spend output 1 of transaction 1.
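The P2PKH execution just walked through can be mimicked with a toy stack machine. Everything cryptographic here is a stand-in: hash160 uses plain SHA256 (real Bitcoin uses RIPEMD160 over SHA256, which is not always available in Python's hashlib), and checksig replaces ECDSA verification with a simple hash comparison.

```python
import hashlib

def hash160(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()  # stand-in for RIPEMD160(SHA256(x))

def checksig(sig: bytes, pubkey: bytes) -> bool:
    # Stand-in for ECDSA verification (see Section 3.2.3).
    return sig == hashlib.sha256(b"signed-by:" + pubkey).digest()

def run_script(tokens) -> bool:
    stack = []
    for tok in tokens:
        if isinstance(tok, bytes):                 # constants are pushed
            stack.append(tok)
        elif tok == "OP_DUP":
            stack.append(stack[-1])
        elif tok == "OP_HASH160":
            stack.append(hash160(stack.pop()))
        elif tok == "OP_EQUALVERIFY":
            if stack.pop() != stack.pop():
                return False                       # verification failed
        elif tok == "OP_CHECKSIG":
            pubkey, sig = stack.pop(), stack.pop()
            stack.append(checksig(sig, pubkey))
    return bool(stack) and stack[-1] is True

# Signature script of the spending input, then pubkey script of the spent output.
pubkey = b"alice-pubkey"
sig = hashlib.sha256(b"signed-by:" + pubkey).digest()
script = [sig, pubkey,
          "OP_DUP", "OP_HASH160", hash160(pubkey), "OP_EQUALVERIFY", "OP_CHECKSIG"]
```

Running the concatenated script replays exactly the steps of Figure 3.4: push, duplicate, hash, compare, and finally check the signature.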
3.3.3.3 Transaction Change and Fees
A transaction output is required to be spent in its entirety. In practice, however, the input coins rarely match the desired output amount exactly. Bitcoin addresses this problem by creating change outputs, to which the difference between the input coins and the desired payment amount is sent. Change outputs typically correspond to new randomly generated Bitcoin addresses whose private keys are retained by the current owner of the input coins. These addresses are often referred to as shadow addresses.
Note that the sum of BTCs issued to all transaction outputs cannot exceed the sum of BTCs of the transaction inputs. The sum of BTCs of the transaction outputs, however, can be smaller than the sum of BTCs of the transaction inputs. This difference is paid as a fee to the Bitcoin miner that includes the transaction within a block.
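The relation between inputs, payment, change, and the implicit fee can be summarized in a short helper; the amounts and address strings here are purely illustrative.

```python
def build_outputs(input_total: int, pay_amount: int, fee: int,
                  payee: str, change_addr: str):
    # The input must be consumed in its entirety: whatever is neither paid to
    # the payee nor left implicitly as a miner fee goes to a freshly generated
    # change (shadow) address controlled by the sender.
    change = input_total - pay_amount - fee
    if change < 0:
        raise ValueError("inputs do not cover payment plus fee")
    outputs = [(payee, pay_amount)]
    if change > 0:
        outputs.append((change_addr, change))
    return outputs
```

Since the fee is never listed explicitly in a transaction, it is simply whatever remains after all outputs are subtracted from the inputs.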
3.3.3.4 Locktime
All transactions contain a field nLockTime that specifies the earliest time or the earliest block within which the transaction can be confirmed. Once a time-locked transaction is broadcast in the network, miners can keep it in their list of transactions to be mined at a later stage. If the creator of the time-locked transaction changes his or her mind, he or she can create a new transaction that uses the same inputs (at least one overlapping input) as the time-locked transaction. The non-time-locked transaction would be confirmed immediately in a block, which would effectively make the time-locked transaction invalid. The locktime field is 4 bytes long and is interpreted in two ways: (1) if locktime is less than 500 million, it corresponds to a block height (the highest block number of the current main blockchain); (2) if locktime is greater than or equal to 500 million, the locktime field is parsed as a UNIX time stamp.
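This two-way interpretation can be expressed directly; the 500-million threshold is the value described above.

```python
LOCKTIME_THRESHOLD = 500_000_000

def interpret_locktime(n_lock_time: int):
    # Below the threshold, the field names a block height; at or above it,
    # a UNIX time stamp.
    if n_lock_time < LOCKTIME_THRESHOLD:
        return ("block_height", n_lock_time)
    return ("unix_timestamp", n_lock_time)
```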
3.3.4 Blocks

3.3.4.1
Bitcoin and Blockchain Security
Table 3.4 Bitcoin Block Header Format

Field                    Description                          Size
Version                  Block version number                 4 bytes
Hash of previous block   Hash of previous block header        32 bytes
Merkle root hash         Transaction Merkle root hash         32 bytes
Time                     Unix time stamp                      4 bytes
nBits                    Current difficulty of the network    4 bytes
Nonce                    Allows miners to search a block      4 bytes

3.3.4.2 Proof-of-Work
In order to achieve consensus among peers in the Bitcoin network, Bitcoin relies on the synchronous communication assumption along with a hash-based PoW concept. Here, peers have to prove that they have expended a certain amount of computation; peers that perform the proof-of-work are commonly referred to as miners. The more hashes a miner can perform, the more likely the miner is to find a block; thus, the ability to find blocks is proportional to the miner's hashing power. Bitcoin's particular proof-of-work mechanism requires the double SHA-256 hash of the block header content to fall below a target value. The difficulty of the mining process is adjusted dynamically in order to meet an average block generation time of 10 minutes. More specifically, to generate a block, miners must find a nonce value that, when hashed with additional fields (i.e., the Merkle hash of all valid and received
Figure 3.6 An example of a fork in the blockchain; gray blocks are called orphan blocks.
Table 3.5 Example of the Header of Block 364082 in Bitcoin

Hash: 00000000000000000bcd79fd8739a43205f4286e68a4d7bd3a83bcb0c7158d99
Previous block: 000000000000000009a4f7f94f2e7fc81e64182b0e2540b3cc91c89076f3da5b
Time: 2015-07-06 08:39:20
Difficulty: 49,402,014,931.23
Transactions: 436
Total BTC: 1,081.04970944 BTC
Size: 244.1259765625 KB
Merkle root: 2ade89c464e2f46e393a292e474d391f6055d8f19486a98930775b8926f43934
Nonce: 1386491545
transactions, the hash of the previous block, and a time stamp), the result is below a given target value. If such a nonce is found, miners then include it (as well as the additional fields) in a new block, thus allowing any entity to verify the PoW. Upon successfully generating a block, a miner is granted a number of BTCs (currently 12.5 BTCs). These BTCs are awarded by means of a coinbase transaction that transfers the generated BTCs onto a newly created Bitcoin address controlled by the miner. These mechanisms offer a strong incentive for miners to commit their computational resources in order to support the Bitcoin system and continuously confirm transactions. An example of a Bitcoin block is shown in Tables 3.5 and 3.6. Note that Bitcoin adopts a limited supply strategy. That is, Bitcoin defines the rate at which currency will be generated. For example, in 2009, each miner
Table 3.6 Block 364082 in the Main Blockchain.2

Tx Hash: 69f45d539f9d2744116c43a4fd157d54b11f092248db2cfe0db36baccd6d3fe5
Source & Amount: Generation
Recipient & Amount: 1KFHE7w8BhaENAswwryaoccDb6qcT6DbYY: 25.06175045 BTC

Tx Hash: ...
Source & Amount: ...
Recipient & Amount: ...

Tx Hash: 3ad5d3f813168ac2246c3e80ff6d9279023c30a661a41e1fa522e82dce608d03
Source & Amount: 16C1bgMsxPyfJVDPkv367ajXjgpjkiUxUA: 0.23689801 BTC
Recipient & Amount: 1Gz9aQkk61r5VSD1WvnoyVgSfFafcziD8N: 0.23689801 BTC

Tx Hash: 69a9421b77e2ec609b98adadd29da98dc1fa4d16e2fc75acd6637ddf7bbc069a
Source & Amount: 1FxddVRcF7tttvwc1cyaLUUeosXSoiMVFE: 0.74550674 BTC
Recipient & Amount: 1CdV9rovEYUJkGEkejWY5MbmqPSTy1E4Rk: 0.74550674 BTC

Tx Hash: ...
Source & Amount: ...
Recipient & Amount: ...
was awarded 50 new BTCs upon generating a Bitcoin block. This amount is halved approximately every 4 years until the generation of BTCs in the system depletes. Once a block is generated, it is broadcast in the entire network. Any entity that receives the block can verify the correctness of the PoW by computing the hash over the announced block fields, checking the correctness of the transactions included within, and verifying that the hash is below the target difficulty. Note that the Bitcoin network has a global block difficulty, which is updated every 2016 blocks. In essence, the difficulty is adjusted depending on the generation time of the last 2016 blocks in the network. That is, if the last 2016 blocks took more than 14 days to compute (i.e., more than 10 minutes per block on average), then the difficulty is reduced. Otherwise, if they took less than 14 days of computation, then the network difficulty is increased. To generate a block, miners work on constructing a PoW. In particular, given the set of transactions that have been announced since the last block's generation, and the hash of the last block, Bitcoin miners need to find a nonce such that:

SHAd256{Bl_l || MR(TR_1, . . . , TR_n) || No} ≤ target,     (3.1)

2. The block contains a total of 436 transactions; here, we only show 3 transactions confirmed in the block.
where SHAd256 is the SHA-256 algorithm applied twice, Bl_l denotes the last generated block, MR(x) denotes the root of the Merkle tree with elements x, TR_1, . . . , TR_n is the set of transactions that have been chosen by the miners to be included in the block,3 No is the 32-bit nonce, and target is a 256-bit number. To generate the PoW, each miner chooses a particular subset of the candidate solutions' space and performs a brute-force search. It is apparent that the bigger the target is, the easier it is to find a nonce that satisfies the PoW. The resulting block is forwarded to all peers in the network, who can then check its correctness by verifying the hash computation. If the block is deemed to be valid,4 then the peers append it to their previously accepted blocks. Since each block links to the previously generated block, the Bitcoin blockchain grows upon the generation of a new block in the network. As mentioned earlier, when miners do not share the same view in the network (e.g., due to network partitioning), they might work on different blockchains, thus resulting in forks in the blockchain. Block forks are inherently resolved by the Bitcoin system; the longest blockchain will eventually prevail. Transactions that do not appear in blocks that are part of the main blockchain (i.e., the longest) will be re-added to the pool of transactions in the system and reconfirmed in subsequent blocks. Currently, in the Bitcoin system, a transaction can be redeemed by the payee if it has received at least six confirmations; that is, there are five new blocks that build on the block which confirms it. This mechanism offers an inherent protection against double-spending attacks since it is computationally infeasible for an adversary to change the history of a transaction that has been confirmed by six blocks in the system. This process is exemplified in Figure 3.7.
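For illustration, the brute-force nonce search of (3.1) can be sketched in Python. The target used here (2^240) is vastly easier than Bitcoin's real difficulty, and the byte string passed as a header prefix is merely a stand-in for the serialized header fields; a real miner hashes the 80-byte header of Table 3.4.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """SHA-256 applied twice, as in Bitcoin's proof-of-work."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_prefix: bytes, target: int, max_nonce: int = 2**32):
    """Brute-force a nonce so that SHAd256(prefix || nonce) <= target."""
    for nonce in range(max_nonce):
        digest = sha256d(header_prefix + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "big") <= target:
            return nonce, digest
    return None  # nonce space exhausted; a real miner would alter the header

# Artificially easy target: roughly one hash in 2^16 succeeds.
easy_target = 2**240
nonce, digest = mine(b"prev-hash||merkle-root||timestamp", easy_target)
print(nonce, digest.hex())
```

Verification by the receiving peers is the cheap direction: a single call to sha256d over the announced header suffices to check the PoW.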
3.4 BITCOIN ARCHITECTURE
The Bitcoin ecosystem emerged out of the need to provide services to different nodes depending on their available resources. In the following, we describe the different node types in Bitcoin and how they interoperate in the network.
3. These transactions are chosen from the transactions that have been announced (and not yet confirmed) since Bl_l's generation.
4. That is, the block contains correctly formed transactions that have not been previously spent, and the block has a correct PoW.
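The limited-supply schedule mentioned in the proof-of-work discussion (50 new BTCs in 2009, halved approximately every 4 years) can be captured by a short function. The 210,000-block halving interval is the protocol's actual parameter; the function name and satoshi units are our own choices for this sketch.

```python
INITIAL_SUBSIDY = 50 * 100_000_000  # 50 BTC in satoshis (the 2009 reward)
HALVING_INTERVAL = 210_000          # blocks, roughly four years

def block_subsidy(height: int) -> int:
    """Mining reward (in satoshis) for a block at the given height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:              # right shift would zero out the subsidy
        return 0
    return INITIAL_SUBSIDY >> halvings

print(block_subsidy(0) / 1e8)        # 50.0
print(block_subsidy(420_000) / 1e8)  # 12.5, matching the current reward
```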
Figure 3.7 Confirming transactions in Bitcoin.
3.4.1 Node Types
Due to the heterogeneity of nodes in the Bitcoin ecosystem, multiple node types are supported in the system.
3.4.1.1 Miner
Miners perform the proof-of-work in order to find and broadcast blocks in the Bitcoin network. Their operation consists mainly of quickly retrieving information about the newest blocks and validating transactions that are included in new blocks. Miners typically operate dedicated mining hardware to perform as many hash operations as possible, and can use dedicated communication links to efficiently spread found blocks to the whole network. Note that the term "mining" originates from the traditional process of acquiring scarce/precious material (e.g., gold), hence the analogy to Bitcoin mining. As mentioned earlier, every discovered Bitcoin block provides a monetary reward to the miner. Miners typically organize themselves in groups, commonly referred to as mining pools. Because of its higher collective hashing power, a mining pool has a higher probability of finding a block, and mining pool members can consequently receive larger payouts than individual miners. We discuss the impact of mining on the security of Bitcoin in Chapter 4.
3.4.1.2 Full Node
We define a full Bitcoin node as a node that (1) maintains a full copy of the blockchain, (2) validates all incoming transactions and blocks, and (3) forwards transactions and blocks to its peers. In addition to providing validation services to the Bitcoin network, a full node might provide an open TCP port (Bitcoin uses TCP port 8333) to which other Bitcoin peers connect. Throughout the rest of this book, we use the terms "full node" and "regular node" interchangeably.
3.4.1.3 Lightweight Clients
Lightweight clients are clients that neither store nor maintain the full Bitcoin blockchain, but instead follow a simple payment verification (SPV) scheme. This scheme allows a lightweight client to verify that a transaction has been included in the blockchain by receiving and verifying only the block headers. In addition, lightweight clients receive only transactions that are relevant to their wallets and do not need to perform transaction or block validation. As a result, lightweight clients require significantly fewer resources to operate than full nodes or miners. In Chapter 6, we discuss the operation of lightweight clients and detail the SPV mode.
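The header-only inclusion check at the heart of SPV can be illustrated with a toy Merkle-branch verifier. The sketch uses our own function names and deliberately ignores the reference implementation's byte-order conventions and its rule for duplicating the last hash of an odd-sized level.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_parent(left: bytes, right: bytes) -> bytes:
    return sha256d(left + right)

def verify_inclusion(tx_hash, branch, root):
    """branch: (sibling_hash, sibling_is_right) pairs, ordered leaf to root."""
    h = tx_hash
    for sibling, sibling_is_right in branch:
        h = merkle_parent(h, sibling) if sibling_is_right else merkle_parent(sibling, h)
    return h == root

# A two-transaction block: the root in the header commits to both leaves.
tx0, tx1 = sha256d(b"tx0"), sha256d(b"tx1")
root = merkle_parent(tx0, tx1)
print(verify_inclusion(tx0, [(tx1, True)], root))   # True
print(verify_inclusion(tx1, [(tx0, False)], root))  # True
```

The client thus needs only the 32-byte Merkle root from the header plus a logarithmic-size branch, rather than the full block.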
3.4.2 Peer

3.4.2.1
More specifically, the peers check if each of the addresses advertised is fresh (using the time stamp field), and if so, they forward it to two neighbors of their choice.6
5. The current reference implementation of Bitcoin nodes recognizes three types of addresses: IPv4, IPv6, and OnionCat addresses.
6. Here, we assume that the receiving peer is a reachable address.
3.4.2.2 Peer

3.4.2.3
the difficulty of the block headers, and subsequently to download in parallel the actual block content once the main blockchain has been determined.
3.4.2.4 Dedicated Relay Networks

3.4.2.5 Alert Mechanism
3.5 SCALABILITY MEASURES IN BITCOIN
At the time of writing, almost one transaction per second (tps) [6] is executed in Bitcoin; this results in an average block size of almost 400 KB. The maximum block size is currently limited to 1 MB, which corresponds to fewer than seven transactions per second. Given the increasing adoption of Bitcoin, the number of transactions and the block sizes are only expected to increase. For example, if Bitcoin were to handle 1% of the transactional volume of Visa,7 then Bitcoin would need to scale to accommodate almost 500 tps, which would require a large amount of information to be broadcast in the network. Motivated by these factors, the current Bitcoin protocol implements various bandwidth optimizations and measures in order to sustain its scalability and correct operation in spite of ever-increasing use. In what follows, we detail the existing measures taken by Bitcoin developers.
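The capacity figures quoted above follow from simple arithmetic. The average transaction size assumed below is a rough figure of ours (transactions vary widely in size); the block interval and size limit are taken from the text.

```python
BLOCK_INTERVAL = 600        # seconds (10-minute average block time)
MAX_BLOCK_SIZE = 1_000_000  # bytes (the 1 MB limit)
AVG_TX_SIZE = 250           # bytes; a rough assumption

max_tps = MAX_BLOCK_SIZE / AVG_TX_SIZE / BLOCK_INTERVAL
print(round(max_tps, 2))    # 6.67, i.e., "fewer than seven tps"

visa_peak = 47_000                    # tps [7]
target_tps = 0.01 * visa_peak         # 1% of Visa's peak, close to the 500 tps cited
print(round(target_tps / max_tps, 1)) # blocks would need to grow by roughly this factor
```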
3.5.1 Request Management System
Bitcoin minimizes the information spread in the network by means of an advertisement-based request management system. More specifically, if node A receives information about a new Bitcoin object (e.g., a transaction or a block) from another node, A will advertise this object to its other connections (e.g., node V) by sending them an inv message. These messages are much smaller in size than the actual objects, because they only contain the hash and the type of the object that is advertised. Only if node V has not previously received the object advertised by the inv message will V request the
7. Currently, the Visa network is designed to handle peak volumes of 47,000 tps [7].
Figure 3.8 Propagation mechanism for blocks and transactions.
object from A with a getdata request. Following the Bitcoin protocol, node A will subsequently respond with the Bitcoin object (e.g., the contents of a transaction or a block). By doing so, inventory messages limit the amount of data broadcast in the network. Note that an object is propagated only following the reception of the corresponding inv message. This process is summarized in Figure 3.8. To minimize bandwidth consumption, Bitcoin nodes request a given object only from a single peer, typically the peer that first advertises the object. Requesting the same object from multiple peers entails downloading the same data several times and would only increase the bandwidth consumption of the system. As in the case of address discovery protocols, when a client generates a transaction, the client schedules it for forwarding to all of its neighbors. In particular, the client computes a hash of a value composed of the transaction hash and a secret salt. If the computed hash has the last two bits set to zero, the transaction is forwarded
Bitcoin Protocol Specification
55
immediately to all of the eight entry nodes. Otherwise, the transaction is queued for announcement as described before for addr messages: the neighbor receives the transaction whenever it is selected as a trickle node. To avoid flooding the network with unnecessary messages, and similarly to addr messages, a Bitcoin peer maintains a history of all forwarded transactions for each connection. If a transaction was already sent over a connection, it will not be resent. In addition, a Bitcoin peer keeps all received transactions in a memory pool, such that if the peer receives a transaction with the same hash as one in the pool or in a block in the main blockchain, the received transaction will be ignored.
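The trickling decision described above can be sketched as follows. The hash function, the salt length, and the choice of which two bits to test are our assumptions; the text only specifies a salted hash whose last two bits must be zero, which selects roughly a quarter of transactions for immediate forwarding.

```python
import hashlib
import os

SALT = os.urandom(8)  # per-node secret salt; the length is our assumption

def forward_immediately(tx_hash: bytes) -> bool:
    """Forward at once iff the salted hash has its last two bits set to zero."""
    digest = hashlib.sha256(tx_hash + SALT).digest()
    return digest[-1] & 0b11 == 0  # about one transaction in four

# Roughly a quarter of a sample of transactions is forwarded immediately.
sample = [hashlib.sha256(bytes([i])).digest() for i in range(256)]
immediate = sum(forward_immediately(h) for h in sample)
print(immediate, "of", len(sample))
```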
3.5.2 Static Time-outs
Bitcoin relies on static time-outs in order to prevent blocking while tolerating network outages, congestion, and slow connections. Blocking can occur, for example, when a node stops responding during communication. Given that Bitcoin runs atop an overlay network, communication latency and reliability pose a major challenge to the correct operation of the system. For example, in Bitcoin version 0.10, the Bitcoin developers introduced a block download time-out of 20 minutes.8 Similarly, for transactions, the Bitcoin developers introduced a 2-minute time-out. Note that the choice of the time-out is a nontrivial task and depends on a number of parameters such as bandwidth, object size, latency, processing power, and the Bitcoin version of each node. On the one hand, overly long time-outs might deteriorate the quality of service of the whole network and can be abused to conduct, for example, double-spending attacks [8]. On the other hand, short time-outs might hinder effective communication under varying network conditions or when communicating with slow peers.
3.5.3 Recording Transaction Advertisements
Bitcoin clients keep track of the order of the received transaction advertisements. If a request for a given transaction is not answered, the next peer in the list is queried. When a transaction T is advertised via an inv message to a given node, the latter keeps track of the order of announcements with a first-in first-out (FIFO) buffer. Each time a peer advertises T, the peer is inserted into the buffer. Transaction T is only requested from one peer at a time. For each entry in the buffer, Bitcoin
8. Available from.
clients maintain a 2-minute time-out, after which the next entry in the buffer is queried for T.
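This per-transaction bookkeeping can be sketched as a small Python class. The class and method names are ours, chosen for illustration; only the FIFO order and the 2-minute time-out come from the text.

```python
import collections

REQUEST_TIMEOUT = 120  # seconds: the 2-minute per-entry time-out

class TxRequestTracker:
    """Track peers that advertised a transaction (FIFO) and whom to query next."""

    def __init__(self):
        self.announcers = collections.deque()
        self.asked_at = None

    def on_inv(self, peer):
        self.announcers.append(peer)  # remember the announcement order

    def next_peer(self, now):
        # (Re)query only if no request is pending or the last one timed out.
        if self.asked_at is None or now - self.asked_at >= REQUEST_TIMEOUT:
            if self.announcers:
                self.asked_at = now
                return self.announcers.popleft()
        return None

t = TxRequestTracker()
t.on_inv("peer-A")
t.on_inv("peer-B")
print(t.next_peer(now=0))    # peer-A is queried first
print(t.next_peer(now=60))   # None: still within the 2-minute window
print(t.next_peer(now=120))  # peer-B, after the time-out expires
```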
3.5.4 Internal Reputation Management System
Bitcoin combats the broadcasting of ill-formed blocks and transactions by maintaining an internal reputation management system. Namely, whenever a node receives objects (e.g., blocks, transactions), it checks their correctness before forwarding them to other peers in the network. First, objects are validated based on their respective syntax and size (e.g., oversized objects are discarded). If this verification passes, the contents of the objects are subsequently validated. For transactions, this includes verifying the signature and the input and output coins used in the transaction; similarly, the PoW included in block headers is verified with respect to the current network difficulty. To prevent any abuse of the Bitcoin overlay network (e.g., denial-of-service attacks), a receiving node locally assigns a penalty to peers who broadcast ill-formed objects. Once a peer has accumulated 100 penalty points, the receiving node disconnects from the misbehaving peer for 24 hours. For example, if a node broadcasts invalid alerts, it is given 10 penalty points. Nodes that attempt more serious misbehavior, such as inserting invalid transaction signatures, are immediately assigned 100 points, and are therefore directly banned. Penalties also apply to ill-formed control messages such as inv (inventory) or addr commands. Note that locally assigned penalties are not transmitted to other peers.
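A minimal sketch of this local score keeping follows. The threshold and ban duration come from the text; the class itself is our illustration, not the reference implementation's code.

```python
BAN_THRESHOLD = 100      # penalty points before disconnecting a peer
BAN_SECONDS = 24 * 3600  # 24-hour ban

class PeerReputation:
    """Locally assigned penalties; scores are never shared with other peers."""

    def __init__(self):
        self.score = {}         # peer -> accumulated penalty points
        self.banned_until = {}  # peer -> end of ban (seconds)

    def misbehaving(self, peer, points, now):
        self.score[peer] = self.score.get(peer, 0) + points
        if self.score[peer] >= BAN_THRESHOLD:
            self.banned_until[peer] = now + BAN_SECONDS

    def is_banned(self, peer, now):
        return self.banned_until.get(peer, 0) > now

rep = PeerReputation()
for _ in range(10):                  # ten invalid alerts at 10 points each
    rep.misbehaving("peer-X", 10, now=0)
print(rep.is_banned("peer-X", now=0))          # True
rep.misbehaving("peer-Y", 100, now=0)          # invalid signature: instant ban
print(rep.is_banned("peer-Y", now=0))          # True
print(rep.is_banned("peer-Y", now=25 * 3600))  # False: the ban has expired
```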
References

[1] Technical background of version 1 Bitcoin addresses, 2013. Available from https://en.bitcoin.it/wiki/Technical_background_of_version_1_Bitcoin_addresses.

[2] Ghassan O. Karame, Elli Androulaki, and Srdjan Capkun. Double-spending fast payments in Bitcoin. In Proceedings of the 2012 ACM Conference on Computer and Communications Security, CCS '12, pages 906–917, New York, 2012. ACM.

[3] D. Brown. SEC 2: Recommended Elliptic Curve Domain Parameters, 2010.

[4] Matt Corallo. Bitcoin Relay Network.

[5] Bitcointalk Forum.

[6] Bitcoin Wiki.
[7] Stress Test Prepares VisaNet for the Most Wonderful Time of the Year, 2015.

[8] Ghassan O. Karame, Elli Androulaki, Marc Roeschlin, Arthur Gervais, and Srdjan Čapkun. Misbehavior in Bitcoin: A Study of Double-Spending and Accountability. ACM Trans. Inf. Syst. Secur., 18(1):2:1–2:32, May 2015.
Chapter 4

Security of Transactions in Bitcoin
4.1
4.1.1 Transaction Verification
Since each payment references the last transactions where each of the coins has been spent, coin expenditure can be easily traced in the network, as shown in Figure 4.2. Moreover, since all transactions are broadcast in the entire network, all peers in the network can verify their correctness. Namely, whenever a peer in the network (including the payee) receives a transaction, it checks its signature, format, the correctness of its fields, and that the sum of the coins referenced by the inputs matches that of the outputs (and the fees1). Additionally:

• Each peer verifies that the input coins have not been spent earlier by checking against the history of all executed transactions in the network.

• Each peer verifies that the input coins refer to correct transactions that have already been confirmed in the network.

By doing so, Bitcoin prevents the double-spending of coins in the system. This is mainly achieved since the details and order of all transactions are publicly
1. As described later, fees are collected by miners who include the transaction in their newly generated block.
Security of Transactions in Bitcoin
Figure 4.2 Coin expenditure in the Bitcoin network. Here, we show an example comprising four transactions spending a coin from Owner 0 to Owner 1, Owner 2, Owner 3, and Owner 4.
announced in the system, and since the communication in Bitcoin is synchronous. All verified transactions are included temporarily in the peer's memory pool until they are confirmed by the network (see Section 4.1.4). A study in [1] has shown that transactions are propagated in the network in a few seconds. Recently, several attacks [2, 3] were reported on the delivery of transactions and blocks in Bitcoin. In what follows, we discuss these attacks in greater detail.
4.1.2 Eclipse Attacks in Bitcoin
Recently, Heilman et al. showed how to attack the Bitcoin mining protocol by monopolizing the connections of nodes in the system [2]. For example, the attacker could have a large number of IP addresses at his or her disposal and control a large number of machines (e.g., a botnet). Alternatively, the adversary might be an Internet service provider (ISP) or a nation-state adversary. On the other hand, the victim node is assumed to have a public IP address; that is, the victim is not located behind network address translators (NATs). Finally, the attack requires the ability of the adversary to cause a restart of the victim's Bitcoin client (e.g., by means of distributed denial-of-service attacks or when the victim upgrades his or her Bitcoin client). The intuition behind eclipse attacks is straightforward. Eclipsing entails blinding the view of the victim from the blockchain and requires that the adversary is able
62
Bitcoin and Blockchain Security
to isolate the victim from the rest of the network by monopolizing all of the victim's outgoing and incoming connections. In [2], this is achieved by exploiting the way in which Bitcoin clients store the IP addresses that are advertised in the network. Namely, in Bitcoin, peers exchange addr messages that contain IP addresses and their time stamps. These messages are used by nodes to obtain network information from peers. Public IPs are stored at each node in two tables: the tried and new tables. The tried table consists of 64 buckets; each can store 64 unique addresses from peers with whom the node has established communication before. The node also keeps the time stamp of each tried IP address. When the node connects to a new peer, the peer's address and time stamp are added to the tried table. If the bucket is full, the new address is inserted at a random location in the bucket and replaces the address that was stored there. Similarly, the new table contains 256 buckets, each containing up to 64 addresses. Since each Bitcoin node can have up to eight outgoing connections and 117 incoming connections, the node selects the IP addresses to connect to from tried with probability ρ, and otherwise selects with probability 1 − ρ the address to connect to from the new table; note that node selection is biased toward IP addresses that have a recent time stamp. Therefore, for the adversary to monopolize the connections of the victim, the adversary has to populate the tried table with addresses that are under his or her control. Heilman et al. suggest populating the new table with bogus IP addresses (e.g., nonexisting IP addresses). Later on, when the victim restarts, it will attempt to connect to addresses from the tried and new tables; all of the addresses that the victim connects to are then guaranteed to be under the control of the adversary. Real experiments in the Bitcoin network show that eclipse attacks succeed with a probability of 84%.
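The table mechanics described above can be sketched as follows. The bucket-selection hash, the value of ρ, and the fallback behavior are simplifications of ours, and the sketch omits the time-stamp bias mentioned above; it only illustrates why flooding the tried table lets an attacker dominate outgoing connections.

```python
import random

RHO = 0.5  # probability of selecting from "tried"; the value is illustrative

def insert_tried(tried_buckets, addr, bucket_size=64):
    """Insert into a bucket; when full, overwrite a random slot (as in [2])."""
    bucket = tried_buckets[hash(addr) % len(tried_buckets)]
    if len(bucket) < bucket_size:
        bucket.append(addr)
    else:
        bucket[random.randrange(bucket_size)] = addr

def pick_peer(tried_buckets, new_buckets):
    """Pick an outgoing connection from tried with probability RHO, else new."""
    primary, fallback = ((tried_buckets, new_buckets)
                         if random.random() < RHO else
                         (new_buckets, tried_buckets))
    for table in (primary, fallback):
        candidates = [a for bucket in table for a in bucket]
        if candidates:
            return random.choice(candidates)
    return None

tried = [[] for _ in range(64)]  # 64 buckets of up to 64 addresses
new = [[] for _ in range(256)]   # 256 buckets of up to 64 addresses
for i in range(5000):            # the attacker floods addresses it controls
    insert_tried(tried, f"attacker-{i}")
print(pick_peer(tried, new))     # some attacker-controlled address
```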
4.1.2.1 Implications
The aforementioned attack can have serious implications on the Bitcoin network. Implication 1: The adversary can increase its advantage in selfish mining (see Section 4.1.4.1) by splitting the mining power of the honest nodes. Implication 2: The adversary can double-spend transactions even if these transactions are confirmed by six consecutive blocks. For example, the victim can be a merchant whom the adversary simply pays; the adversary then eclipses the miners working on confirming this transaction and issues a double-spending transaction to uneclipsed miners. Since the blocks found by eclipsed miners will eventually become obsolete, this attack is likely to succeed.
4.1.2.2 Countermeasures
Heilman et al. suggest a number of countermeasures to thwart this attack: Countermeasure 1: One possible hardening technique is to ensure that the same address always hashes to the same bucket and the same location in the tried table. By doing so, one can prevent the adversary from reusing the same address more than once to fill the tried table. Countermeasure 2: Another countermeasure would be to simply avoid any bias toward choosing addresses that are recent. Currently, there is a bias toward choosing recently time-stamped addresses, which increases the probability of connecting to the adversary's addresses. Countermeasure 3: Another basic countermeasure would be to ensure that an IP address exists (e.g., by attempting to ping/connect to it) before overwriting an older address in the tried and new tables. Countermeasure 4: One possible countermeasure would be to simply add new buckets, which would harden the realization of such attacks. Countermeasures 1, 2, and 4 have been integrated in the official Bitcoin client v0.10.1. Note that in [2], the adversary needs to have almost 5,120 IP addresses at his or her disposal to eclipse a victim. Moreover, the adversary would need clients to restart. Recently, Gervais et al. [4] have, however, shown that even resource-constrained adversaries can perform similar eclipse attacks without requiring any node restart. Namely, the authors show that the adversary can abuse existing scalability measures adopted in Bitcoin in order to deny information about transactions to Bitcoin nodes for a considerable amount of time. In what follows, we sketch out this attack and outline a number of countermeasures.
4.1.3 Denying the Delivery of Transactions
4.1.3.1 Possible Countermeasures
Little can be done to thwart this attack. For instance, even if nodes limit the number of connections they accept (since the attack requires a direct connection to the victim), it is hard for nodes to ensure that all their current connections are trustworthy. Moreover, nodes can try to filter the received inv messages by IP, or can randomly (instead of sequentially) query the next peer after a time-out has occurred. However, an adversary that has several nodes at his or her disposal can easily thwart these countermeasures and flood the victim with inv messages corresponding to the desired transaction from a large number of nodes. Even if nodes randomly select the peer to query from the advertisers' list, the probability of consistently selecting the adversary can be considerable, depending on the number of nodes controlled by the adversary. This shows the limits of synchrony in Bitcoin and motivates the need for a redesign of Bitcoin's object request management system.
Figure 4.3 Transaction advertisement management system in Bitcoin.
4.1.4 Transaction Confirmation
In Section 4.1.1, we described the transaction verification process in Bitcoin. Note that this process cannot by itself guarantee the security of transactions, since a powerful adversary may try to modify the history of transactions that occurred in the system (e.g., in order to double-spend or to increase its advantage in the system). To this end, Bitcoin relies on the hash-based PoW mechanism in order to (computationally) prevent any entity from modifying the history (and order) of the transactions executed within the system. As mentioned in Chapter 3, to generate a block, miners must find a nonce value that, when hashed with the Merkle hash of all valid and received transactions included in their memory pool, the hash of the previous block, and a time stamp, yields a result below a given target difficulty. Since each block links to the previously generated block, the Bitcoin blockchain grows upon the generation of a new block in the network. In this way, blocks confirm Bitcoin transactions and commit them in the system. Namely, if any entity wants to modify the transactions executed in the system, then it not only has to redo all the work required to compute the block in which that transaction was included, but also has to recompute all the subsequent blocks in the chain. That is, the older
a Bitcoin transaction is, and thus the deeper it is included in the blockchain, the harder it becomes to modify the transaction.
4.1.4.1 Selfish Mining
The original white paper of Bitcoin [5] claimed that the security of transactions in the system can be guaranteed as long as more than 50% of the network miners are honest. The main intuition here is that, in case of conflict or fork in the blockchain, honest peers will adopt the longest Bitcoin chain, which is backed by the majority of the computing power in the system. As long as honest peers control the majority of computing power in the system (i.e., they control more than 50% of the hash rate), they can sustain the prolongation of the longest chain and ensure that only valid transactions are confirmed in this chain. On the other hand, an adversary that controls more than 50% of the computing power in the system can, in theory, double-spend transactions, prevent transactions from being confirmed, prevent honest miners from mining valid blocks, and so on. This clearly invalidates the entire security of Bitcoin. Eyal and Sirer [6] have shown that this limit can be considerably reduced. Namely, the authors showed that selfish miners that command more than 33% of the total computing power of the network can acquire a considerable mining advantage in the network. In [7], Sapirshtein et al. extended these results and provided even lower bounds on the computational power an attacker needs in order to benefit from selfish mining. Namely, in the selfish mining strategy of [6], a selfish miner does not directly announce its newly mined blocks in the network and instead keeps them secret until the remaining network finds new blocks. This strategy aims at wasting the computing power invested by other honest miners in the system; these miners will be investing their computing power toward building a block that is unlikely to be part of the longest chain. To deter this misbehavior, Eyal and Sirer propose the following countermeasure: when a miner is aware of two competing blocks, the miner should propagate both blocks and select a random block to mine on.
Recent analysis has shown that this countermeasure can be easily circumvented by the adversary. For instance, it was recently shown in [4, 8] that a resource-constrained adversary can deny the delivery of blocks in the system for a considerable amount of time. Namely, by exploiting the object request management system of Bitcoin as described in Section 4.1.3, a resource-constrained adversary can prevent the delivery of blocks for at least 20 minutes (since the time-out for block reception in the request management system of Bitcoin is 20 minutes). By doing so,
an adversary can subvert the aforementioned countermeasure proposed by Eyal and Sirer against selfish mining [4]. Several other recent works examined the game-theoretic consequences of attacks and cooperation between pools. For instance, Eyal [9] has shown that pools can gain additional advantage in the network by infiltrating other pools. Namely, by registering with the victim pool, the attacking pool will receive tasks and transfer them to some of its own miners. Although the attacker's mining power is reduced, since some of its miners are used for block withholding, the attacker earns additional revenue by infiltrating the other pool, which might increase the revenue of the attacker (and decrease the mining difficulty in the Bitcoin protocol). Even worse, recent results show that by combining the aforementioned mining attacks with network-level attacks, the adversary can considerably increase its advantage in the selfish mining game [4, 10]. For instance, the findings of Gervais et al. [4] suggest that an adversary who performs selfish mining and denies block delivery from other miners can acquire considerable advantage in the network if he or she commands more than 26.5% of the computing power. Moreover, the authors showed that an adversary that commands less than 34% of the computing power can effectively sustain the longest blockchain and therefore control the entire network.
Transaction Confirmation Time
In what follows, we analyze the transaction confirmation time in Bitcoin (adapted from [1, 11]). Recall that transactions are confirmed by means of a PoW, as shown in (3.1). We start by noting the following observations:

1. The probability of success in a single-nonce trial is negligible. Since SHA-256 is a pseudorandom permutation function, each of the 2^32 nonces has probability target/(2^256 − 1) of satisfying the PoW.
2. Miners compute their PoW independently; therefore, the success probability of one miner does not depend on the progress of others.
3. Miners frequently restart the generation of their PoW: whenever a new transaction is added to the memory pool of a miner, the Merkle root (included in the block) changes.
4. The time interval, dt, between the announcements of successive transactions is on the order of a few tens of seconds.
Bitcoin and Blockchain Security
Given the first two observations, the probability of a miner succeeding in a single block generation attempt can be modeled as an independent Bernoulli process with success probability ε = target/(2^256 − 1). Based on the last observation, consecutive block generation attempts can be modeled as sequential Bernoulli trials with replacement. This claim is justified by the fact that the PoW progress invested by a miner (expressed as a number of hash calculations) prior to a PoW reset is negligible in comparison to 2^256 − 1.²

Let n_i refer to the number of attempts that a miner m_i performs within a time period δ. The probability p_i of m_i finding at least one correct PoW within these trials is given by p_i = 1 − (1 − ε)^{n_i}. Since ε is small and n_i ε ≪ 1, p_i can be approximated as p_i = 1 − (1 − ε)^{n_i} ≈ n_i ε. Therefore, the set of trials of m_i within δ constitutes a single Bernoulli process with success probability n_i ε. Assuming that there are ℓ miners m_i, i = 1 . . . ℓ, with success probabilities p_i, i = 1 . . . ℓ, respectively, the overall probability of success in block generation can be approximated as:

pr ≈ 1 − ∏_{i=1}^{ℓ} (1 − p_i), or pr = 1 − (1 − p)^ℓ ≈ ℓ · p.

The latter holds when p ℓ ≪ 1 and when the miners have equal computing power (i.e., p_i = p, i = 1 . . . ℓ). We divide time into equal-sized intervals of size δ; let t_0 = 0 denote the time when the last block was generated. Here, each miner can make up to n_i trials for block generation within each interval. Let the random variable X_k denote the event of success in the time interval between t_{k−1} and t_k. That is,

X_k = 1 if a block is created between t_{k−1} and t_k, and X_k = 0 otherwise.

It is clear that Prob(X_k = 1) = pr. We denote by Y the number of attempts performed by miners until a success is achieved. Note that Y's values are distributed according to the geometric distribution model since:
Prob(Y = k) = Prob(X_k = 1) · ∏_{i=1}^{k−1} Prob(X_i = 0) = pr (1 − pr)^{k−1}.
² This is the case since the PoW progress is approximately 2^35 ≪ 2^256 − 1 given the computing power of most Bitcoin miners [12, 13].
Assuming a constant rate of trials per time window δ, the number of trials until a success is observed is proportional to the block generation time T:

Prob(T = k · δ) = Prob(Y = k) = pr (1 − pr)^{k−1}.

Given this, we conclude that the distribution of block generation times can be modeled with a shifted geometric distribution with parameter pr [14]. In Figure 4.4, we sketch this distribution using the parameters p = 0.19 and δ = 60 seconds, as advocated in [11]. Figure 4.4 shows that each block confirmation requires on average 10 minutes, with a standard deviation of almost 20 minutes.
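The shifted geometric model lends itself to direct simulation. The sketch below samples block generation times with illustrative parameters pr = 0.1 and δ = 60 seconds (assumed here so that the expected block time δ/pr equals the familiar 600 seconds; these are not the exact parameters fitted in [11]):

```python
import random

def sample_block_time(pr: float, delta: float, rng: random.Random) -> float:
    """Sample one block generation time from a shifted geometric model:
    independent per-interval trials with success probability pr are
    repeated until the first success, which occurs in interval k with
    probability pr * (1 - pr)**(k - 1); the block time is then k * delta."""
    k = 1
    while rng.random() >= pr:
        k += 1
    return k * delta

rng = random.Random(42)
pr, delta = 0.1, 60.0  # illustrative parameters (see text)
samples = [sample_block_time(pr, delta, rng) for _ in range(200_000)]
mean_time = sum(samples) / len(samples)  # expected value: delta / pr = 600 s
```

With these assumed parameters, the empirical mean of the sampled block times converges to δ/pr = 600 seconds, matching the 10-minute average discussed above.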
4.2 SECURITY OF ZERO-CONFIRMATION TRANSACTIONS
In the previous section, we discussed the security of standard transactions in Bitcoin. We showed that transaction confirmation relies on the PoW process, whose solution time follows a shifted geometric distribution. Our findings show that the time required to confirm transactions impedes the operation of many businesses that are characterized by fast service times. Namely, a client can wait up to 100 minutes before his or her payment receives the six confirmations required to validate standard payments in Bitcoin. It is also clear that vendors, such as vending machines and takeout stores [15], cannot rely on transaction confirmation when accepting Bitcoin payments. To remedy these problems, Bitcoin encourages vendors to accept fast Bitcoin payments with zero confirmations (i.e., without requiring that these transactions are confirmed in blocks), as soon as the vendor receives a transaction from the network transferring the correct amount of BTCs to one of its addresses [15, 16]. In what follows, we show that zero-confirmation transactions are insecure. We also outline a countermeasure to strengthen the security of zero-confirmation transactions.

4.2.1 (In-)Security of Zero-Confirmation Transactions
In what follows, we show that double-spending attacks are easily realizable on zero-confirmation transactions. For that purpose, we assume a setting featuring a malicious client A and a vendor V connected through the Bitcoin network (see Figure 4.5). We assume that A wishes to acquire a service from V without having to spend his or her BTCs. More
Figure 4.4 Cumulative distribution function (CDF) of block generation times. Approximately 30% of Bitcoin blocks take between 10 and 40 minutes to be generated.
specifically, A could try to double-spend the coin that he or she already transferred to V. By double-spending, we refer to the case where A tricks the vendor V into accepting a transaction TRV that V will not be able to redeem subsequently. In this case, A creates another transaction TRA that has the same inputs as TRV (i.e., TRA and TRV spend the same BTCs) but replaces the recipient address of TRV, the address of V, with a recipient address that is under the control of A. Figure 4.6 shows an example of transactions TRV and TRA. We assume that A can only control a few peers in the network (which he or she can deploy, since Bitcoin does not restrict membership) and does not have access to V's keys or machine. The remaining peers in the network are assumed to be honest and to correctly follow the Bitcoin protocol. We further assume that A does not participate in the block generation process. Given this, we outline the necessary conditions for A's success in performing a double-spending attack on zero-confirmation transactions.
Figure 4.5 Our system model.
Figure 4.6 Example of two transactions involving double-spending coins labeled Input 1 and Input 2.
Requirement 1: TRV is added to the wallet of V

If TRV is not added to the memory pool of V, then V cannot check that TRV was indeed broadcast in the network. Let t^V_i and t^A_i denote the times at which node i receives TRV and TRA, respectively. As such, t^V_V and t^A_V denote the respective times at which V receives TRV and TRA. Note that for TRV to be included in V's wallet, we need t^V_V < t^A_V; otherwise, V will first add TRA to its memory pool and will reject TRV as it arrives later. This requirement can be easily satisfied if A (or one of its helper nodes) connects directly to V and sends TRV to V first, before forwarding the double-spending transaction TRA to the rest of the network. In this way, V will immediately receive TRV and add it to its memory pool before receiving TRA.
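The first-seen rule exploited by Requirement 1 can be sketched with a minimal model of a peer's memory pool (the structures below are hypothetical; real clients track unspent outputs rather than the simple mapping used here):

```python
class Mempool:
    """Toy model of Bitcoin's first-seen rule: a transaction is accepted
    only if none of its inputs is already spent by an accepted one."""

    def __init__(self):
        self.spent = {}  # input reference (txid, index) -> accepted tx name

    def accept(self, name, inputs):
        if any(i in self.spent for i in inputs):
            return False  # conflicts with an earlier transaction: rejected
        for i in inputs:
            self.spent[i] = name
        return True

# A sends TRV directly to V first; the double-spend TRA (same inputs)
# arrives later and is rejected, satisfying Requirement 1.
vendor = Mempool()
inputs = {("coin1", 0), ("coin2", 0)}
accepted_trv = vendor.accept("TRV", inputs)
accepted_tra = vendor.accept("TRA", inputs)
```

Under this model, whichever conflicting transaction reaches a peer first wins at that peer, which is precisely what A exploits by racing TRV to V and TRA to everyone else.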
Requirement 2: TRA is confirmed in the blockchain

If TRV is confirmed first in the blockchain, TRA cannot appear in subsequent blocks. That is, A will not get his or her BTCs back. Recall that the goal of A is to acquire a service offered by V without having to spend his or her BTCs. As shown experimentally in [1], this requirement can be satisfied by broadcasting TRA quickly to as many nodes in the network as possible. Note that if TRV and TRA are released in the network at the same time, they are likely to have similar chances of getting confirmed in an upcoming block. This is the case since Bitcoin peers will not accept multiple transactions that share common inputs; they will only accept the version of the transaction that reaches them first, which they will consider for inclusion in their generated blocks, and will ignore all subsequent conflicting transactions. Given this, a double-spending attack can succeed if V receives TRV first (see Requirement 1), and the majority of the peers in the network receive TRA, so that TRA is more likely to be included in a subsequent block. This can be achieved if A can rely on the cooperation of one or more helper nodes that help him or her broadcast TRA to a large number of nodes. Note that A (and its helpers) can try to increase the number of their immediate neighbors in order to increase the success probability of the attack.

Requirement 3: V's service time is smaller than the time it takes V to detect misbehavior

Since Bitcoin users are anonymous and users hold many accounts, there is only limited value in V detecting misbehavior after the user has obtained the service (e.g., left the store). As such, for V to successfully detect any misbehavior by A, the detection time must be smaller than the service time. We point out that requirements (1) and (2) are sufficient for the case where the vendor only checks for the reception of the transaction as a proof of payment and does not employ any other double-spending prevention/detection techniques.
This is currently the case in most existing Bitcoin client implementations. This analysis shows that requirements 1, 2, and 3 are realizable in existing Bitcoin implementations, and it is confirmed by means of experimental results adapted from [1] and summarized in Table 4.1. These results confirm that double-spending attacks succeed with a probability of almost 100% if A utilizes at least one additional helper node. This clearly shows that zero-confirmation transactions are not secure in Bitcoin and should not be accepted directly by vendors.
Table 4.1
Success Probability in Double-Spending Zero-Confirmation Payments in Bitcoin³

Location                          # Helpers   Success probability
Asia Pacific 1, 125 connections   2           100%
Asia Pacific 2, 125 connections   2           100%
North America 1, 8 connections    1           100%
North America 2, 40 connections   1           90%
Asia Pacific 1, 8 connections     2           100%
Asia Pacific 2, 125 connections   2           100%
North America 1, 40 connections   1           100%

4.2.1.1 Finney Attack
Note that we have assumed so far that A does not participate in the mining process. In case A is a miner or compromises a node that participates in the mining process, the advantage of A in mounting double-spending attacks on zero-confirmation transactions can increase further, depending on the mining power available to the adversary. Namely, Finney [17] describes a double-spending attack in Bitcoin where the attacker includes in his or her generated blocks a number of transactions that transfer some coins between his or her own addresses. These blocks are only released in the network after the attacker double-spends the same coins using zero-confirmation payments and acquires a given service. Clearly, the success probability of this attack depends on the mining power available to the adversary. Given the tremendous computing power that supports the current Bitcoin network,⁴ the success probability of an adversary that does not control a considerable fraction of the mining power is negligible.
³ Here, "Location" denotes the location of V, and "connections" denotes the number of V's connections. The success probability is adapted from the findings of [1] and is interpolated by means of experiments using Amazon nodes.
⁴ The hashing rate in Bitcoin amounted to 0.5 · 10^15 hashes per second in November 2015.
4.2.2 Possible Countermeasures
We start by discussing a number of countermeasures to alleviate this attack. We also present a solution that is integrated in Bitcoin XT, and we analyze the limitations of this solution.

Adopting a Listening Period

As advocated in [15], one possible way for V to detect double-spending attempts is to adopt a listening period of a few seconds before delivering its service to A; during this period, V monitors all the transactions it receives and checks whether any of them attempts to double-spend the coins that V previously received from A. This technique is based on the intuition that, since it takes every transaction a few seconds to propagate to every node in the Bitcoin network, it is highly likely that V would receive both TRV and TRA within the listening period (and before granting service to A). This detection technique can be circumvented by A as follows. A can delay the transmission of TRA such that t = (t^A_V − t^V_V) exceeds the listening period (requirement (3)) while TRA still has a significant chance of being spread in the network. On one hand, as t increases, the probability that all the immediate neighbors of V in the Bitcoin P2P network receive TRV first also increases; when they receive TRA later on, TRA will not be added to the memory pools of V's neighbors, and as such TRA will not be forwarded to V. On the other hand, A should make sure that TRA is received by enough peers so that requirement (2) can be satisfied. To that end, A can increase the number of helpers it controls. As shown in Table 4.2 (results adapted from [1]), an adversary can successfully double-spend transactions even if the merchant adopts a listening period of 15 seconds. The detection probability in this case varies between 10% and 80%, depending on the topology of the underlying Bitcoin overlay network.
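The listening-period check itself is straightforward to model: the vendor buffers the transactions received within the window and looks for one that spends an input of TRV. The sketch below is a simplified illustration (the structures and the helper `conflicts` are hypothetical, not client code):

```python
def conflicts(trv_inputs, observed_txs):
    """Return True if any transaction observed during the listening
    period spends one of the inputs of TRV (a double-spend attempt)."""
    return any(trv_inputs & tx["inputs"] for tx in observed_txs)

trv_inputs = {("coin1", 0)}

# Case 1: TRA reaches V within the listening period -> detected.
window_txs = [
    {"inputs": {("coin9", 1)}},  # unrelated transaction
    {"inputs": {("coin1", 0)}},  # TRA, spending the same input as TRV
]
detected = conflicts(trv_inputs, window_txs)

# Case 2: A delays TRA past the window (or V's neighbors drop it),
# so only the unrelated transaction is seen -> attack goes unnoticed.
detected_late = conflicts(trv_inputs, window_txs[:1])
```

The second case captures exactly the circumvention described above: detection fails not because the check is wrong, but because TRA never reaches V within the window.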
Even worse, as shown in Table 4.3, there are cases in which the vendor can never detect a double-spending attack, even if he or she adopts an infinite listening period. These cases correspond to the scenario where all the neighbors of V have received TRV first and will therefore never forward TRA to V.

Inserting Observers in the Network

Note that V can also rely on additional nodes that it controls within the Bitcoin network (observers), which would directly relay to V all the transactions that they
Table 4.2
Experimental Detection Probability Using a Listening Period of 15 Seconds⁵

Detection probability (per experimental setting): 10%, 10%, 6.66%, 20%, 11%, 10%, 30%, 63%, 20%, 30%, 45%, 10%, 10%, 10%, 20%, 40%, 20%, 26.66%, 80%
receive. This countermeasure circumvents the limitations of the listening period, as it is hard for an adversary to identify all observers in the network and deny them the delivery of TRA. However, this technique incurs additional costs on merchants, who have to invest in additional equipment to deploy observers in various locations around the globe. As shown in Table 4.4 (results adapted from [1]), almost five different observers need to be deployed across the globe to ensure that at least one observer detects double-spending attacks.
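The benefit of deploying several observers can be seen under a simple independence assumption. If each observer detects a given double-spend with probability q (the value of q below is an assumed illustration, not a measurement from [1]), at least one of k observers detects it with probability 1 − (1 − q)^k:

```python
def combined_detection(q: float, k: int) -> float:
    """Probability that at least one of k independent observers, each
    detecting a double-spend with probability q, detects the attempt."""
    return 1.0 - (1.0 - q) ** k

q = 0.45  # assumed per-observer detection probability (illustrative)
p_one = combined_detection(q, 1)   # a single observer
p_five = combined_detection(q, 5)  # five observers, as in Table 4.4
```

Even a modest per-observer detection probability compounds quickly: in this illustration, five observers already push the combined detection probability above 90%.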
⁵ Here, "Setting" refers to the location of V, the number of connections of V at the time of the attack, and the number of helpers employed by A.
Table 4.3
Experimental Instances Where TRA Is Not Received by V⁶

Setting                                     Success probability
South America, 8 connections, 3 helpers     7.7%
South America, 8 connections, 4 helpers     57%
Asia Pacific, 8 connections, 3 helpers      57%
Asia Pacific, 8 connections, 3 helpers      66%
North America, 20 connections, 3 helpers    47%
Asia Pacific, 60 connections, 1 helper      20%
Refusing Incoming Connections

The success probability of double-spending attacks on zero-confirmation transactions heavily depends on the propagation delay of TRV from A to V. Clearly, a direct communication channel between A and V considerably contributes to the success of the attack. For instance, if merchants do not accept incoming connections or are located behind firewalls and NATs, then the success probability of double-spending attacks can be reduced. Note, however, that it is hard for merchants to ensure that all their current connections are trustworthy. For example, their connections can be compromised by the adversary.

Increasing the Number of Neighbors

We additionally point out that the number of connections established by V is an important parameter affecting the success of double-spending attacks. That is, the fewer the connections of V, the more likely it is that all the neighbors of V receive TRV before TRA, and thus that V does not receive TRA and therefore cannot detect the attack. Similarly, as the number of connections of V increases, it is more likely that
⁶ In this case, V cannot detect double-spending attacks even if it adopts a very large listening period. Here, "Setting" refers to the location of V, the number of connections of V at the time of the attack, and the number of helpers employed by A.
Table 4.4
Experimental Detection Probability Using 5 Observers⁷

% Observed (per experimental setting): 53%, 47%, 62%, 91%, 46%, 74%, 78%, 78%, 60%, 60%, 87%, 42%, 42%, 36%, 36%, 57%, 18%, 28%, 88%
some of these neighbors receive TRA before TRV and forward it to V, who can immediately detect a double-spending attempt.

Inflicting Penalties on Misbehaving Nodes

Bitcoin recently adopted a penalty system to punish misbehaving nodes; for example, nodes that broadcast ill-formed objects can be temporarily (up to 24 hours) banned from connecting to a peer. One possible way to deter a double-spending
⁷ Here, "Setting" refers to the location of V, the number of connections of V at the time of the attack, and the number of helpers employed by A. "% Observed" refers to the fraction of observers detecting double-spending attacks.
attack would be to rely on the alert mechanism of Bitcoin to alert the network about a misbehaving address that is attempting double-spending attacks. For example, in our case, nodes can forward both TRV and TRA using an alert message to the rest of the network. All Bitcoin nodes can then verify that the address of A is attempting a double-spend and can decide not to accept any transaction issued by this address. However, the impact of this penalty is limited, since A could double-spend using addresses that contain little (or no) BTCs. In Chapter 5, we show how to link different Bitcoin addresses of an entity in an attempt to inflict a harsher penalty on A.

Not Advertising TRV

Bamert et al. [18] suggested that V can effectively avoid isolation by not relaying transaction TRV. By doing so, all the neighbors of V will forward TRA to V, who will be able to detect the double-spending attack immediately. This countermeasure can be circumvented if the attacker similarly does not immediately advertise TRA in the network. As soon as V advertises TRV (and one of the attacker's nodes receives it), TRA can be advertised in the network to prevent V from detecting the attack.

Forward First Double-Spend Attempt

In order to efficiently detect double-spending of zero-confirmation transactions, Karame et al. proposed in [1, 11] that Bitcoin peers forward transactions that attempt to double-spend the same coins in the Bitcoin network. Namely, whenever a peer receives a new transaction, it checks whether the transaction spends coins that have already been spent, to different recipients, by another transaction in its memory pool or in the blockchain; if so, the peer forwards the transaction to its neighbors (without adding the transaction to its memory pool).
To decrease the number of transactions circulating in the Bitcoin network and to prevent the deterioration of the network's performance, peers only forward the first double-spending attempt and drop all subsequent double-spends of the same coin. This variant ensures that all peers in the network can identify and verify the misbehaving address and refuse to receive
any subsequent transaction from this address. This detection technique has been integrated in Bitcoin XT [19]. Recently, Gervais et al. [4] showed that the protection of Bitcoin XT is not effective in preventing double-spending attacks on fast payments. They show that A can deny the delivery of double-spending transactions to the merchant using the attack described in Section 4.1.3, thus effectively preventing a Bitcoin XT node from discovering any double-spending attempt.
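The relay policy of [1, 11] can be sketched as follows: each peer pools the first transaction spending a given input, relays (without pooling) the first conflicting transaction it observes, and drops any further conflicts. This is a simplified model, not the actual Bitcoin XT code:

```python
class RelayPeer:
    """Toy peer implementing the forward-first-double-spend policy."""

    def __init__(self):
        self.pool = {}        # input reference -> first-seen transaction
        self.alerted = set()  # inputs whose first conflict was relayed

    def receive(self, name, inputs):
        if not any(i in self.pool for i in inputs):
            for i in inputs:
                self.pool[i] = name
            return "accept-and-relay"        # first spend of these inputs
        if not any(i in self.alerted for i in inputs):
            self.alerted.update(inputs)
            return "relay-conflict"          # forwarded, but never pooled
        return "drop"                        # later conflicts are discarded

peer = RelayPeer()
ins = frozenset({("coin1", 0)})
r1 = peer.receive("TRV", ins)  # first-seen transaction
r2 = peer.receive("TRA", ins)  # first double-spend attempt: relayed
r3 = peer.receive("TRB", ins)  # subsequent conflict: dropped
```

Relaying only the first conflict keeps the extra traffic bounded while still propagating the evidence of misbehavior to every peer, including the merchant.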
4.3 BITCOIN FORKS
We now discuss another important security threat to Bitcoin: forks. During normal Bitcoin operation, miners work on extending the longest blockchain in the network. If miners do not share the same view of the network (e.g., due to network partitioning), they might work on different blockchains, thus resulting in forks in the blockchain (see Figure 4.7). Block forks are inherently resolved by the Bitcoin system; the longest blockchain (which is backed by the majority of the computing power in the network) will eventually prevail. In rare instances, the Bitcoin developers can force one chain to be adopted at the expense of others [20]. Transactions that do not appear in blocks that are part of the main (i.e., longest) blockchain will be re-added to the pool of transactions in the system and reconfirmed in subsequent blocks. During block forks, the adversary bears little risk in performing double-spending attacks. Indeed, under such settings, the adversary can try to include TRV in one chain and TRA in another [17]. In what follows, we discuss in greater detail double-spending attacks in the special case where Bitcoin is subject to blockchain forks [21].

4.3.1 Exploiting Forks to Double-Spend
In what follows, we describe an example of a double-spending attack that takes advantage of block forks and was tested in Bitcoin in [22]. This attack leverages an exploit in Bitcoin that arises from the simultaneous adoption of client versions 0.8.1 and 0.8.2 (or beyond) in the network. Starting from version 0.8.2, Bitcoin clients no longer accept transactions that do not follow a given signature encoding. As we show, this incompatibility with prior client versions can potentially lead to a double-spending attack on zero-confirmation payments in Bitcoin. Note that this attack only works when V operates a client version prior to 0.8.2.
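The version incompatibility can be illustrated with a toy stand-in for the stricter encoding check (the real 0.8.2 rule enforces canonical DER encoding of signatures; the sketch below only models the zero-padding aspect described next, and the byte values are dummies):

```python
def strict_encoding_ok(sig: bytes) -> bool:
    """Toy check modeling the stricter rule: reject signatures that
    carry a superfluous leading zero byte (pre-0.8.2 nodes tolerated
    such padding, newer nodes do not)."""
    return len(sig) > 0 and sig[0] != 0x00

canonical_sig = bytes([0x30, 0x44, 0x02, 0x20]) + bytes(32)  # dummy payload
padded_sig = b"\x00" + canonical_sig                          # zero-padded copy

new_node_accepts_padded = strict_encoding_ok(padded_sig)       # rejected
new_node_accepts_canonical = strict_encoding_ok(canonical_sig) # accepted
```

A transaction signed with the padded form is thus visible to old-version nodes but invisible to new-version miners, which is precisely the asymmetry the attack exploits.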
Up to version 0.8.1, a transaction signature could contain zero-padded bytes and the signature check would still pass. However, starting from version 0.8.2, transactions with padding are no longer accepted into the memory pools of nodes, nor are they relayed to other nodes.⁸ This gives A a considerable advantage in mounting a double-spending attack as follows:

1. A sends a transaction TRV with a zero-padded signature to V.
2. TRV will be relayed to the miners. Miners that use any Bitcoin version newer than 0.8.1 will not accept the transaction into their memory pool and thus will not include it in a block. Miners with an older Bitcoin version will accept it.
3. A waits for a short time t (e.g., 1 to 5 minutes) until he or she acquires the service from the merchant.
4. Then, provided that TRV has still not been included in a Bitcoin block, A sends another transaction TRA that double-spends the inputs of TRV to the benefit of a new Bitcoin address that is controlled by A. TRA is not padded with additional zeros.
5. If most peers in the network use client versions newer than 0.8.1, they will accept TRA (and will reject TRV). The higher the fraction of peers that use version 0.8.2 (or beyond), the larger the likelihood that TRA is included in a block and that the attack succeeds.

While block forks might naturally occur from time to time in the network, such forks are unlikely to last for more than a few blocks, as the network views tend to converge naturally on the longest blockchain within a few blocks. We argue that new version releases, on the other hand, can cause more serious damage, since they might result in long-lasting block forks that can only be stopped by manual intervention. Version releases should therefore be carefully designed for backward compatibility; otherwise, the Bitcoin system might witness severe misbehavior.

4.3.2 Fork Resolution
As mentioned earlier, blockchain forks are detrimental to the operation of the Bitcoin system. Since one blockchain (the longest) will eventually prevail, all transactions that were included only in the other chains will be invalidated by the miners in the system.⁸

⁸ This applies to all Bitcoin versions starting from version 0.8.2 until the time of writing (i.e., version 0.8.5).
Figure 4.7 Sketch of the Bitcoin blockchain fork that occurred in Bitcoin on 11.03.2013.
Note that Bitcoin does not embed any mechanism to alleviate this problem; instead, if the fork persists for a considerable period of time, the Bitcoin developers have to make a decision favoring one chain at the expense of another (e.g., by sending alert messages and hard-coding the preferred chain in the client code). As an example, we describe a chain fork from March 2013 (adapted from [20]) that solicited intervention from the Bitcoin developers. Bitcoin client version 0.7 stored the blockchain in the BerkeleyDB database, while client version 0.8 switched to the more efficient LevelDB database. Version 0.7 sets the threshold for the maximum number of locks per BerkeleyDB update to 10,000; this limit, on the other hand, is set to 40,000 in version 0.8. This discrepancy caused a serious fork in the blockchain starting from block 225,430 on March 11, 2013. This block contained around 1,700 transactions, affected more than 5,000 block index entries, and therefore exceeded the allowed number of locks for version 0.7 (each block index entry requires around 2 locks in BerkeleyDB). As a consequence, this resulted in a severe block fork in the chain; all version 0.7 miners rejected block 225,430 and continued working on a blockchain that did not include it, while miners with version 0.8 accepted that block and added it to their blockchain. The chain adopted by version 0.8 clients was supported by the majority of the computing power in the network (it exceeded the chain adopted by 0.7 clients by 13 blocks at block 225,451). Nevertheless, the Bitcoin developers decided, 90 minutes after the fork occurred, to force the shorter chain to be the genuine one. This decision is at odds with the claim that Bitcoin is a decentralized system in which the majority of the computing power regulates Bitcoin. Fewer than 10 entities [23] took a decision to outvote the majority of the computing power in the network; this decision affected the transactions of thousands of users.
We also point out that such influential entities also have the power to make more radical decisions (e.g., accepting or rejecting transactions in the system).
References

[1].
[2] Ethan Heilman, Alison Kendler, Aviv Zohar, and Sharon Goldberg. Eclipse attacks on Bitcoin's peer-to-peer network. In Proceedings of the 24th USENIX Conference on Security Symposium, SEC'15, pages 129–144, Berkeley, CA, USA, 2015. USENIX Association.
[3].
[4] Arthur Gervais, Hubert Ritzdorf, Ghassan O. Karame, and Srdjan Capkun. Tampering with the delivery of blocks and transactions in Bitcoin. IACR Cryptology ePrint Archive, 2015:578, 2015.
[5] S. Nakamoto. Bitcoin: A Peer-to-Peer Electronic Cash System, 2009.
[6] Ittay Eyal and Emin Gün Sirer. Majority is not enough: Bitcoin mining is vulnerable. CoRR, abs/1311.0243, 2013.
[7] Ayelet Sapirshtein, Yonatan Sompolinsky, and Aviv Zohar. Optimal selfish mining strategies in Bitcoin. CoRR, abs/1507.06183, 2015.
[8] Nicolas T. Courtois and Lear Bahack. On subversive miner strategies and block withholding attack in Bitcoin digital currency. CoRR, abs/1402.1718, 2014.
[9] Ittay Eyal. The miner's dilemma. In Proceedings of the 36th IEEE Symposium on Security and Privacy (Oakland), 2015.
[10] Kartik Nayak, Srijan Kumar, Andrew Miller, and Elaine Shi. Stubborn mining: Generalizing selfish mining and combining with an eclipse attack. In IACR Cryptology ePrint Archive 2015, 2015.
[11] Ghassan O. Karame, Elli Androulaki, Marc Roeschlin, Arthur Gervais, and Srdjan Čapkun. Misbehavior in Bitcoin: A Study of Double-Spending and Accountability. ACM Trans. Inf. Syst. Secur., 18(1):2:1–2:32, May 2015.
[12] Comparison of Mining Pools, 2013. Available from: Comparison_of_mining_pools.
[13] Comparison of Mining Hardware, 2013. Available from: Mining_hardware_comparison.
[14] ProofWiki, 2013. Available from: Shifted_Geometric_Distribution.
[15] Myths - Bitcoin, 2013. Available from: of_sale_with_bitcoins_isn.27t_possible_because_of_the_10_minute_wait_for_confirmation.
[16] FAQ - Bitcoin, 2013.
[17] The Finney Attack, 2013. Available from: The_.22Finney.22_attack.
[18] Tobias Bamert, Christian Decker, Lennart Elsen, Roger Wattenhofer, and Samuel Welten. Have a snack, pay with Bitcoins. In 13th IEEE International Conference on Peer-to-Peer Computing (P2P), Trento, Italy, September 2013.
[19] Bitcoin XT, 2015.
[20] Arthur Gervais, Ghassan Karame, Srdjan Capkun, and Vedran Capkun. Is Bitcoin a Decentralized Currency? IEEE Security and Privacy Magazine, May/June issue, 2014.
[21] C. Decker and R. Wattenhofer. Information Propagation in the Bitcoin Network. In 13th IEEE International Conference on Peer-to-Peer Computing, 2013.
[22] Arthur Gervais, Hubert Ritzdorf, and Ghassan O. Karame. Double-spending fast payments in Bitcoin due to client versions 0.8, 2013.
[23] IRC Bitcoin incident resolution. Available from: bitcoin-dev/logs/2013/03/11.
Chapter 5

Privacy in Bitcoin

To strengthen the privacy of its users, Bitcoin lets users participate in transactions using pseudonyms (i.e., Bitcoin addresses). Generally, each user can have hundreds of different Bitcoin addresses that are all stored and transparently managed by his or her client. In spite of the reliance on pseudonyms, the public time-stamping mechanism of Bitcoin (i.e., the blockchain) raises serious concerns with respect to the privacy of users. In fact, given that Bitcoin transactions basically consist of a chain of digital signatures, the expenditure of individual coins can be publicly tracked in the blockchain [1–5]. Moreover, information is leaked in the Bitcoin system through the P2P network (the peers' connections and the traffic relayed via these connections). For example, a potential attacker could link transactions to their originator IP addresses [6, 7] by studying the connectivity and traffic of the peers. Given the sharp increase in the user base of Bitcoin, and the growing use of Bitcoin as a currency and payment protocol in various online applications, the need to support privacy in Bitcoin is becoming more prevalent. As a first step in this direction, a number of studies have quantified the privacy provisions of Bitcoin, assuming that the latter is used for everyday payment needs [3]. These studies clearly show the limits of the privacy offered by Bitcoin. Motivated by these findings, the Bitcoin community has focused on enabling privacy-preserving payments within Bitcoin. On the one hand, a considerable number of start-ups have assumed the role of payment mixers; in other words, these companies perform Bitcoin transactions on behalf of users registered to their service, and in this way they obfuscate the payments' origins. While such services have clear potential in solving both sides of the problem (protocol and network), they require trusting a third party (e.g., the mixing service).
On the other hand, the literature features a considerable number of proposals extending the standard Bitcoin protocol in order to conceal the traceability of payments within Bitcoin [8–10] and/or to hide the payment amounts [9, 10] without the need for a trusted third party. Throughout this chapter, we assume a realistic threat model as encountered in the current Bitcoin deployment. More specifically, we assume that users of the system act either as payers (coin senders) or as payees (coin recipients). From the system usability perspective, a user may be paying one or more users at the same time, and in some cases is able to merge multiple coins into a single coin of greater value. From a security perspective, we assume that the adversary is motivated to acquire information about the addresses/transactions pertaining to all or a subset of Bitcoin users. As such, the adversary does not only have access to the public log of transactions, denoted by pubLog, but is also part of the Bitcoin system and can perform or receive payments through Bitcoin. Here, the adversary can also access the (public) addresses of some vendors, along with (statistical) information such as the pricing of items or the number of their clients within a specified amount of time. We assume, however, that the adversary is computationally bounded and as such cannot construct ill-formed Bitcoin blocks, double-spend confirmed transactions, forge signatures, and so on. This chapter is divided into two parts. In Section 5.1, we start by evaluating the privacy provisions of Bitcoin in light of a number of reported attacks on the system. In Section 5.3, we describe and analyze a number of proposals for enabling privacy-preserving payments in Bitcoin.
5.1 USER PRIVACY IN BITCOIN
The literature contains a number of proposals that analyze the privacy offered by Bitcoin [3, 4, 11]. Triggered by the fact that all Bitcoin transactions are posted in a publicly available ledger, the research community investigated the degree to which this public log of transactions leaks information on the profiles of Bitcoin users or enables the tracing of a single person's activities. As mentioned in previous chapters, Bitcoin strengthens the privacy of its users by having them participate in transactions using pseudonyms (or addresses). However, since transactions basically consist of a chain of digital signatures, the expenditure of individual coins can be publicly tracked [1].
Motivated by these facts, we investigate and quantify in what follows the privacy that is provided by Bitcoin.
5.1.1 Protocol-Based Privacy Quantification in Bitcoin
To quantify privacy in Bitcoin, we observe the public log of Bitcoin, denoted by pubLog, within a period of time ∆t. During this period, nU users, U = {u1, u2, . . . , unU}, participate in pubLog through a set of nA addresses: A = {a1, a2, . . . , anA}. We assume that within ∆t, nT transactions have taken place as follows: T = {τ1(S1 → R1), . . . , τnT(SnT → RnT)}, where τi(Si → Ri) denotes a transaction with (unique) ID number i, and Si and Ri denote the sets of senders' addresses and recipients' addresses, respectively.
For the analysis presented here, we consider the privacy measures adopted by existing Bitcoin clients. Namely, we assume that (1) users own many Bitcoin addresses, and (2) users are encouraged to frequently change their addresses (by transferring some of their BTCs to newly created addresses); this conforms with the current practices adopted in Bitcoin. Moreover, conforming with the operation of existing client implementations, a new address, the shadow address [12], is automatically created and used to collect back the change that results from any transaction issued by the user. Besides the reliance on pseudonyms, shadow addresses constitute the only mechanism adopted by Bitcoin to strengthen the privacy of its users.
In what follows, we introduce a notion to quantify Bitcoin privacy, activity unlinkability, and we provide metrics to appropriately quantify it in the existing Bitcoin system. Activity linkability refers to the ability of an adversary A to link two different addresses (address linkability) or transactions (transaction linkability) that pertain to the same user of the system; in this sense, activity linkability is strongly associated with accountability. That is, the more a third-party (e.g., law enforcement) is able to reconstruct the set of addresses or transactions of an individual, the easier it is to make Bitcoin users accountable for any misbehavior.
More specifically, fee-based punishments for double-spending acts could be made more effective, for example, by blacklisting or invalidating the BTCs of the addresses that are linked to the double-spender's address. However, activity linkability seems to contradict the privacy requirements of a payment system with public transaction logs such as Bitcoin, where it is crucial to maintain the confidentiality of each individual's balance and transactions. Therefore, we see activity unlinkability as the privacy-preserving complement of linkability.
We note that since two Bitcoin transactions are not more linkable than the addresses that participate in them, we focus our analysis on the unlinkability of addresses. In particular, we define address unlinkability through the following AddUnl game, and we quantify it by assessing the advantage of an adversary A in winning this game over an adversary AR who responds to all game challenges with random guesses. We assume that A has access to pubLog and that both A and AR have gathered the same a priori knowledge KA with respect to correlations among a subset of addresses. KA can include any information related to address ownership (e.g., the identity of the owner of an address), the transactional habits of the latter, whether two specific addresses are owned by the same individual, and so on. For simplicity, we assume in the following that KA consists of a list of probabilities of correlating every pair of addresses in pubLog; clearly, the correlation probability between addresses for which the adversary has no prior knowledge equals the default probability that the two addresses are owned by the same individual (depending on the assumed game). The adversary can gather this a priori knowledge, for example, by interacting with users in the system [5].
We construct the following address unlinkability game in Bitcoin, AddUnl, which consists of an adversary A and a challenger C who knows the correct assignment of addresses to Bitcoin entities. The adversary A chooses an address a0 among the addresses that appear in pubLog, but for which the adversary has no prior knowledge (expressed in KA), and sends it to the challenger C. The challenger C chooses a bit b uniformly at random. If b = 1, then C chooses another address a1 randomly from pubLog such that a0 and a1 belong to the same user; otherwise, C randomly chooses a1 such that the two addresses are owned by different users.
The challenger sends ⟨a0, a1⟩ to A, who responds with his or her estimate b′ on whether the two addresses belong to the same user. A wins the game if he or she answers correctly (i.e., b = b′). We say that Bitcoin satisfies address unlinkability if, for all probabilistic polynomial time (p.p.t.) adversaries A and for all a0, A has at most a negligible advantage over AR in winning the game:

Prob[b′ ← A(KA, a0, a1) : b = b′] − Prob[b′ ← AR(KA, a0, a1) : b = b′] ≤ ε,

where ε is negligible with respect to the security parameter κ.
Quantifying Address (Un-)linkability: In what follows, we quantify the unlinkability offered by Bitcoin by measuring the degree to which Bitcoin addresses can be linked to the same user. To do so, we express the estimate of A through an nA × nA matrix Elink, where Elink[i, j] = {pi,j}, i, j ∈ [1, nA]. That is, for every address ai, A assesses the probability pi,j with which ai is owned by the same user as every
other address aj in pubLog. Note that Elink incorporates KA and any additional information that A could extract from pubLog (e.g., by means of clustering or statistical analysis). Similar to [13], we quantify the success of A in the AddUnl game as follows. Let GTlink denote the genuine address association matrix; that is, GTlink[i, j] = 1 if ai and aj belong to the same user, and GTlink[i, j] = 0 otherwise, for all i, j ∈ [1, nA]. For each address ai, we compute the error in A's estimate, which denotes the distance of Elink[i, ∗] from the genuine association of ai with the rest of the addresses in pubLog: ||Elink[i, ∗] − GTlink[i, ∗]||, where || · || denotes the L1 norm of the corresponding row vector. Thus, the success of A in AddUnl, SuccA, can then be assessed through A's maximum error:

SuccA = max ∀ai ∉ KA (||Elink[i, ∗] − GTlink[i, ∗]||).
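Concretely, the maximum-error computation above can be illustrated with a small, purely hypothetical example; the matrices below are illustrative stand-ins, not derived from real Bitcoin data:

```python
# Sketch of the adversary's success metric in the AddUnl game:
# Succ_A = max over addresses a_i not in K_A of the L1 distance
# between A's estimate row E_link[i,*] and the ground truth GT_link[i,*].

def l1(u, v):
    """L1 norm of the difference of two equal-length rows."""
    return sum(abs(a - b) for a, b in zip(u, v))

def success(E_link, GT_link, known):
    """Maximum per-address estimation error over addresses outside K_A."""
    return max(
        l1(E_link[i], GT_link[i])
        for i in range(len(E_link))
        if i not in known
    )

# Three addresses; addresses 0 and 1 belong to the same user.
GT = [[1, 1, 0],
      [1, 1, 0],
      [0, 0, 1]]
# A perfect estimate yields error 0; an uninformed estimate yields more.
E_perfect = [row[:] for row in GT]
E_guess   = [[1, 0.5, 0.5],
             [0.5, 1, 0.5],
             [0.5, 0.5, 1]]

print(success(E_perfect, GT, known=set()))  # 0
print(success(E_guess, GT, known=set()))    # 1.0
```

The gap between the two printed values corresponds to the absolute advantage Link^abs_A discussed below.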
Similarly, we represent the estimate of AR in the AddUnl game for all possible pairs of addresses by the nA × nA matrix ERlink, constructed as follows: ERlink[i, j] = πi,j if ⟨ai, aj⟩ ∈ KA, and ERlink[i, j] = ρ + (1 − ρ)·(1/2) otherwise. Here, πi,j represents the probability that addresses ai and aj correspond to the same user according to KA, and ρ is the fraction of addresses that cannot be associated to other addresses (i.e., when their owners have only one address). For pairs of addresses that are not included in KA, this probability equals (1/2)(1 + ρ); that is, the probability that at least one of the two cases happens: (1) a0 is the only address of its owner, or (2) AR did not succeed in guessing b correctly.
Given this, we measure the degree of address linkability in Bitcoin by evaluating the additional success that A can achieve from pubLog when compared to AR. We call this advantage Link^abs_A = SuccA − SuccAR, and its normalized version LinkA = (SuccA − SuccAR)/SuccAR. Address unlinkability can then be measured by the normalized complement of Link^abs_A: UnLinkA = 1 − (SuccA − SuccAR)/SuccAR.
5.1.2 Exploiting Existing Bitcoin Client Implementations
Current Bitcoin client implementations enable A to link a fraction of Bitcoin addresses that belong to the same user by means of the following heuristics.
Heuristic 1: Multi-input Transactions: Since each transaction input must be signed with the private key of its owner, and multi-input transactions are typically assembled from the coins of a single wallet, it is commonly assumed that all
input addresses belong to the same user [3, 4].
Heuristic 2: Shadow Addresses: As mentioned earlier, the standard Bitcoin client generates a new address, the shadow address [12], on which each sender can collect back the change [3]. This mechanism suggests a distinguisher for shadow addresses. Namely, in the case when a Bitcoin transaction has n output addresses, {aR1, . . . , aRn}, such that only one of these addresses appears in pubLog for the first time, that fresh address is likely to be the shadow address on which the sender collects his or her change. Note that the official Bitcoin client started to support transactions with multiple recipients on December 16, 2010.
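As an illustration, the two heuristics can be implemented as a union-find pass over a simplified representation of pubLog; the transaction model and sample data below are hypothetical and are ours, not part of the referenced studies:

```python
# Illustrative sketch of address clustering based on Heuristics 1 and 2.
# A transaction is modeled as (inputs, outputs); addresses are strings.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, a):
        self.parent.setdefault(a, a)
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def cluster(transactions):
    uf = UnionFind()
    seen = set()  # addresses already observed in pubLog
    for inputs, outputs in transactions:
        # Heuristic 1: all input addresses belong to the same user.
        for addr in inputs[1:]:
            uf.union(inputs[0], addr)
        # Heuristic 2: a single never-seen output is likely the shadow
        # (change) address and therefore belongs to the sender.
        fresh = [o for o in outputs if o not in seen]
        if len(fresh) == 1 and len(outputs) > 1:
            uf.union(inputs[0], fresh[0])
        seen.update(inputs)
        seen.update(outputs)
    # Group all linked addresses by their representative.
    groups = {}
    for addr in uf.parent:
        groups.setdefault(uf.find(addr), set()).add(addr)
    return list(groups.values())

txs = [
    (["a1", "a2"], ["b1"]),   # a1, a2 linked by Heuristic 1
    (["b1"], ["a3", "c1"]),   # two fresh outputs: Heuristic 2 not applicable
    (["a1"], ["c1", "d1"]),   # d1 is fresh, c1 is not: d1 is a1's change
]
for g in cluster(txs):
    print(sorted(g))  # ['a1', 'a2', 'd1']
```

Addresses that never co-occur in an input set and are never identified as change remain unclustered, which matches the observation that only a fraction of addresses can be linked this way.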
5.1.3 Summing Up: Behavior-Based Analysis
Besides exploiting current Bitcoin implementations, A could also make use of behavior-based clustering techniques, such as the K-means clustering (KMC) and hierarchical agglomerative clustering (HAC) algorithms. Let U be the set of users populating Bitcoin, and let (GA1, . . . , GAnGA) denote the groups of addresses (GAs) that A has obtained by applying the two aforementioned heuristics on pubLog. Given this, the goal of A is to output a grouping of addresses Eprof = {g1, . . . , gnU} such that Eprof best approximates U. Since each GA is owned by exactly one user, the estimate of the assignment of each GAi can be modeled by a variable zi such that zi = k if and only if GAi belongs to gk.
In fact, HAC assumes that initially each GA represents a separate cluster (i.e., zi = i for i = 1, . . . , nGA) and computes similarity values for each pair of clusters. The clusters with the highest similarity value are combined into a single cluster, and cluster-to-cluster similarity values are recomputed. The process continues until the number of created clusters equals the number of users nU. KMC is then initialized using the output of HAC and assumes that each user is represented by the center of a cluster. The algorithm iterates assignments of GAs to clusters and aims at minimizing the overall distance of GAs to the centers of the clusters they have been assigned to. The centers of the clusters and the GA-to-cluster distances are recomputed in each round.
In [3], Androulaki et al. investigated the effectiveness of behavior-based clustering algorithms in profiling Bitcoin users using a Bitcoin simulator. Their results can be summarized as follows:
• The authors show that given 200 simulated user profiles, almost 42% of the users have their profiles captured with 80% accuracy, which clearly results in considerable leakage.
• Profile leakage in Bitcoin is larger when users participate in a large number of transactions and decreases with the number of transactions performed by the user. This is mainly due to the fact that users who participate in more transactions can be more easily profiled than users who only participate in a few transactions.
• The overall number of transactions exchanged in Bitcoin has little impact on the profile leakage of users in the system. The results show that even when the network features 70% fewer transactions, the fraction of captured transactions per user does not decrease significantly, irrespective of the activity level of each user.
These results suggest that the privacy provisions of Bitcoin are not strong, which opens the door to the integration of accountability measures in the system. It is straightforward to see that as accountability provisions in Bitcoin become stronger, the privacy provisions will become weaker. Notably, the more traceable user activity is within the Bitcoin log, and therefore the more accountable the system is, the more individual privacy, such as activity unlinkability, is compromised.
5.1.4 Coin Tainting
Given that Bitcoin transactions basically consist of a chain of digital signatures, the expenditure of individual coins can be publicly tracked. This enables any entity to taint the coins that belong to a specific set of addresses and to monitor their expenditure across the network. The literature features a number of proposals that cluster Bitcoin addresses [3] and gather behavioral information about these addresses [4, 11].
Coin tainting could be used to achieve a degree of accountability in the Bitcoin network; if an address misbehaves, then Bitcoin users can decide to stop interacting with that address (i.e., not accepting its coins), thus deflating the value of all the coins pertaining to it. For instance, following a theft of 43,000 BTCs from the Bitcoin trading platform Bitcoinica, the Bitcoin service MtGox traced the stolen BTCs and deactivated the accounts that were receiving the tainted coins [14]. Such incidents show that powerful entities in Bitcoin can, rightfully or not, devalue the BTCs owned by specific addresses. If these entities were to cooperate with the handful of developers that have privileged rights in the system,
then all Bitcoin users could be warned not to accept BTCs that originate from a given address (e.g., using alert messages). Even worse, developers could hard-code a list of banned Bitcoin addresses within the official Bitcoin client releases, thus blocking all interactions with a given Bitcoin address without the consent of users.
Furthermore, while coin tainting can be used to punish provably misbehaving addresses, it could also be abused to control the financial flows in the network, for example, under government pressure or due to social activism. This empowers a few powerful entities that are not necessarily part of the Bitcoin network, such as governments and activists, to regulate the Bitcoin economy. Even if all Bitcoin decisions and operations were completely decentralized (which they are not), coin tainting presents an obstacle to a truly decentralized Bitcoin. Coin tainting can be especially detrimental if coins are not widely exchanged among Bitcoin addresses, since this enables entities to damage only a specific set of addresses without alienating other addresses in the system. Other users are then also likely to boycott the tainted coins.
In [15], Gervais et al. conducted two experiments to analyze the impact of coin tainting on the Bitcoin network. In the first experiment, the authors measured the number of unspent transaction outputs (UTXOs) that are affected when tainting a coinbase. Recall that a coinbase is the first transaction in a block and attributes the block mining reward to a particular address. The authors randomly sampled 100 coinbases from the last 20,000 blocks of the blockchain, which at the time of the experiment had a maximum block height of 247,054. The results show that tainting a single coinbase affects a large number of transaction outputs; on average, tainting a single coinbase affects 857,239 UTXOs (with a standard deviation of 767,528), accounting for 12.9% of all possible UTXOs. In a second experiment, Gervais et al.
analyzed the effect of tainting addresses belonging to a single entity in Bitcoin. Given the absence of data to identify these addresses, the authors relied on the two aforementioned heuristics in order to cluster addresses across Bitcoin entities. As an application of this analysis, Gervais et al. [15] identified two Bitcoin addresses belonging to Torservers.net (using information available from blockchain.info). Given the knowledge of these two addresses, they were able to identify a total of 47 addresses belonging to the operator of Torservers, with a total balance of 498.20 BTCs. If an external entity (e.g., a governmental institution) wanted to stop Torservers from receiving Bitcoin donations, it could taint all UTXOs of the affected Bitcoin addresses. Recently, Silk Road, one of the most well-known underground online black markets, was shut down by the FBI in October 2013. Note that Silk Road was only accessible through the Tor network; the FBI
seized over 27,000 BTCs stored within one or more Bitcoin addresses held by Silk Road.
5.1.5 Risks of Holding Tainted Bitcoins
Based on the aforementioned analysis, entities holding Bitcoins have to take into account various risks that are tightly coupled with coin tainting. For instance, the previous owners of their coins could have acquired these coins by means of illegal activities (e.g., theft). In general, any coin whose expenditure history involves a crime poses considerable risks for its new owners. Recall that coin tainting can be applied to incorporate accountability into the system; for example, it could be used to trace stolen BTCs or to launch campaigns not to accept coins issued by suspicious senders. Users should therefore be aware of such risks whenever they receive a transaction in the network. For instance, the receiver has to compute the risk of accepting the offered coins and, whenever needed, instruct the sender to use different coins as payment. Even if the receiver accepts the coins, he or she might be inclined to immediately spend them in order to minimize any risk associated with holding them.
The risks associated with holding tainted coins have been thoroughly investigated in [16]. Here, the authors lay down a risk model for holding BTCs and outline a risk prediction approach using public knowledge from the Bitcoin blockchain. The authors observe that there will be a certain timespan between the time at which an illegal transaction takes place and the time at which the corresponding coins are tainted/blacklisted in the network. Thus, honest users may accept a dirty BTC despite their willingness to comply with the blacklisting process. Clearly, freshly mined coins are less likely to be involved in suspicious activities when compared to coins that have changed hands several times. This clearly motivates the need to predict the risk of holding/accepting coins to ensure that they do not lose value due to coin tainting or blacklisting over time.
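As a toy illustration of such a risk predictor (and not the concrete model of [16]), one could score an output by the taint of its ancestors, discounted per hop back through the spending history; the transaction graph, blacklist, and discount factor below are hypothetical:

```python
# Illustrative taint-based risk score for an unspent output (UTXO).
# A score of 1.0 marks a blacklisted output; ancestors' taint is
# discounted by `discount` per hop, so fresher coins score lower.

def risk(utxo, parents, blacklist, discount=0.5, max_depth=8):
    """Return a score in [0, 1] estimating the taint risk of `utxo`."""
    def walk(out, depth):
        if out in blacklist:
            return 1.0
        if depth == max_depth:
            return 0.0
        return max(
            (discount * walk(p, depth + 1) for p in parents.get(out, [])),
            default=0.0,
        )
    return walk(utxo, 0)

# Toy history: coinbase -> t1 -> t2, where t1 was blacklisted (e.g., theft).
parents = {"t2": ["t1"], "t1": ["coinbase"]}
blacklist = {"t1"}

print(risk("t1", parents, blacklist))        # 1.0
print(risk("t2", parents, blacklist))        # 0.5
print(risk("coinbase", parents, blacklist))  # 0.0
```

Consistent with the observation above, a freshly mined coin with no blacklisted ancestors scores zero, while coins closer to a blacklisted transaction score higher.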
5.2 NETWORK-LAYER ATTACKS
In this section, we show how an adversary can leverage information from the Bitcoin P2P network in order to profile Bitcoin users. We start with a brief refresher on information exchange in the Bitcoin network.
5.2.1 Refresher on Bitcoin P2P Network Setup
Bitcoin peers communicate with the rest of the network through unencrypted and unauthenticated TCP connections. Since no authentication is offered at the connection layer, peers maintain a list of IP addresses associated with their connections (neighbors). To avoid denial-of-service attacks, peers evaluate the behavior of their neighbors in the P2P network by implementing a reputation-based protocol. In particular, whenever a malformed message is received by a node, the peer decreases the reputation value (or increases the penalty score) of the associated connection, identified by its IP address. As soon as the penalty score of a connection reaches a threshold value (currently 100), the peer rejects connection requests coming from that IP for 24 hours (see Chapter 3 for more detail on this process). As mentioned in Chapter 3, Bitcoin peers maintain by default eight outgoing connections, also known as entry nodes.
5.2.2 Privacy Leakage over the Bitcoin Network
In the following, we present a method to deanonymize Bitcoin users by linking their pseudonyms (addresses) to the IP addresses of the underlying clients. This attack was first introduced in [6] and later expanded in [7]. Note that this attack allows the deanonymization of users even when they operate behind network address translators (NATs) or firewalls. More specifically, this technique allows an adversary to distinguish connections and transactions pertaining to different users that are located behind the same NAT.
The main intuition behind this attack is that, since the entry nodes of any given client are not renewed by default (until the client restarts), each client can be safely and uniquely identified by the set of nodes that he or she connects to. In terms of resources, the attack only requires a few running instances of the Bitcoin client (each residing on a different IP) that establish a certain number of connections (following the Bitcoin protocol) and log the incoming transactions [6]. In a specific example offered in [6], an adversary equipped with no more than 50 connections to each Bitcoin server can disclose the sender's IP address for around 11% of all transactions generated in the Bitcoin network. Experimental results have shown that deanonymization rates of up to 60% can be reached if the adversary additionally mounts a small DoS attack on the network (see [6] for more details). The overall
cost of mounting such an attack on the full Bitcoin network is estimated to be around 1,500 EUR per month [6].
The attack evolves in three steps. First, the attacker attempts to disconnect users from Tor or other anonymizing networks that clients may be leveraging to connect to Bitcoin peers. Second, the adversary directly uses the information received from the network to figure out the network's topology (i.e., the entry nodes of each client). Finally, the adversary uses the acquired network knowledge, in combination with the mechanism that Bitcoin uses to forward transactions in the network, to deanonymize transactions. In what follows, we detail these steps.
Phase 1: Disconnecting Clients from Tor: The Tor network [17] comprises a set of relays that are publicly available online and that can be used by any party to send a message while avoiding traffic analysis attacks. To establish a connection to a service or a node through Tor, a user chooses a chain of three Tor relays through which the messages to the target service or node will be routed. The final node in the chain, also known as the Tor exit node, appears to the service as the originator of the connection.
To prevent Bitcoin users from making use of Tor when transacting in Bitcoin, the adversary can exploit Bitcoin's built-in DoS protection. Recall that in Bitcoin, whenever a peer receives a malformed message, it increases the penalty score of the IP address from which the message came and bans that IP for 24 hours when the score reaches 100. To exploit this, the adversary can simply connect to various Bitcoin nodes through Tor and send malformed messages, such that all Tor exit nodes become banned by the majority of Bitcoin nodes. Alternatively, the adversary could spoof the IP address of an exit node and issue malformed messages from that IP, which would result in a 24-hour ban of the exit node.
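The ban logic being exploited here can be sketched as follows; the threshold and ban duration follow the description above, while the per-message penalty and the simplified bookkeeping are illustrative assumptions rather than actual Bitcoin client code:

```python
# Sketch of Bitcoin's reputation-based DoS protection, as exploited in
# Phase 1: a peer bans an IP for 24 hours once its penalty score
# reaches 100.  One sufficiently penalized malformed message per Tor
# exit node suffices for the adversary to get all exit IPs banned.

import time

BAN_THRESHOLD = 100
BAN_SECONDS = 24 * 60 * 60

class Peer:
    def __init__(self):
        self.penalty = {}   # ip -> accumulated penalty score
        self.banned = {}    # ip -> ban expiry timestamp

    def is_banned(self, ip, now=None):
        now = now if now is not None else time.time()
        return self.banned.get(ip, 0) > now

    def misbehaving(self, ip, score):
        """Called whenever a malformed message arrives from `ip`."""
        self.penalty[ip] = self.penalty.get(ip, 0) + score
        if self.penalty[ip] >= BAN_THRESHOLD:
            self.banned[ip] = time.time() + BAN_SECONDS

peer = Peer()
tor_exits = ["10.0.0.%d" % i for i in range(3)]  # stand-ins for exit IPs
for ip in tor_exits:
    peer.misbehaving(ip, 100)  # one maximally penalized malformed message

print(all(peer.is_banned(ip) for ip in tor_exits))  # True
```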
Phase 2: Inferring Network Topology: This phase assumes that the use of Tor has been temporarily deactivated using the strategy described in the previous paragraphs. In this phase, the adversary targets Bitcoin clients that do not accept incoming connections and only exhibit the minimum number (i.e., eight) of outgoing connections to the rest of the network. The goal of the adversary is to learn the eight entry nodes of each targeted Bitcoin client.
The attack unfolds as follows. Whenever a client C establishes a connection to one of its entry nodes, it engages in the address discovery protocol described in Chapter 3 and advertises its external addresses with the highest local scores, among them IPC. If the adversary is already connected to one of those entry nodes, the address
IPC will be forwarded to them with some probability (which depends on the number of the attacker's connections). This suggests that the attacker can shortlist the entry nodes of the target address IPC as follows:
• The attacker connects to a large number of Bitcoin server nodes, say NA, which is assumed to be close to the set NS of all Bitcoin server nodes.
• The attacker logs the messages received from all connected servers and, for each advertised address, say IPC, logs the set of servers NIPC that forwarded it to the attacker's machines.
• The attacker designates NIPC as the entry node subset associated with address IPC.
Note that a server that announces address IPC to the adversary does not necessarily correspond to one of IPC's entry nodes. At the same time, as the client does not simultaneously connect to all of its entry nodes, time intervals between the announcements of the same address by its entry nodes may mislead the attacker into a misconception of the network topology. Assuming that the adversary knows the target address IPC before this address reconnects to the network,1 one can leverage the antiflooding mechanism that Bitcoin has set in place to avoid advertising the same address multiple times [6]. Namely, the proposal in [6] ensures that the adversary advertises IPC sufficiently ahead of IPC's reconnection such that, when IPC reconnects, the probability that its advertisement is sent to the adversary's machines via a non-entry node is small.
Phase 3: Deanonymizing Bitcoin Transactions: After preventing nodes from using Tor, and after shortlisting certain servers as entry nodes for each victim address, deanonymization evolves as follows:
1. The attacker obtains the list NS of Bitcoin servers, assuming that it is regularly refreshed. Here, the adversary first collects the entire list of peers by querying all his neighbors/known peers with a getaddr message.
Given this, the attacker collects the list of advertised addresses and adds to the list of Bitcoin servers NS every listed address that is online and publicly reachable. This can be easily ascertained by the adversary by trying to establish a TCP connection and exchanging version messages.
1 Note that this is a common scenario, given that plenty of clients use the same machine to perform their Bitcoin transactions, or sit behind a NAT.
2. The attacker composes the list C of Bitcoin clients to be deanonymized. Here, the attacker selects the set of client addresses IPC that he or she wants to consider in the deanonymization attack. At this point, the attack is agnostic to how the attacker constructs C. For example, the attacker might randomly select IPs advertised throughout the network, or obtain C as a set of IPs used by a user and retrieved via an out-of-band channel.
3. The attacker retrieves the entry nodes NIPC of each client IPC ∈ C when IPC connects to the network, as described above.
4. The attacker keeps monitoring the traffic from the servers in NIPC and, by mapping transactions to entry nodes, can ultimately map transactions to clients. More specifically, the attacker monitors inv messages with transaction hashes received over all the established connections and, for each received transaction, collects the addresses of the Bitcoin servers that forwarded the associated inv message at each round of transaction advertisement. The attacker finally correlates the sets of servers that advertised each transaction in each round and extracts ⟨entry node, transaction⟩ pairs from the matching sets. Eventually, the adversary creates a list of entries ⟨IPC, IdC, PKC⟩, where IPC is the IP address of a peer or its ISP, IdC distinguishes clients sharing the same IP, and PKC is the address/pseudonym used in a transaction (the hash of a public key).
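The correlation performed in step 4 can be sketched as follows; the entry-node sets, inv logs, and overlap threshold are hypothetical placeholders standing in for the attacker's recorded traffic:

```python
# Sketch of Phase 3: attribute each transaction to the client whose
# shortlisted entry-node set overlaps most with the set of servers
# that forwarded the transaction's inv message.

# Entry nodes shortlisted in Phase 2 for each monitored client IP.
entry_nodes = {
    "client_A": {"s1", "s2", "s3"},
    "client_B": {"s4", "s5", "s6"},
}

# For each transaction hash: the servers that advertised it in the
# first round(s) of propagation, as logged by the attacker's listeners.
inv_log = {
    "tx_1": {"s1", "s2", "s7"},
    "tx_2": {"s5", "s6", "s8"},
}

def deanonymize(entry_nodes, inv_log, min_overlap=2):
    """Map each transaction to the best-matching client, if the
    overlap with that client's entry nodes reaches min_overlap."""
    result = {}
    for tx, servers in inv_log.items():
        best, overlap = None, 0
        for client, entries in entry_nodes.items():
            n = len(servers & entries)
            if n > overlap:
                best, overlap = client, n
        if overlap >= min_overlap:
            result[tx] = best
    return result

print(deanonymize(entry_nodes, inv_log))
# {'tx_1': 'client_A', 'tx_2': 'client_B'}
```

In practice the attacker aggregates such matches over many propagation rounds, since a single round may include forwarders that are not entry nodes.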
5.3 ENHANCING PRIVACY IN BITCOIN
The significant limitations of Bitcoin with respect to user privacy have pushed the Bitcoin community to design mechanisms that ensure privacy-preserving payments in Bitcoin. In this section, we overview and analyze proposals for strengthening privacy in Bitcoin. We start by describing mixing services and then proceed to other crypto-based privacy extensions of Bitcoin. Mixing services achieve user privacy in a holistic way (i.e., at both the network and protocol layers) without degrading payment performance, but require absolute trust in a third-party. In contrast, cryptographic extensions of Bitcoin eliminate the need for trusted third-parties but tend to be taxing in terms of performance.
5.3.1 Mixing Services
Mixing services play the role of trusted mediators between the users and the Bitcoin system, allowing, to some extent, the mixing of coins pertaining to several users and thus effectively preventing the public traceability of coin expenditure in the network. The first mixing services (also called tumblers) mix one's funds with other people's coins with the intention of confusing the trail back to the funds' original source. In traditional financial systems, the equivalent would be moving funds to banks located in countries with strict bank secrecy laws, such as the Cayman Islands, the Bahamas, and Panama.
The operation of a Bitcoin mixer is summarized in Figure 5.1. Users who make use of a mixing service for their payments are usually asked to open an account at that service, which serves as an out-of-band communication channel. Through this channel, the user and the service agree upon a service-owned address to which the user sends his or her payment. Assuming an honest service that has a nonnegligible number of registered users, there are two models in which such a service can fulfill its purpose:
Coin history resetter: Here, the mixer sends back to the user someone else's coins of the same value. Clearly, in this case the user also needs to provide the service with a return address. From this point onward, the user can pay the intended recipient with the fresh coins that he or she received from the service. BitLaundry [18] and Bitcoin Fog [19] are some of the mixers operating under this model. Note that this model does not resist network-layer attacks, since the user eventually makes the payments himself or herself.
Payment mediator: Here, the mixer keeps the funds, circulates them among its other addresses, and finally pays the recipient of the payment as indicated by the user. Blockchain.info [20] and most online wallets operate in this fashion. Unlike the first model, this variant resists network-layer attacks on the user, since the mixer issues the payments on behalf of users.
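The difference between the two models can be sketched with a deliberately simplified, hypothetical mixer (real services add random delays, split amounts, and charge fees):

```python
# Toy sketch contrasting the two mixing models described above.
# Coins and addresses are opaque strings; no real Bitcoin logic is involved.

class Mixer:
    def __init__(self, pool):
        self.pool = list(pool)  # coins deposited by other customers

    def reset_history(self, user_coin, return_address):
        """Coin-history resetter: swap the user's coin for someone else's
        coin of the same value and return it to the user, who then pays
        the payee personally (hence no network-layer protection)."""
        self.pool.append(user_coin)
        fresh = self.pool.pop(0)
        return (return_address, fresh)

    def mediate_payment(self, user_coin, payee_address):
        """Payment mediator: keep the user's coin, circulate it in the
        pool, and pay the payee directly from unrelated coins."""
        self.pool.append(user_coin)
        fresh = self.pool.pop(0)
        return (payee_address, fresh)

m = Mixer(pool=["coin_x", "coin_y"])
# Model 1: the user gets a fresh coin but still issues the final payment.
print(m.reset_history("coin_u", "user_return_addr"))  # ('user_return_addr', 'coin_x')
# Model 2: the mixer pays the payee, hiding the user at the network layer too.
print(m.mediate_payment("coin_v", "payee_addr"))      # ('payee_addr', 'coin_y')
```

In both models the coin received differs from the coin deposited, which is what breaks the public expenditure trail; only the second model also removes the user from the final payment transaction.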
Mixing services usually charge a commission on the payment the user wishes to perform or on the value of the coins exchanged. Though mixing services offer, to a large extent, obfuscation of user payments, their impact on the privacy of user transactions at the protocol layer has not been thoroughly assessed (up to the time of writing). At the same time, mixing services require absolute trust in the service itself, which has the power to steal users' funds.
Figure 5.1 A mixing service acting as a payment mediator.
5.3.2 CoinJoin
CoinJoin [21] is a popular technique that aims to mix payments without the need to trust a single third-party. CoinJoin leverages the ability of recent Bitcoin clients to include in a transaction inputs originating from different (potentially remote) wallets. This allows a number of different entities to form a single transaction that mixes/shuffles all their coins among themselves. By doing so, these entities effectively hide their addresses within the anonymity set comprised of the clients participating in the CoinJoin protocol (see Figure 5.2). As such, CoinJoin transactions have the same number of outputs as inputs; as long as the inputs have identical values, and all output addresses are single-use and receive the same amount, this technique can hide a potential payer among the payers in the input.
Although CoinJoin removes the need to trust a single third-party (e.g., a mixing service), it does require communication and cooperation among multiple parties (i.e., the ones contributing transaction inputs). Note that although the privacy of payments is stronger the longer the list of transaction inputs (and of users creating them), a large number of transaction inputs results in higher computation and communication overhead at transaction creation time (due to the multiple signatures generated and communicated by the different parties) and higher computation overhead at verification time (due to multiple signature
100
Bitcoin and Blockchain Security
Figure 5.2 Main intuition behind CoinJoin.
validation). Such transactions are therefore typically expected to have higher fees than conventional transactions. We additionally point out that the CoinJoin protocol implicitly requires that all participants do not misbehave. That is, if one participant does not correctly complete the signing process or double-spends his or her own inputs, then the entire transaction cannot be confirmed. That is, any single participant can easily mount a denial-of-service attack on the CoinJoin protocol. 5.3.3
Privacy-Preserving Bitcoin Protocol Enhancements
The research community features a number of proposals that enhance user privacy in Bitcoin without modifying the original Bitcoin trust model. Examples include ZeroCoin [8], Extended ZeroCoin [10], and ZeroCash [9]. These constitute the most prominent initiatives that involve the conversion of BTCs to coins that can be spent anonymously. In the sequel, we first elaborate on the trust model assumed by these protocols, along with their security requirements and guarantees. We then detail how these properties are achieved in each of the aforementioned protocols. Note that we will
solely focus on extensions of the Bitcoin protocol; these are orthogonal to the network-level linkability of transaction announcements in the Bitcoin network.

5.3.3.1 Model
In the privacy extensions of Bitcoin discussed in this chapter, we assume that users convert BTCs to untraceable or anonymous coins (zerocoins in [8], extended zerocoins in [10], and zerocash in [9]) through an operation called Mint. Users subsequently spend these anonymous coins in two possible ways:

• The first payment type consists of converting anonymous coins back to BTCs that are sent to a payee's address; this is the reverse operation of Mint and is referred to as Spend. ZeroCoin only supports this type of spending.

• The second type of payment consists of transforming anonymous coins to other anonymous coins that are under the control of the payee, using an operation denoted by Pour. ZeroCash and Extended ZeroCoin support this type of payment.

Given the considered adversarial model, we identify the following security notions for Bitcoin: balance, anonymity, and activity unlinkability. Informally, the balance property requires that an adversary who has legitimately acquired a set of BTCs can spend anonymous coins (to other users) of at most the value of the BTCs he originally owned. The unlinkability property refers to the fact that an adversary should not be able to link two different spending transactions that pertain to the same user. Finally, anonymity refers to the fact that the spending of a coin should not be linkable to a particular conversion transaction (i.e., Mint). Although the spirit of these definitions is the same across all systems presented in this chapter, we provide in the following more formal definitions that are adjusted to the operations performed in each case.

5.3.3.2 Cryptographic Primitives
We start by describing a number of cryptographic building blocks that are essential to the operation of privacy-preserving payment systems.

Commitment schemes: Commitment schemes allow a party (the committer) to commit to a chosen message (or statement) while keeping it hidden from others, with the ability to reveal the committed value later. Commitment schemes consist of three operations:
• params ← Setup(k), a randomized setup algorithm that takes as input a security parameter k and outputs the public parameters used to compute commitments.

• C_r ← Commit_{params,r}(m), where a commitment on message m is computed using randomness r.

• ⟨m, r⟩ ← DeCommit_params(C_r), where the committer opens the commitment to a verifier, who can then validate it.

Commitment schemes are designed to satisfy the following two properties:

• Binding: Given a commitment C_r to a message m, the committer cannot change the value or statement m they have committed to. That is, they cannot construct another message m′ ≠ m such that m′ ← DeCommit_params(C_r).

• Hiding: Given a commitment value C_r to a message m, the attacker should not be able to retrieve m.

For the schemes described below, Pedersen commitments [22] will be used, where the hiding property is guaranteed against an information-theoretic attacker (information-theoretically hiding), and the binding property is guaranteed against a computational attacker (computationally binding).

Zero-Knowledge Proofs of Knowledge and Signatures of Knowledge: Proof-of-knowledge protocols allow a party (the prover) to convince another party (the verifier) that a statement is true (e.g., that a value v is part of a language L). Zero-knowledge proofs of knowledge enable the prover to achieve the same goal without revealing anything to the verifier beyond the fact that the statement holds. Most privacy-preserving payment systems built atop Bitcoin use zero-knowledge proofs of knowledge (ZKPoK), and in particular protocols where the prover proves knowledge of a committed value v without leaking any information on v. In this chapter, we leverage the ZKPoK schemes of Schnorr [23] and its extensions [24–27], converted to noninteractive signatures of knowledge [28] using the Fiat-Shamir heuristic [27]. In signature-of-knowledge schemes, knowledge of the secret committed value v is used as the signing key.
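The Pedersen commitment mentioned above can be sketched as follows. The parameters are toy values for illustration; a real deployment uses a large, verifiably generated group in which log_g(h) is unknown to everyone:

```python
import secrets

# Toy group parameters: p = 2q + 1 with q prime; g and h generate the
# order-q quadratic-residue subgroup, and nobody should know log_g(h).
P = 2579          # safe prime (p = 2*1289 + 1)
Q = 1289          # subgroup order
G = 4             # 2^2 mod p: squaring maps into the QR subgroup
H = 9             # 3^2 mod p

def commit(m, r=None):
    """C = g^m * h^r mod p: hiding via random r, binding via DL hardness."""
    if r is None:
        r = secrets.randbelow(Q)
    return (pow(G, m, P) * pow(H, r, P)) % P, r

def decommit(c, m, r):
    """Verifier's side of DeCommit: recompute C from the opening (m, r)."""
    return c == (pow(G, m, P) * pow(H, r, P)) % P
```

A committer publishes `c`, keeps `(m, r)`, and later opens by revealing both; any verifier can then recompute the commitment.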
The unforgeability property of signature-of-knowledge schemes implies that no one but the party that has knowledge of v is able to provide a valid signature
on any message m that the signature verification algorithm accepts. In the following, we will use the notation of Camenisch and Stadler [25,28,29] when referring to these proofs. Namely,

NIZKPoK{(α, β) : h = g^α ∧ c = g^β}

denotes a noninteractive zero-knowledge proof of knowledge of the elements α and β that satisfy both h = g^α and c = g^β. All values not enclosed in parentheses are known to the verifier. Similarly, the extension

ZKSoK[m]{(α, β) : h = g^α ∧ c = g^β}

indicates a signature of knowledge of α and β on message m.

Accumulators: Cryptographic accumulators basically constitute one-way membership functions; they can be used to answer a query on whether a given candidate belongs to a set, without revealing any meaningful information about the other set members. For the rest of the chapter, we will be referring to the accumulator Acc introduced by Camenisch and Lysyanskaya [30], which supports the following operations:

• {N, u} ← ACC.Setup(k). On input a security parameter k, sample primes p and q (with polynomial dependence on the security parameter), compute the RSA modulus N = pq, and choose a value u ∈ QR_N, u ≠ 1. Finally, Setup outputs (N, u), which will be denoted by params.

• {Acc} ← ACC.Accumulate(params, PN). On input params and a set of prime numbers PN = {p_1, ..., p_n | p_i ∈ [A, B]}, where A, B can be chosen with arbitrary polynomial dependence on k as long as 2 < A and B < A^2, ACC.Accumulate computes the accumulator Acc = u^{p_1 p_2 ··· p_n} (mod N).

• ω ← ACC.GenWitness(params, v, PN). On input params = (N, u), a set of prime numbers PN as described above, and a value v ∈ PN, the witness ω is the accumulation of all the values in PN besides v (i.e., ω = ACC.Accumulate(params, PN \ {v})).

• {0, 1} ← ACC.Verify(params, Acc, v, ω). On input params = (N, u), an element v, and witness ω, ACC.Verify computes Acc′ ← ω^v (mod N) and outputs 1 if and only if Acc′ = Acc, v is prime, and v ∈ [A, B].
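A toy rendition of these accumulator operations follows. The parameters are tiny for illustration; a real deployment uses a large RSA modulus whose factorization is discarded, and Verify would additionally check that v is prime and lies in [A, B]:

```python
from math import prod

# Toy RSA modulus N = p*q; in practice p and q are large and then discarded.
N = 499 * 547     # = 272953
U = 4             # base u, a quadratic residue mod N (u = 2^2)

def accumulate(primes):
    """Acc = u^(p1*p2*...*pn) mod N."""
    return pow(U, prod(primes), N)

def gen_witness(v, primes):
    """Witness for v: accumulate everything in the set except v."""
    return pow(U, prod(p for p in primes if p != v), N)

def verify(acc, v, w):
    """Accept iff w^v mod N equals the accumulator value."""
    return pow(w, v, N) == acc
```

Note that adding a new member only requires raising Acc to the new prime, which is what lets Bitcoin peers maintain the accumulator locally as new coins are confirmed.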
Accumulators in [30] satisfy the strong collision-resistance property if the strong RSA assumption holds. Informally, this ensures that no computational
adversary can produce a pair (v, ω) such that v ∉ PN and yet ACC.Verify is satisfied. In fact, in [30], Camenisch and Lysyanskaya describe an efficient ZKPoK scheme that proves that a committed value is contained in an accumulator. We refer to the proof in [30] using the following notation:

NIZKPoK{(ν, ω) : ACC.Verify((N, u), Acc, ν, ω) = 1}.

As described before, the Fiat-Shamir heuristic [27] can be used to convert this scheme to a noninteractive signature of knowledge.

ZK-SNARKs: A ZK-SNARK [31] is a special kind of succinct noninteractive argument of knowledge (SNARK). The latter is a cryptographic primitive that enables a prover to argue that a certain statement satisfies an arithmetic circuit. Similar to signatures of knowledge, ZK-SNARKs are publicly verifiable and ensure zero-knowledge properties. In what follows, we informally define ZK-SNARKs for arithmetic circuit satisfiability. We refer the reader to [31] for a formal definition.

Let F be a field and C an arithmetic circuit with bilinear gates. We say that C is an F-arithmetic circuit when its inputs and outputs are elements residing in F. To capture nondeterminism, nondeterministic circuits with input x ∈ F^n are augmented with an auxiliary input a ∈ F^h, also known as a witness. The arithmetic circuit satisfiability problem for an F-arithmetic circuit C : F^n × F^h → F^l relates to finding input values ⟨x, a⟩ ∈ F^n × F^h such that the gates of C output 0; it is represented by the relation R_C = {(x, a) ∈ F^n × F^h : C(x, a) = 0^l} and its language L_C = {x ∈ F^n : ∃ a ∈ F^h : C(x, a) = 0^l}.

Assuming a field F, a (publicly verifiable, preprocessing) ZK-SNARK for F-arithmetic circuit satisfiability consists of the following algorithms:

• ⟨pk, vk⟩ ← KeyGen(λ, C), which on input a security parameter λ and circuit C generates a proving key pk and a verification key vk, which can henceforth be considered the public parameters of the system.
• π ← Prove(pk, x, a), where, on input a proving key pk, an input x, and a witness a with (x, a) ∈ R_C, the prover outputs a noninteractive proof π that input x is part of C's satisfiability language L_C.
• accept/reject ← Verify(vk, x, π), where the verifier, using verification key vk and proof π for input x, confirms that x ∈ L_C (accept) or outputs an error message (reject).

Informally, ZK-SNARKs satisfy the following properties:

Completeness Here, it is required that for any statement that is a member of the circuit language L_C, an honest prover running Prove should be able to convince the verifier; that is, Verify should not output an error message.

Succinctness This property ensures that Prove and Verify run in a number of steps polynomial in the security parameter λ.

Proof of knowledge This property ensures that the verifier accepts the proof output of a computationally bounded prover running Prove only if the prover knows a valid witness for the provided input.

Perfect zero knowledge Based on this property, no information on the secret statements or witnesses is revealed to the verifier. More formally, zero knowledge requires that for a given Verify transcript there exists a simulator simulating KeyGen and Prove whose outputs cannot be distinguished from those of honest executions of KeyGen and Prove.

5.3.3.3 ZeroCoin
ZeroCoin [8] is one of the first cryptographic extensions of Bitcoin that aims at enhancing its privacy. It was introduced by Miers et al. in [8] to remedy the fact that Bitcoin enables the public tracing of coin expenditure in the network. ZeroCoin leverages ZKPoK protocols and cryptographic accumulators to implement a cryptographic mixer. More specifically, ZeroCoin resets the history of a BTC by transforming it into a ZeroCoin coin, referred to in the sequel as ZC. During this conversion, ZCs are added to a cryptographic coin mixer (essentially an accumulator) that is publicly available. The resulting ZCs can be proven in zero knowledge to have originated from valid and unspent BTCs (i.e., to be part of the unspent subset of coins in the mixer). In this way, an entity is prevented from linking a transaction with the BTC (and the corresponding address) that generated the ZC used therein. In other words, ZeroCoin ensures that the origin of a ZC is hidden among all BTCs that were converted to ZCs. In addition, ZeroCoin preserves the payment security guarantees of Bitcoin (e.g., double-spending resistance). That is, no party can spend more BTCs
or ZCs than the ones he or she possesses.

Protocol specification: ZeroCoin consists of the following procedures: Setup, where the system parameters are set; the Mint operation, where a Bitcoin (BTC) is converted to a ZeroCoin (ZC); the Spend operation, where a ZC is spent/deposited to a Bitcoin address and automatically converted to a fraction of a BTC; and the Verify operation, through which Bitcoin peers can verify the validity of a ZC transaction and include it in a block. More specifically:

• params ← ZC.Setup(1^k), where k is the security parameter. params include an RSA modulus, a group G of order o, and generators g, h with ⟨g⟩ = ⟨h⟩ = G.

• {pk_zc, sk_zc} ← ZC.Mint(params, btc), through which a BTC btc is converted to a fixed-value ZeroCoin zc with secret information sk_zc and public information pk_zc. One can see this operation as a conventional Bitcoin transaction, where the converted btc is the input and the output is pk_zc, which uniquely defines the generated zc. Thus, the miners check again whether btc is valid (i.e., whether it is owned by the address that has signed the transaction and has not been spent before by that address). If btc is valid, then the pk_zc included in the transaction is included in the next block. All the zc public information included in blocks is added to an accumulator Acc.² The public pk_zc included in the transaction is a Pedersen commitment to a serial number s of the form

pk_zc = Commit^{PED}_r(s) = g^s · h^r,

where r ←_R G. The secret information related to zc is set to sk_zc = (s, r) and is partially revealed in the ZC.Spend operation.

• {π, s} ← ZC.Spend(params, sk_zc, pk_zc, Acc), which is performed by the user who wishes to spend a zc with public information pk_zc. To do so, the user reveals s and computes a ZKPoK π that s corresponds to a zc that has been confirmed in a block (i.e., is part of the accumulator). As such, ZeroCoin can be integrated in existing Bitcoin transactions without modifications: (s, π) constitute the inputs to a standard Bitcoin transaction, which takes as output a regular Bitcoin address. This special transaction contains a signature of knowledge produced by π, and is released in the network to be confirmed in a block.

• {0, 1} ← ZC.Verify(params, s, π, Acc), which can be executed by every peer in the Bitcoin network to verify that the signature of knowledge deriving from ZKPoK π is valid and that the serial number s has not been used before (and thus that the corresponding zc has not been spent before). Verified pairs (π, s) are included by the Bitcoin miners in the next generated block, to establish their validity to all Bitcoin peers.

² Note that the accumulator value is computed by the peers locally. Given the public parameters of the accumulator and the confirmed ZCs, each peer can locally compute the accumulator value at any point in the blockchain.
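The role of the revealed serial number s in preventing double-spending can be sketched as follows. This is a toy ledger view: the zero-knowledge proof verification is abstracted into a boolean, and the names are hypothetical:

```python
class ZeroCoinLedger:
    """Toy view of ZeroCoin's double-spend protection (illustration only)."""

    def __init__(self):
        self.minted = set()   # confirmed Mint commitments pk_zc ("accumulator")
        self.spent = set()    # serial numbers revealed by past Spend transactions

    def mint(self, pk_zc):
        """Record the commitment of a confirmed Mint transaction."""
        self.minted.add(pk_zc)

    def spend(self, serial, proof_ok):
        """Accept a Spend iff the (abstracted) ZK proof verifies and the
        serial number has never been revealed before."""
        if not proof_ok or serial in self.spent:
            return False
        self.spent.add(serial)
        return True
```

The proof ties the serial to *some* confirmed commitment without revealing which one; the serial set then makes any second spend of the same coin publicly detectable.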
Limitations: In ZeroCoin, each ZC corresponds to exactly one (or a predefined number of) BTC; transactions whose values are larger than one BTC therefore result in several back-to-back ZeroCoin transactions. This entails considerable overhead in propagating the corresponding transactions in the network and including them in valid blocks. Furthermore, while ZeroCoin prevents the traceability of constant-value coins, it does not conceal transaction amounts; multiple ZC spendings for the same payment are likely to be linked in time, and the total amount per payment can be retrieved (since each ZC corresponds to a single BTC). ZeroCoin also does not hide the total number of BTCs redeemed by Bitcoin addresses when the owners of these addresses transform their ZCs back to BTCs.

As mentioned earlier, recent studies [3] have shown that tracing coin expenditure is not the only source of information leakage in Bitcoin. More specifically, Androulaki et al. have shown in [3] that behavior-based clustering algorithms can be used to acquire considerable information about user profiles in Bitcoin. These algorithms mainly leverage user spending patterns, such as transaction amounts, transaction times, and so on, in order to profile users. Clearly, ZeroCoin does not prevent such analysis, since transaction times, transaction amounts, and address balances can still be derived from the blockchain.

5.3.4 Extending ZeroCoin: EZC and ZeroCash
In what follows, we present a number of proposals that address some of the limitations of ZeroCoin. In particular, we present EZC and ZeroCash, two independently designed systems that support transactions in which one or more anonymous coins can be spent in the form of fresh anonymous coins. In these proposals, the value of the coins spent is hidden from the rest of the network, and the coin recipient has complete control over the spent coins.
Throughout the rest of this chapter, we first present Extended ZeroCoin, which, as its name suggests, builds on top of ZeroCoin. We then move on to ZeroCash, which constitutes a ZK-SNARK-based implementation of decentralized anonymous payment systems [9]. Besides overviewing the operations of EZC and ZeroCash, we additionally compare these proposals with ZeroCoin.

5.3.4.1 Extended ZeroCoin
Extended ZeroCoin, dubbed EZC, is an enhanced variant of ZeroCoin that enables the expenditure of transaction amounts exceeding 1 BTC while concealing these amounts from the network. Similar to ZeroCoin, EZC leverages accumulators and ZKPoK protocols to construct multivalued ZCs. The resulting coins can either be spent as regular Bitcoins or be spent directly in the network, without the need to transform them back to equivalent-value BTCs. Additionally, in EZC, transaction amounts need only be revealed to the payment sender and recipient, and not to the rest of the participants of the system. It is noteworthy that, since EZC coins (henceforth referred to as eZCs) do not have to be exchanged back to BTCs, this scheme also prevents the leakage of the balances of those addresses that opt out of exchanging their coins to BTCs. Figure 5.3 compares the main operations of EZC to ZC.

Similar to ZeroCoin, a fundamental assumption for the security of EZC is that the public parameters of the system are generated honestly (i.e., in a way that complies with the security definition of the primitives). As mentioned earlier, EZC supports a Mint operation, denoted by EZC.Mint, where arbitrary-valued BTCs are converted to an anonymous EZC coin (eZC); Spend, denoted by EZC.Spend, where eZCs are spent in the form of BTCs; and Pour, denoted by EZC.Pour, where eZCs are spent in the form of freshly generated eZCs. Similar to conventional Bitcoin payments, the validity of each transaction is verified by the network peers, who subsequently work toward confirming valid transactions in blocks. For simplicity, we will refer to a transaction resulting from operation O as an O transaction (e.g., a Mint transaction refers to a transaction output by the Mint procedure).

Overview of EZC: During setup, the system runs the Setup of a dynamic accumulator scheme (i.e., Acc.Setup). Let this accumulator be denoted by Acc_EZC.
Acc_EZC includes all properly minted and confirmed eZCs. In EZC, the EZC.Mint transaction is constructed in a similar way to that of ZeroCoin, and consists of an input (in BTCs) and an output that includes
Figure 5.3 Comparison between EZC and ZC. Each ZC corresponds to a single BTC and can only be spent in the form of BTCs. EZC, on the other hand, enables the construction of a multivalued eZC, which can be spent in eZCs without the need to transform it back to BTCs.
information related to the created eZCs. However, unlike ZeroCoin, the coins generated with EZC.Mint can accommodate any payment value val (e.g., in BTCs). The outputs of this transaction consist of a commitment cmt_r(ser, val) to val and to a random serial number ser, and of a proof of the commitment's correctness. As we show later, val is revealed by the peer who runs EZC.Mint to all the peers in the network, but ser is kept private until the minted eZC is spent. The constructed transaction is signed using the Bitcoin private keys corresponding to the input address(es). The correctness of EZC.Mint transactions is verified by the rest of the peers in the network, and correct EZC.Mint transactions are included in the longest blockchain. After confirmation of an EZC.Mint transaction, the commitment cmt_r(ser, val) is added (using the Acc.Add procedure) as a member of Acc_EZC.

To spend an eZC in the form of BTCs of value val, the eZC owner, who knows ser and the opening of cmt_r(ser, val), constructs a proof π that ser indeed
corresponds to a commitment to a value val that is a member of Acc_EZC. He or she then engages in EZC.Spend by constructing a transaction signed with a signature of knowledge of π. The resulting transaction has a similar structure to a Bitcoin transaction, where π is used as input and its output can distribute val to one or more Bitcoin addresses. Note that for peers to be able to verify the correctness of such a transaction, the serial ser must be revealed; nevertheless, no entity is able to link the EZC.Spend transaction to a particular EZC.Mint transaction, and thus to the BTCs that created it.

On the other hand, to spend an eZC in the form of anonymous eZCs of the same or smaller value val′, the payer reveals ser and engages in a similar set of operations as in EZC.Spend to construct a proof π that the coins involved in the transaction are properly created. To accommodate the creation of the recipient's eZC, the two parties construct a commitment cmt_r′(ser′, val′) to a freshly generated serial ser′ and to the payment amount val′. π contains a proof that the payment amount val′ does not exceed the value val of the payer's coin. Finally, π is used to produce a signature of knowledge on the output commitment cmt_r′(ser′, val′) within an EZC.Pour transaction, which is released to the network together with ser. As soon as the latter is confirmed, cmt_r′(ser′, val′) is added by the peers to Acc_EZC. Note that val′ is kept private between the payer and the payment recipient, while ser′ and the opening of cmt_r′(ser′, val′) are only known to the payment recipient.

Protocol specification: In what follows, we detail the operations in EZC. In the sequel, we denote the public information associated with an eZC by pub_eZC and the corresponding private information by sec_eZC.

• params ← EZC.Setup(λ), the setup of EZC, which runs with input a security parameter λ and produces the system parameters params. More specifically, EZC.Setup runs Acc_EZC.Setup(λ) to obtain (N, u), and generates primes p, q such that p = 2^f · q + 1, f ≥ 1. It then picks g, h, and w such that G = ⟨g⟩ = ⟨h⟩ = ⟨w⟩ ⊂ Z*_q. Finally, it sets params = {N, u, p, q, g, h, w}.

• {π, pub_eZC, u(sec_eZC)} ← EZC.Mint(I_BTC, params), which performs the conversion of one or more BTCs to an eZC. This operation is executed by the owner u of a set of BTCs I_BTC that are converted into an eZC with public information pub_eZC and private information sec_eZC. Here, u picks ser, r ←_R G, where ser is the serial number of the generated eZC, computes pub_eZC = cmt_r(ser, val), and a ZKPoK π asserting that pub_eZC is correctly formed:

NIZKPoK{(α, β) : pub_eZC = g^α · h^val · w^β}.

Note that the EZC.Mint transaction is constructed similarly to a standard Bitcoin transaction, where the BTCs are used as input and ⟨pub_eZC, π⟩ is used as output. Subsequently, peers verify that pub_eZC is correctly formed by running the ZKPoK verification protocol for π and by confirming that the input BTCs were not spent in the past, as is currently done in Bitcoin. If the transaction is deemed valid by the majority of the computation power of the network, pub_eZC is included in the blockchain, and pub_eZC is considered a valid member of the public accumulator Acc_EZC. User u's private output is sec_eZC = ⟨ser, r⟩, while {sec_eZC, val} is stored in u's local memory.

• O_BTC ← EZC.Spend[params, ser_S, u_S(sec_eZCS, pub_eZCS)], which is performed to spend eZCs back to BTCs. This operation results in a transaction that uses (sec_eZCS, pub_eZCS) as input and spends them in BTCs of value val to a set of Bitcoin addresses. Here, the sender u_S first computes the public accumulator value Acc_EZC locally by running Acc_EZC.Accumulate(N, u, {pub_eZC}_{∀ ∈ pubLog}) for the set of EZC commitments that have appeared in the longest blockchain so far. The sender retrieves sec_eZCS from his or her local memory and runs

Acc_EZC.GenWitness(params, {pub_eZC}_{∀ ∈ pubLog}, pub_eZCS)

to compute the witness w_S for pub_eZCS's membership in Acc_EZC. Furthermore, u_S computes a ZKPoK π to show that ser_S corresponds to an eZC whose public information (here, pub_eZCS) is part of Acc_EZC, and that it corresponds to a value val. Subsequently, he or she converts π to a signature of knowledge on O_BTC:

ZKSoK[O_BTC]{(α, β, γ) : α = g^{ser_S} · h^val · w^β ∧ Acc.Verify(N, u, Acc_EZC, α, γ) = 1}.

Finally, u_S announces the corresponding signature within a transaction to the EZC network, which, after confirming the transaction's correctness, includes it in a block.³

³ Note that if fees are to be supported, the fee amount should be explicitly stated within the message in the signature (transaction).
• {⟨pub′_eZCS, pub_eZCR, u_S(sec′_eZCS), u_R(sec_eZCR)⟩ / ⊥} ← EZC.Pour(params, ser_S, Acc_EZC, u_S(val_R, val_S, r_S, ser′_S, r′_S), u_R(val_R, ser_R, r_R)), where an eZC is spent to one or more other eZCs. This is an interactive operation between a payment sender u_S and a payment recipient u_R. It takes as input the information associated with an eZC of u_S (e.g., sec_eZCS and pub_eZCS) and spends it in the form of a new eZC that belongs to u_R, denoted eZC_R; if change amounts are to be incorporated, this operation additionally outputs an eZC that belongs to u_S, denoted eZC′_S. Here, u_R's private input sec_eZCR consists of a serial number ser_R for his or her new coin and a random number r_R ∈ Z_{(p−1)/2} that will be used in the new coin's commitment. Assuming that ⟨ser_S, r_S, val_S⟩ is the entry for eZC_S in u_S's local memory, u_S announces the serial number ser_S of eZC_S and privately contributes r_S and val_S to compute the eZC_S validity proof as in EZC.Spend. Finally, u_S's private input includes the values ⟨ser′_S, r′_S⟩ used for eZC′_S's construction. We emphasize that sec_eZCR should be kept private even toward u_S, so that the latter is not able to trace further spendings of eZC_R. In more detail, the payment sender u_S and recipient u_R engage in the following operations:

1. u_S proves the validity of eZC_S. This is achieved as follows. u_S runs ACC.Accumulate(N, u, {pub_eZC}_{∀ ∈ pubLog}) for the set of eZC commitments that appear in the EZC blockchain to compute the current public accumulator value Acc_EZC. Subsequently, u_S runs ACC.GenWitness(params, pub_eZCS, sec_eZCS, Acc_EZC) to extract a witness w_S that eZC_S has been confirmed in a block. Then, u_S computes π as described in the previous section; that is:

NIZKPoK{(α, β, γ, δ) : α = g^{ser_S} · h^β · w^γ ∧ ACC.Verify(N, u, Acc_EZC, α, δ) = 1}.

2. u_S mints eZC′_S. This is achieved as follows. For the transaction output side, u_S picks ser′_S ←_R Z_{p−1} and r′_S ←_R Z_{p−1}, and computes the public information associated with eZC′_S as pub′_eZCS = g^{ser′_S} · h^{val′_S} · w^{r′_S}, where val′_S = val_S − val_R is the change value. Finally, u_S updates π to include a proof that pub′_eZCS is properly formed:

NIZKPoK{(α, β, γ, δ, ε, ζ, η) : α = g^{ser_S} · h^β · w^γ ∧ ACC.Verify(N, u, Acc_EZC, α, δ) = 1 ∧ pub′_eZCS = g^ε · h^ζ · w^η ∧ ζ ∈ Z_{(p−1)/2}}.

3. u_S needs to enable u_R to privately mint the payment coin eZC_R. For that purpose, u_S picks r_SR ←_R Z_{p−1}, computes the auxiliary token ℓ_SR = h^{val_R} · w^{r_SR}, updates the ZKPoK π so as to include a proof of correctness of ℓ_SR, and sends it to u_R along with r_SR:

NIZKPoK{(α, β, γ, δ, ε, ζ, η, θ) : α = g^{ser_S} · h^β · w^γ ∧ ACC.Verify(N, u, Acc_EZC, α, δ) = 1 ∧ pub′_eZCS = g^ε · h^ζ · w^η ∧ ζ ∈ Z_{(p−1)/2} ∧ ℓ_SR · pub′_eZCS = g^ε · h^β · w^θ}.

4. u_R mints eZC_R: u_R picks ser_R ←_R Z_{p−1} and r_R ←_R Z_{p−1} and computes pub_eZCR = g^{ser_R} · h^{val_R} · w^{r_R}; he or she extends π to include a proof of correctness of pub_eZCR and converts it into a ZKSoK to sign ser_S, pub_eZCR, and pub′_eZCS, resulting in another transaction:

ZKSoK[ser_S, pub_eZCR]{(α, β, γ, δ, ε, ζ, η, θ, ι, κ, µ, ρ) : α = g^{ser_S} · h^β · w^γ ∧ ACC.Verify(N, u, Acc_EZC, α, δ) = 1 ∧ pub′_eZCS = g^ε · h^ζ · w^η ∧ ℓ_SR · pub′_eZCS = g^ε · h^β · w^θ ∧ ℓ_SR = h^ι · w^κ ∧ pub_eZCR = g^µ · h^ι · w^ρ ∧ ζ ∈ Z_{(p−1)/2} ∧ ι ∈ Z_{(p−1)/2}}.

The resulting transaction is announced to the network of EZC peers, who upon verification of its correctness work toward its inclusion in a block. After such a transaction is included in a block, pub_eZCR and pub′_eZCS are considered members of Acc_EZC.

The security of Extended ZeroCoin rests upon the standard security assumptions of the underlying signature-of-knowledge and dynamic accumulator schemes. It is easy to see that a spending operation in Extended ZeroCoin costs more in terms of computation than a ZeroCoin spending. However, note that in ZeroCoin direct spending from ZeroCoins to ZeroCoins is not possible, and one would
need to convert a ZeroCoin to a Bitcoin and the latter back to a ZeroCoin in order to always keep one's coin holdings secret. If one takes into account that a ZeroCoin has a fixed value, so that a single payment worth a few Bitcoins requires multiple spendings, the overall cost of maintaining anonymous payments in ZeroCoin may end up being comparable to that of Extended ZeroCoin.
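The value relation that EZC.Pour proves, namely that the auxiliary token ℓ_SR combined with the change commitment recommits to the full spent value val_S, can be checked numerically with toy parameters (illustrative group values; real deployments use large parameters with unknown discrete logs between g, h, and w):

```python
# Toy subgroup parameters (p = 2q + 1 with q prime; g, h, w are squares,
# so they all live in the order-q quadratic-residue subgroup).
P, Q = 2579, 1289
g, h, w = 4, 9, 25

val_S, val_R = 10, 7          # spent value and payment value
val_change = val_S - val_R    # change kept by the sender
ser_new, r_new, r_SR = 111, 222, 333

# Auxiliary token l_SR = h^val_R * w^r_SR and change commitment
# pub' = g^ser' * h^val_change * w^r'.
token = (pow(h, val_R, P) * pow(w, r_SR, P)) % P
change = (pow(g, ser_new, P) * pow(h, val_change, P) * pow(w, r_new, P)) % P
combined = (token * change) % P

# Their product is a commitment to the FULL value val_S under
# randomness r_SR + r_new: exponents simply add in the group.
expected = (pow(g, ser_new, P) * pow(h, val_S, P)
            * pow(w, (r_SR + r_new) % Q, P)) % P
assert combined == expected
```

This additive-in-the-exponent structure is what lets the proof relate the recipient's payment amount and the sender's change back to the value committed in the spent coin, without revealing the amounts.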
5.3.4.2 ZeroCash
ZeroCash offers a more practical implementation of Extended ZeroCoin's capabilities. Namely, ZeroCash implements a decentralized anonymous payment (DAP) system by leveraging ZK-SNARKs (see Section 5.3.3.2 for more detail). In what follows, we offer an informal description of the DAP model, outline its security provisions, and then provide an intuition of how ZeroCash leverages ZK-SNARKs to implement it.

Decentralized Anonymous Payments: A DAP system leverages a ledger to announce the messages associated with its payments. The ledger system is referred to as Basesystem, while the associated currency is called Basecoin. Similar to Extended ZeroCoin, each coin coin in a DAP system is strongly associated with the coin's serial number ser_coin, the coin value val_coin (i.e., the denomination of the value of that coin in Basecoins), and a commitment value cmt_coin (i.e., a string uniquely associated with coin's creation). For example, in the case of EZC, this corresponds to the string that appears on the ledger once an eZC is minted. In addition, DAP systems consider a fixed association of a coin with an address key pair ⟨apk_coin, ask_coin⟩. Clearly, the owner of coin is assumed to be in possession of ask_coin.

DAP systems consider two types of transactions: Mint and Pour. Both transactions rely on cryptographic operations in order to enable the creation of new DAP coins from Basecoins (Mint) or the transfer of DAP coins to other DAP coins (Pour). More specifically, a Mint transaction of a DAP coin coin, denoted by tx_mint, is the tuple ⟨cmt_coin, val_coin⟩. Similarly, a Pour transaction, denoted by tx_pour, represents the pouring of two existing DAP coins

coin_i^old = ⟨ser_{coin_i^old}, val_{coin_i^old}, cmt_{coin_i^old}⟩, i = 0, 1,
Privacy in Bitcoin
115
to two fresh ones $coin_j^{new}$, $j = 0, 1$, with associated commitments $cmt_{coin_j^{new}}$, and has the following form: $\langle root, ser_{coin_0^{old}}, ser_{coin_1^{old}}, cmt_{coin_0^{new}}, cmt_{coin_1^{new}}, val_{pub}, info, * \rangle$. Here, $info$ includes information about the coin recipient, and $root$ represents the current status of the underlying ledger. For example, $root$ can correspond to the root of the Merkle tree composed of the commitments of DAP coins that have been generated and included in Basesystem blocks so far.

Within a DAP system, one needs to reference the state of the DAP system itself and the underlying system ledger. For example, a Pour transaction takes as input a representation of the set of DAP coins that have so far been created (among others). The DAP system leverages the series of (ordered) blocks that have so far been advertised within the underlying Basesystem ledger and considers the Merkle tree over the set of DAP coin commitments advertised in DAP Mint or Pour transactions in the Baseledger. We denote this by $root^{cmt}_n$, assuming a Baseledger of length $n$.

Decentralized anonymous payment systems assume the following operations:

• $params \leftarrow Setup(\lambda)$: This algorithm takes as input the security parameter $\lambda$ and outputs the system public parameters. Note that it is imperative that this algorithm is initially executed by a trusted party. This party would no longer be needed as soon as the generated public parameters are announced to all entities in the system.

• $\langle apk_{coin}, ask_{coin} \rangle \leftarrow CreateAddress(params)$: CreateAddress generates a new address key pair $\langle apk_{coin}, ask_{coin} \rangle$. Similar to Bitcoin addresses, the secret part $ask_{coin}$ is maintained privately by the address owner and used by the latter to redeem any coins owned by the address public key $apk_{coin}$.

• $\{tx_{Mint}, coin\} \leftarrow Mint(params, val_{coin}, apk_{coin})$: A new DAP coin is generated with value $val_{coin}$ and is owned by the address $apk_{coin}$, assuming the system parameters $params$. The Mint operation results in the creation of a new coin $coin$ and the associated transaction.

• $\{tx_{Pour}, coin_0^{new}, coin_1^{new}\} \leftarrow Pour(params, root_{cmt}, coin_0^{old}, coin_1^{old}, \{cmt_{coin_i^{new}}, val_{coin_i^{new}}, \pi_{coin_i^{old}}\}_{i=0,1}, info)$: Here, two existing ZeroCash coins $coin_0^{old}$ and $coin_1^{old}$ are spent into two freshly generated coins $coin_0^{new}$ and $coin_1^{new}$ of values $val_{coin_0^{new}}$ and $val_{coin_1^{new}}$, respectively. Additionally, this operation takes as input $root_{cmt}$, which constitutes the current representation of the ledger state, and $\pi_{coin_i^{old}}$, for $i = 0, 1$, which prove that $coin_0^{old}$ and $coin_1^{old}$ are included in $root_{cmt}$.
116
Bitcoin and Blockchain Security
For example, $root_{cmt}$ could constitute the root of the Merkle tree consisting of all DAP coin commitments that have been added to the ledger up to the point where $coin_0^{new}$ and $coin_1^{new}$ are generated. In this case, $\{\pi_{coin_i^{old}}\}_{i=0,1}$ would constitute the corresponding sibling (authentication) paths of $cmt_{coin_0^{old}}$ and $cmt_{coin_1^{old}}$ with respect to the root of the Merkle tree. The outputs of this operation are $coin_0^{new}$, $coin_1^{new}$, and the associated transaction. In short, the input values $val_{coin_0^{new}}$ and $val_{coin_1^{new}}$ correspond to the values of the two freshly generated coins that are to be deposited in the two input address public keys $apk_{coin_0^{new}}$ and $apk_{coin_1^{new}}$, respectively. $val_{pub}$ specifies the amount to be publicly spent (i.e., allocated to transaction fees). Overall, it should hold that
$$val_{pub} + val_{coin_0^{new}} + val_{coin_1^{new}} = val_{coin_0^{old}} + val_{coin_1^{old}},$$
where $val_{coin_0^{old}}$ and $val_{coin_1^{old}}$ denote the values corresponding to $coin_0^{old}$ and $coin_1^{old}$, respectively.

• $\top/\bot \leftarrow VerifyTransaction(params, L_{ZCash}, tx)$: This operation verifies whether a transaction $tx$ is correctly formed and does not conflict with the current version of the ledger $L_{ZCash}$.

Security Analysis. The security of a DAP system requires that three properties are respected: ledger indistinguishability, transaction nonmalleability, and balance. Each of these properties is formalized as a game between an adversary $A$ and a challenger $C$. In each game, the transaction activity of honest parties is determined by the adversary. To enable this, the behavior of honest parties is realized via an oracle $O$ that maintains a ledger $L$ and provides a DAP interface (i.e., accepting queries) for executing any of the algorithms below on behalf of honest parties:
{CreateAddress, Mint, Pour, VerifyTransaction, Receive}. To control the behavior of honest parties, the adversary constructs a query mapped to the transaction that it wants an honest party to perform and passes this query to $C$, which (after sanity checks) proxies the query to $O$. For each query that requests an honest party to perform an action, $A$ specifies the identities of previous transactions and the input values, and learns the resulting transaction, but not any of the secrets or trapdoors involved in producing that transaction. The oracle $O$ also
provides a special query, Insert, that allows $A$ to directly add arbitrary transactions to the ledger $L$. We now proceed to describe the security provisions of ZeroCash with respect to each of these properties.

• Ledger Indistinguishability. This is a property defined for the first time in the ZeroCash paper [9]. This property aims to capture the requirement that a computationally bounded adversary who is given access to the ledger should not be able to derive more information about the content of the ledger than what is publicly available for that ledger (i.e., the number of transactions in the ledger, the addresses participating in Mint transactions, and so on). In a more abstract way, ledger indistinguishability is, in the realm of privacy-preserving Bitcoin (payment) transactions, the equivalent of message indistinguishability in the realm of encrypted messages. Recall that the standard message indistinguishability game takes place between a challenger, who sets up the encryption scheme and generates key pairs, and an attacker, who tries to break the security of the encryption scheme leveraging these keys. At the challenge phase, the adversary specifies two messages of the same length and asks the challenger to encrypt exactly one of them using the encryption scheme whose security is to be proven. The challenger is assumed to pick one of the two messages at random, encrypt it, and send the corresponding ciphertext to the adversary. The latter is asked to guess which message the ciphertext corresponds to. The challenged encryption scheme is said to offer message indistinguishability as long as the attacker cannot provide the correct answer with probability better than $\frac{1}{2}$. Ledger indistinguishability evolves in a similar way. It involves a challenger and a (computational) adversary who is given access to certain address keys and the power to control (to some extent) the transaction activity of honest users.
Namely, the challenger samples a random bit $b$ and initializes two DAP oracles $O_0$ and $O_1$ implementing the DAP system interface, each maintaining a separate ledger $L_0$ and $L_1$, respectively. The adversary is then allowed to submit queries to the challenger in pairs, one destined for $O_0$ and $L_0$, and the other destined for $O_1$ and $L_1$. The challenger processes these queries; namely, it forwards them to the associated oracle if and only if they have matching type and are identical in terms of information available to the adversary (or the addresses it has corrupted). The challenger always provides the responses to the adversary's queries for the two ledgers in a randomized order. That is, the challenger presents the adversary first with $L_{left} := L_b$ and then with $L_{right} := L_{1-b}$.
The adversary is then requested to guess $b$, and thus distinguish the order in which the two ledgers are presented. Ledger indistinguishability requires that the adversary is not able to guess $b$ with probability better than $\frac{1}{2}$.

• Transaction nonmalleability. This property requires that a computationally bounded adversary is not able to modify the contents of transactions of other users at any point after these transactions were issued. Similar to ledger indistinguishability, transaction nonmalleability can also be expressed by means of a security game. In particular, we assume that the adversary constructs a ledger by communicating with an oracle and, as a result, sees the set of transactions $T$ that were added to the ledger. At the challenge phase, the adversary is requested to output a transaction $tx^* \notin T$. The adversary wins the game if there is another transaction $tx \in T$ such that $tx^*$ and $tx$ have the same serial number (i.e., both transactions attempt to spend the same coin), and $tx^*$ is accepted by VerifyTransaction in the ledger that preceded $tx$. Transaction nonmalleability requires that the probability that the adversary succeeds in this game is negligible with respect to the security parameter.

• Balance. This property requires that a computationally bounded adversary should not be able to own more coins than the coins that were converted through Mint operations or received from others via Pour operations. The balance property is formalized in a similar way as transaction nonmalleability and assumes a bounded adversary who is in possession of a set of addresses $A$ and is given oracle access to a ledger. At the challenge phase, the adversary is asked to present a set of valid coins owned by addresses in $A$ such that the total amount of these coins exceeds those acquired from Mint and Pour transactions (that appear in the ledger).
Clearly, this property is guaranteed if and only if the adversary succeeds in this game with negligible probability with respect to the security parameter.
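To make the shape of these games concrete, the following toy Python harness (an illustration only: the "ledgers" are plain lists and no real DAP system is involved) mimics the ledger-indistinguishability experiment. Since the two public views are identical by construction, a guessing adversary wins with probability close to 1/2:

```python
import random

def ledger_indistinguishability_game(adversary, num_queries=10, rng=None):
    """One run of a toy ledger-indistinguishability experiment."""
    rng = rng or random.Random()
    b = rng.randrange(2)                 # challenger's secret bit
    ledgers = ([], [])                   # L0 and L1
    for i in range(num_queries):
        # paired queries of matching type; only public information is recorded,
        # so both ledgers end up with identical public views
        ledgers[0].append(("tx", i))
        ledgers[1].append(("tx", i))
    left, right = ledgers[b], ledgers[1 - b]   # L_left := L_b, L_right := L_{1-b}
    return adversary(left, right) == b         # did the adversary guess b?

def guessing_adversary(left, right):
    # with identical public views, random guessing is the best possible strategy
    return random.randrange(2)

random.seed(7)
wins = sum(ledger_indistinguishability_game(guessing_adversary, rng=random.Random(s))
           for s in range(1000))
success_rate = wins / 1000               # concentrates around 1/2
```

A secure DAP system forces every adversary into essentially this position: whatever its strategy, its success rate cannot noticeably exceed 1/2.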
ZeroCash: Implementing DAP Using zk-SNARKs. ZeroCash is an instantiation of a decentralized anonymous payment system using zk-SNARKs and Bitcoin as Basecoin. In this paragraph, we take a closer look at how ZeroCash works. As opposed to Extended ZeroCoin, coins in ZeroCash are associated with addresses. That is, a ZeroCash coin is associated with an address public key $apk_c$, and the knowledge of the respective secret key $ask_c$ is needed to spend the coin.
To mint a coin $c$, the user is required to sample a random number $\rho_c$ that will be used for the generation of the serial number $ser_c$ of the minted coin as follows:
$$ser_c = PRF^{ser}_{ask_c}(\rho_c),$$
where $PRF^{ser}_{ask_c}$ is a collision-resistant pseudorandom function using $ask_c$ as seed and $\rho_c$ as input. To complete Mint, the coin address public key $apk_c$, the sampled number $\rho_c$, and the coin value $val_c$ are incorporated into a commitment value $cmt_c$. First, a commitment $\widetilde{cmt}_c$ is generated to bind $apk_c$ together with $\rho_c$:
$$\widetilde{cmt}_c = cmt_{\tilde{r}_c}(apk_c, \rho_c),$$
for randomly selected $\tilde{r}_c$, and then $\widetilde{cmt}_c$ is bound with $val_c$ into $cmt_c$ as follows:
$$cmt_c = cmt_{r_c}(\widetilde{cmt}_c, val_c),$$
where $r_c$ is selected uniformly at random. Note that because of the two-layer commitment process, it is possible, given $r_c$, $val_c$, and $\widetilde{cmt}_c$, to prove that the minted coin $c$ has value $val_c$ without revealing the coin's serial number or $apk_c$.

Now, we have a coin $c$ generated. The question that comes next is how to spend this coin, an operation otherwise known as Pour. To this end, assume that one wishes to spend $c^{old}$ into two fresh coins $c_0^{new}$ and $c_1^{new}$ of values $val_{c_0^{new}}$ and $val_{c_1^{new}}$, respectively. Recall that $c^{old} = \{\rho_{c^{old}}, apk_{c^{old}}, val_{c^{old}}\}$. Initially, the spender follows the same process as before to compute the output coins, that is, $c_i^{new} = \{\rho_{c_i^{new}}, apk_{c_i^{new}}, val_{c_i^{new}}\}$, $i = 0, 1$. The spender also computes the resulting commitment values $\{cmt_{c_i^{new}}\}_{i=0,1}$. ZeroCash leverages zk-SNARKs to show that $c^{old}$ has not already been spent and that the value of the output coins (i.e., $val_{c_0^{new}} + val_{c_1^{new}}$) does not exceed the value of the spent coin (i.e., $val_{c^{old}}$). In the sequel, we denote by $root_{cmt}$ the root of the Merkle tree built from the coin commitments that have been added to the ledger. The spender of $c^{old}$ generates a zk-SNARK proof $\pi$ for the following statement.
Given $root_{cmt}$, coin serial number $ser_{c^{old}}$, and commitments $\{cmt_{c_i^{new}}\}_{i=0,1}$, there exist coins $c^{old}$, $c_0^{new}$, $c_1^{new}$, and a coin address secret key $ask_{c^{old}}$ such that the following hold:
• Coins $c^{old}$, $c_0^{new}$, and $c_1^{new}$ are well formed. That is, their commitment values were correctly computed using $\{\rho_{coin}, apk_{coin}, val_{coin}\}$ for $coin = c^{old}, c_0^{new}, c_1^{new}$.

• Serial number $ser_{c^{old}}$ was correctly computed using $ask_{c^{old}}$ and $\rho_{c^{old}}$.

• $cmt_{c^{old}}$ appears in the Merkle tree of coin commitments with root $root_{cmt}$.

• $val_{c^{old}} = val_{c_0^{new}} + val_{c_1^{new}}$.

Proof $\pi$ (along with $ser_{c^{old}}$, $cmt_{c_0^{new}}$, and $cmt_{c_1^{new}}$) essentially constitutes the Pour transaction, but still does not offer nonmalleability. To remedy this, the spender samples an additional signature key pair $\langle pk_{Sig}^{c^{old}}, sk_{Sig}^{c^{old}} \rangle$ and binds it to the spending of $c^{old}$ as follows:

1. The spender computes $h_{Sig} = hash(pk_{Sig}^{c^{old}})$, where $hash$ is a collision-free hash function.

2. The spender computes $h = PRF_{ask_{c^{old}}}(h_{Sig})$.

3. The spender adds $h$ and $h_{Sig}$ to the Pour transaction content.

4. The spender extends the statement associated with the Pour transaction in order to include a proof that $h$ and $h_{Sig}$ are correctly computed.

5. The spender uses $sk_{Sig}^{c^{old}}$ to sign the extended Pour transaction.
Observe that the ledger indistinguishability and balance properties are implicitly provided by the zero-knowledge and proof-of-knowledge (soundness) provisions of zk-SNARKs.
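For intuition, the serial-number derivation and the two-layer commitment used in Mint can be sketched with hash-based stand-ins. SHA-256 here replaces both the PRF and the commitment scheme purely for exposition; ZeroCash relies on specific, provably secure instantiations, and the zk-SNARK machinery is omitted entirely:

```python
import hashlib
import os

def H(*parts: bytes) -> bytes:
    """SHA-256 stand-in for the PRF and the commitment scheme (illustration only)."""
    h = hashlib.sha256()
    for p in parts:
        h.update(len(p).to_bytes(4, "big") + p)   # length-prefix to avoid ambiguity
    return h.digest()

def mint(ask: bytes, apk: bytes, value: int):
    rho = os.urandom(32)                          # randomness rho_c
    ser = H(b"ser", ask, rho)                     # ser_c = PRF^ser_ask(rho)
    r_tilde, r = os.urandom(32), os.urandom(32)
    cmt_tilde = H(b"cmt", r_tilde, apk, rho)      # inner commitment binds apk and rho
    cmt = H(b"cmt", r, cmt_tilde, value.to_bytes(8, "big"))  # outer binds the value
    coin = dict(rho=rho, apk=apk, value=value,
                r_tilde=r_tilde, r=r, cmt_tilde=cmt_tilde, cmt=cmt, ser=ser)
    tx_mint = (cmt, value)                        # what appears on the ledger
    return coin, tx_mint

def open_value(coin) -> bool:
    """Revealing (r, cmt_tilde, value) proves the value without exposing apk or ser."""
    return H(b"cmt", coin["r"], coin["cmt_tilde"],
             coin["value"].to_bytes(8, "big")) == coin["cmt"]

ask, apk = os.urandom(32), os.urandom(32)
coin, tx = mint(ask, apk, value=5)
assert open_value(coin)
```

Note how the two-layer structure mirrors the text: opening only the outer commitment reveals the value while keeping the inner commitment, and hence $apk_c$ and $\rho_c$, hidden.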
5.4 SUMMARY
In this chapter, we detailed a number of prominent attacks on the privacy provisions of Bitcoin. More specifically, we outlined a number of network-based attacks that leverage information exchanged by Bitcoin peers throughout their participation in the network. We also discussed a number of protocol-based attacks on the system and quantified the privacy offered by Bitcoin in light of these attacks. Finally, we discussed a number of research attempts, such as ZeroCash and Extended ZeroCoin, to enable privacy-preserving payments in Bitcoin. Our analysis shows that existing implementations of Bitcoin leak considerable information about the profiles of users. This is especially evident when an
adversary can leverage the use of behavior-based clustering algorithms and combine their use with a number of heuristics that capture multi-input transactions and shadow addresses. Note that the manual creation of new addresses can only partly conceal the profiles of users who participate in a small number of transactions. Such a countermeasure, however, does not increase the privacy of users who are active in the network and participate in a large number of transactions. To enhance user privacy in Bitcoin, mixing transactions emerges as an effective technique to hide the linkability between the inputs and outputs of Bitcoin transactions. Existing solutions rely on a mixing server to harden the tracing of coin expenditure in the network; here, the mixing server still needs to be trusted to ensure anonymity, since it learns the mapping of coins to addresses. Even when mixers can be instantiated without the need for a centralized mixing server, such protocols cannot fully prevent clustering analysis, since they do not hide the amounts and times of payments. On the other hand, although privacy-preserving cryptographic enhancements of Bitcoin can prevent the leakage of transaction amounts and times, such protocols incur considerable computational overhead and require significant modifications to the Bitcoin protocol. We note that the lack of privacy offered by the current Bitcoin system can, however, be seen as an enabler for accountability measures in the system. Recall that incorporating accountability measures in Bitcoin is essential to deter misbehavior, especially given the lack of workable mechanisms to ban/punish Byzantine nodes.
References

[1] Fergal Reid and Martin Harrigan. An Analysis of Anonymity in the Bitcoin System, pages 197–223. Springer New York, New York, NY, 2013.

[2] Fergal Reid and Martin Harrigan. An analysis of anonymity in the bitcoin system. In Security and Privacy in Social Networks, 2013.

[3].

[4] Dorit Ron and Adi Shamir. Quantitative analysis of the full Bitcoin transaction graph. In Financial Cryptography and Data Security - 17th International Conference, FC 2013, pages 6–24, 2013.

[5] Sarah Meiklejohn, Marjori Pomarole, Grant Jordan, Kirill Levchenko, Damon McCoy, Geoffrey M. Voelker, and Stefan Savage. A Fistful of Bitcoins: Characterizing Payments Among Men
with No Names. In Proceedings of the 2013 Conference on Internet Measurement Conference, IMC '13, pages 127–140, New York, 2013. ACM.

[6] Philip Koshy, Diana Koshy, and Patrick McDaniel. An analysis of anonymity in bitcoin using p2p network traffic. In Financial Cryptography and Data Security, 2014.

[7] Alex Biryukov, Dmitry Khovratovich, and Ivan Pustogarov. Deanonymisation of clients in bitcoin p2p network. In ACM Conference on Computer and Communications Security, 2014.

[8] Ian Miers, Christina Garman, Matthew Green, and Aviel D. Rubin. Zerocoin: Anonymous distributed e-cash from Bitcoin. In Proceedings of the 2013 IEEE Symposium on Security and Privacy, SP '13, pages 397–411, Washington, DC, USA, 2013. IEEE Computer Society.

[9] Eli Ben-Sasson, Alessandro Chiesa, Christina Garman, Matthew Green, Ian Miers, Eran Tromer, and Madars Virza. Zerocash: Practical decentralized anonymous e-cash from bitcoin. In Proceedings of the 2014 IEEE Symposium on Security and Privacy. IEEE, May 2014.

[10] Elli Androulaki and Ghassan O. Karame. Hiding transaction amounts and balances in bitcoin. In Trust and Trustworthy Computing, 2014.

[11] Micha Ober, Stefan Katzenbeisser, and Kay Hamacher. Structure and anonymity of the Bitcoin transaction graph. Future Internet, 5(2):237–250, 2013.

[12] Bitcoin—Wikipedia, Introduction.
2013, available from.
[13] Andreas Pfitzmann and Marit Hansen. Anonymity, Unlinkability, Undetectability, Unobservability, Pseudonymity, and Identity Management: A Consolidated Proposal for Terminology. pages 111–144, 2008.

[14] Bitcointalk Forum, available from.

[15] Arthur Gervais, Ghassan Karame, Srdjan Capkun, and Vedran Capkun. Is Bitcoin a Decentralized Currency? IEEE Security and Privacy Magazine, May/June issue, 2014.

[16] Malte Möser, Rainer Böhme, and Dominic Breuker. Towards risk scoring of bitcoin transactions. In Financial Cryptography and Data Security - FC 2014 Workshops, BITCOIN and WAHC 2014, pages 16–32, 2014.

[17] TOR project, available from.

[18] Bitcoin Laundry. Bitcoin Laundry Mixing Service, 2013. available from https://bitlaunder.com.

[19] Bitcoin Fog. Bitcoin Fog Mixing Service, 2009. available from.

[20] Blockchain.info. Blockchain Mixing Service, 2013. available from.

[21] CoinJoin: Bitcoin privacy for the real world, 2013. available from. org/index.php?topic=279249.0.
[22] Torben P. Pedersen. Non-interactive and information-theoretic secure verifiable secret sharing. In CRYPTO, 1992.

[23] C. P. Schnorr. Efficient signature generation for smart cards. pages 239–252, 1991.

[24] Ronald Cramer, Ivan Damgard, and Berry Schoenmakers. Proofs of partial knowledge and simplified design of witness hiding protocols. In CRYPTO, 1994.

[25] Jan Camenisch. Group Signature Schemes and Payment Systems Based on the Discrete Logarithm Problem. PhD thesis, ETH Zurich, 1998. ETH Series in Information Security and Cryptography.

[26] Stefan Brands. Rapid Demonstration of Linear Relations Connected by Boolean Operators. In EUROCRYPT, 1997.

[27] Amos Fiat and Adi Shamir. How to Prove Yourself: Practical Solutions to Identification and Signature Problems. In CRYPTO, 1986.

[28] Melissa Chase and Anna Lysyanskaya. On signatures of knowledge. In CRYPTO, 2006.

[29] Man Ho Au, Willy Susilo, and Yi Mu. Proof-of-Knowledge of Representation of Committed Value and Its Applications. In ACISP, 2010.

[30] J. Camenisch and A. Lysyanskaya. Dynamic accumulators and application to efficient revocation of anonymous credentials. In CRYPTO, 2002.

[31] Nir Bitansky, Alessandro Chiesa, Yuval Ishai, Rafail Ostrovsky, and Omer Paneth. Succinct noninteractive arguments via linear interactive proofs. In Proceedings of the 10th Theory of Cryptography Conference, 2013.
Chapter 6

Security and Privacy of Lightweight Clients

In this chapter, we briefly overview and analyze the security of simple payment verification in Bitcoin. We then discuss the operation of current lightweight clients and analyze their security and privacy provisions.
6.1 SIMPLE PAYMENT VERIFICATION
We start by describing simple payment verification (SPV) in Bitcoin.

6.1.1 Overview
As described in Chapter 4, Bitcoin requires peers in the system to verify all broadcasted transactions and blocks. Clearly, this comes at the expense of storage and computational overhead. Currently, a typical Bitcoin installation requires more than 70 GB of disk space and considerable time (and computational resources) to download and locally index blocks/transactions that are contained in the blockchain. Moreover, the continuous growth of Bitcoin transactional volume incurs significant computational overhead on the Bitcoin clients when verifying the correctness of broadcasted blocks and transactions in the network. This problem becomes even more evident when users wish to perform/verify Bitcoin payments from resource-constrained devices such as mobile devices and tablets. To remedy that, lightweight client implementations have been proposed in [1]. These clients only perform a so-called simplified payment verification (SPV). SPV
clients do not store the entire blockchain, nor do they validate all transactions in the system. Notably, SPV clients only perform a limited amount of verification, such as the verification of block difficulty and of the presence of a transaction in the Merkle tree, and offload the verification of all transactions and blocks to the full Bitcoin nodes. Note that this clearly comes at the cost of decentralization, since a critical security component of the network is outsourced to a few nodes in the system. Recall that SPV clients currently connect by default to four regular nodes; if all these nodes are malicious, then they could effectively control the view of the SPV client on the network and could prevent the client from sending and receiving transactions.

6.1.2 Specification of SPV Mode
We now describe the detailed operations undergone by an SPV client. In the SPV mode, the sending of transactions is performed similarly to the regular Bitcoin protocol (see Chapter 4). Namely, SPV clients are equipped with public/private key pairs that enable them to assert ownership of coins and issue transactions in the system. Unlike full Bitcoin nodes, SPV clients do not receive all the transactions that are broadcast within the Bitcoin P2P network, but instead receive a subset of transactions filtered for them by the full nodes to which they are connected. Currently, SPV clients connect to a default of four different randomly chosen nodes. In order to calculate their own balance, SPV clients request full blocks from a given block height on; here, the full Bitcoin nodes can also provide filtered blocks to the SPV client that only contain relevant transactions from each block. The nodes verify the signature of the received transactions, verify the proof-of-work of the blocks, and verify that these transactions were indeed included in the Merkle tree committed in the received blocks. Note that the client can rely on the block depth in order to determine the transactions’ validity. For instance, if a transaction has been confirmed in an old block that appears in the blockchain, then it is highly unlikely that the transaction is incorrect. Recall that each block contains the root of the Merkle tree of all transactions that were confirmed in the block. To verify that a given transaction was included in a block, an SPV client requests a proof of membership from one of the (full) Bitcoin nodes that he or she connects to. The latter outputs the membership proof, πM . The SPV client then uses πM and the Merkle root committed in the block in order to verify the inclusion of a given transaction x.
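The membership check described above can be sketched with a minimal binary Merkle tree. This is an illustrative sketch only: Bitcoin's actual tree hashes specific transaction serializations and has additional encoding rules, but it does use double SHA-256 and duplicates the last node of an odd level, as reproduced here:

```python
import hashlib

def h(data: bytes) -> bytes:
    # double SHA-256, as used for Bitcoin's Merkle tree nodes
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:               # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling (authentication) path pi_M for the leaf at `index`."""
    level = [h(x) for x in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling_index = index ^ 1
        # record the sibling hash and whether it sits to the right of our node
        path.append((level[sibling_index], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_membership(tx, path, root) -> bool:
    node = h(tx)
    for sibling, sibling_on_right in path:
        node = h(node + sibling) if sibling_on_right else h(sibling + node)
    return node == root

txs = [b"tx-a", b"tx-b", b"tx-c"]
root = merkle_root(txs)                  # committed in the block header
proof = merkle_proof(txs, 1)             # produced by a full node
assert verify_membership(b"tx-b", proof, root)   # checked by the SPV client
```

The point of the construction is asymmetry: the full node needs all transactions to build the proof, while the SPV client verifies it using only the transaction of interest, a logarithmic number of sibling hashes, and the root from the block header.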
Note that lightweight clients are not involved in the mining process. Readers should not confuse the operations of lightweight client with the notion of SPV mining. SPV mining refers to the act of mining blocks without validating the previous block (and the transactions included therein) in the chain. Note that since such miners do not know which transactions have been included in the last block (since they skip the verification of the transactions), they need to mine without including any transaction (except for the coinbase transaction) in order to avoid the risk of including transactions that conflict with the previous block. As discussed in [2], this strategy allows the adversary to gain an advantage in the mining process (e.g., to be the first in finding the correct PoW). SPV mining recently caused a fork in the blockchain [3] due to the fact that some mining pools continued mining blocks that originally referenced an invalid block header.
6.1.3 Security Provisions of SPV Mode
Clearly, the security offered by the SPV mode is considerably weaker than that of the standard Bitcoin protocol. Nevertheless, the SPV mode still offers reasonable security guarantees, provided that at least one of the client's connections correctly follows the protocol and has up-to-date knowledge of the longest blockchain. Namely, if all connections of the SPV client are dishonest, then these connections could control the view of the SPV client on the network and could prevent the client from sending and receiving transactions. For example, if all the connections of the SPV client are populated by malicious nodes, then the SPV client might not learn the height of the longest blockchain. In this case, malicious connections could then convince the SPV client of a chain that is not necessarily the one adopted by the network (i.e., of a fork chain that is smaller than the current blockchain adopted by the network). In this respect, SPV clients have no way to verify whether the blocks and transactions that they receive from their connections are part of the main blockchain. Clearly, it suffices that the SPV client learns the height of the longest blockchain from one honest neighbor in order to detect this misbehavior. Alternatively, the SPV client can measure the generation times of the received block headers; the client can suspect the occurrence of such an attack if he or she does not receive on average a block every 10 minutes. At all times, we point out that malicious nodes cannot convince the SPV client to accept an ill-formed block or transaction, since the SPV client always verifies the proof-of-work for all received block headers and verifies the membership of transactions of interest in the respective blocks.
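The timing heuristic mentioned above can be sketched as follows; the tolerance factor is an arbitrary illustrative choice, not a value prescribed by any client implementation:

```python
def suspicious_view(header_timestamps, expected=600, slack=3.0) -> bool:
    """Flag a view in which blocks arrive far slower than one per `expected` seconds.

    header_timestamps: Unix times of consecutively received block headers (ascending).
    slack: tolerance factor; 3x is an arbitrary illustrative choice.
    """
    if len(header_timestamps) < 2:
        return False                      # not enough data to judge
    intervals = [b - a for a, b in zip(header_timestamps, header_timestamps[1:])]
    average = sum(intervals) / len(intervals)
    return average > slack * expected

# headers roughly every 10 minutes: consistent with the main chain
assert not suspicious_view([0, 590, 1210, 1795, 2400])
# a client fed only one block per hour by its connections: flagged
assert suspicious_view([0, 3600, 7200, 10800])
```

Such a check is only a heuristic: block intervals fluctuate widely in practice, so it can detect a starved or eclipsed view but cannot, by itself, prove that the received chain is the one adopted by the network.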
Figure 6.1 Sketch of the operation undergone by an SPV client. SPV clients connect to regular Bitcoin nodes to which they also outsource their Bloom filters. The regular node only forwards to the SPV clients the transactions relevant to their Bloom filters.
6.2 PRIVACY PROVISIONS OF LIGHTWEIGHT CLIENTS
In this section, we discuss the privacy provisions of existing lightweight Bitcoin clients as reported in [4]. We start by overviewing the operation of the Bloom filters that are used in the SPV mode to filter transactions that are not relevant for SPV clients.

6.2.1 Bloom Filters
As mentioned earlier, SPV clients do not receive all the transactions that are broadcast within the Bitcoin P2P network, but instead receive a subset of transactions filtered for them by the full nodes to which they are connected. To reduce bandwidth consumption, SPV clients make use of Bloom filters [5, 6]. These filters basically consist of space-efficient probabilistic data structures that are used to test membership of an element. An SPV client constructs a Bloom filter by embedding all the Bitcoin addresses and public keys that appear in its wallets. The SPV client then outsources the constructed Bloom filter to a full Bitcoin node, as shown in Figure 6.1. Whenever the full node receives a transaction, it first checks to see if its input and/or output match the SPV client’s Bloom filter. If so, the full node forwards the received transaction to the SPV client. A Bloom filter B of an SPV client is typically specified by the maximum number of elements that it can fit, denoted by M , and a target false-positive rate Pt .
Figure 6.2 Sketch of the basic operation of Bloom filters.
As shown in Figure 6.2, a Bloom filter $B$ basically consists of an array $B[\cdot]$ of $n$ bits accessed by $k$ independent hash functions $H_1(\cdot), \ldots, H_k(\cdot)$, each of which maps an input string $x \in \{0,1\}^*$ to one of the $n$ bits of the array; all bits of $B[\cdot]$ are initialized to zero. To insert an element $x$, one has to set the bits at positions $H_1(x), \ldots, H_k(x)$ in $B[\cdot]$ to 1. Similarly, to test whether an element $x$ is a member of $B[\cdot]$, one has to check the bits at positions $H_1(x), \ldots, H_k(x)$; if any of those bits is not 1, then the element is not a member of $B[\cdot]$. Bloom filters can generate a number of false positives, but cannot result in false negatives. In this book, we compute the false positive rate of a filter $B(M, P_t)$ which contains $m$ elements, $P_f(m)$, as follows [7]:
$$P_f(m) = \left(1 - \left(1 - \frac{1}{n}\right)^{km}\right)^k \qquad (6.1)$$
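A minimal Bloom filter illustrating these mechanics, together with the false positive estimate of (6.1), can be sketched as follows. The salted SHA-256 construction below is an illustrative choice; Bitcoin's BIP 37 filters use a different (Murmur-based) hashing scheme:

```python
import hashlib

class BloomFilter:
    def __init__(self, n_bits: int, k: int):
        self.n, self.k = n_bits, k
        self.bits = [0] * n_bits
        self.count = 0                     # number of inserted elements m

    def _positions(self, x: bytes):
        # derive k positions from salted SHA-256 (illustrative stand-in for H_1..H_k)
        for i in range(self.k):
            digest = hashlib.sha256(i.to_bytes(4, "big") + x).digest()
            yield int.from_bytes(digest, "big") % self.n

    def insert(self, x: bytes):
        for p in self._positions(x):
            self.bits[p] = 1
        self.count += 1

    def contains(self, x: bytes) -> bool:
        # may return true for elements never inserted (false positive),
        # but never false for an inserted element (no false negatives)
        return all(self.bits[p] for p in self._positions(x))

    def false_positive_rate(self) -> float:
        # Eq. (6.1): P_f(m) = (1 - (1 - 1/n)^{k m})^k
        return (1 - (1 - 1 / self.n) ** (self.k * self.count)) ** self.k

bf = BloomFilter(n_bits=1024, k=3)
bf.insert(b"some-address")
assert bf.contains(b"some-address")        # inserted elements always match
```

Note how `false_positive_rate` grows with `count`: the more addresses a client embeds, the weaker the plausible-deniability cover the filter's false positives provide, which is exactly the tension analyzed in the remainder of this section.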
Here, note that $P_f(M) \approx P_t$. That is, the target false positive rate of a Bloom filter is only reached when the number of elements contained in the filter matches $M$ [4].

6.2.2 Privacy Provisions
In what follows, we analyze the privacy provisions of SPV clients and show that the reliance on Bloom filters within existing SPV clients leaks considerable information about the addresses of Bitcoin users. We also show that this information leakage is further exacerbated when users restart their SPV clients and/or when the adversary
has access to more than one Bloom filter pertaining to the same SPV client. Motivated by these findings, we also describe an efficient countermeasure introduced in [4] in order to enhance the privacy of users who rely on SPV clients; this countermeasure can be directly integrated within existing SPV client implementations. In our analysis, we assume that the adversary can compromise one or more full Bitcoin nodes and eavesdrop on communication links in order to acquire one or more Bloom filters pertaining to an SPV client. Here, the goal of the adversary is to identify the Bitcoin addresses that are inserted within a Bloom filter created by a particular SPV client. The addresses inserted in the Bloom filter typically correspond to addresses that the SPV client is interested in receiving information about (e.g., these addresses typically belong to the wallet of the SPV client). For example, the adversary might be connected to the node that generated the Bloom filter or might try to assign an identity to nodes according to their addresses. Note that since the Bitcoin network currently comprises fewer than 10,000 reachable full Bitcoin nodes, it is likely that regular nodes receive one or more filters pertaining to each SPV client over a sufficiently long period of time.

6.2.3 Leakage Due to the Network Layer
Clearly, the adversary can try to link different Bloom filters to a single wallet by identifying the IP addresses used to outsource the Bloom filters. If, for example, the same IP address outsources two different Bloom filters to a regular node, then that node could directly infer that those filters belong to the same entity. This leakage is further exacerbated by the fact that an adversary who is connected to an SPV client can see the transactions issued by the client and could potentially use this information in order to learn the client's addresses.

6.2.3.1 Countermeasure
Information leakage that originates from the network layer can be countered, for example, by having SPV clients use anonymizing networks such as Tor [8] whenever they issue Bitcoin transactions or outsource their Bloom filters.

6.2.4 Leakage Due to the Insertion of Both Public Keys and Addresses in the Bloom Filter
In current implementations of SPV clients, both the addresses and their public keys are inserted in the outsourced Bloom filter. As such, if the adversary knows both the
Security and Privacy of Lightweight Clients
address and its public key, then he or she can trivially test whether an address is a true positive of the filter by checking whether both the address and its public key are inserted within the filter. If not, then it is highly likely that the address is a false positive of the filter. We believe that the inclusion of both the address and its public key in the Bloom filter is a severe flaw in current SPV client implementations.

6.2.4.1 Countermeasure
More than 99% of all Bitcoin transactions consist of payments to Bitcoin addresses (i.e., public key hashes); moreover, only 4,587 out of 33 million studied addresses in the system received transactions destined for both their public keys and their public key hashes.¹ This means that for the vast majority of Bitcoin clients, there is no need to include both the public keys and their hashes (i.e., the Bitcoin addresses) in the Bloom filters; inserting one or the other would suffice in more than 99% of the cases. Note that inserting only the addresses in the Bloom filter would suffice, since regular nodes can easily hash the public keys and check whether they match the Bloom filter. However, this clearly incurs additional computational overhead on regular Bitcoin nodes.

6.2.5 Leakage under a Single Bloom Filter
In the sequel, we assume that SPV clients use anonymizing networks when connecting to regular Bitcoin nodes. As mentioned earlier, this alleviates potential leakage at the network layer. We additionally assume that SPV clients do not insert a public key and its corresponding Bitcoin address into the same filter, in order to prevent trivial leakage of their embedded addresses (see Section 6.2.4). We show that even with these measures, Bloom filters still leak considerable information about the embedded client addresses.

Namely, in existing SPV clients, a node initializes its Bloom filter Bi with a random nonce r and specifies the target false positive rate Pt that should be achieved once a number of elements M have been inserted in the filter. By default, M is set to m + 100 = 2N + 100, where N is the number of Bitcoin addresses inserted in Bi. Here, the additional 100 elements were originally added to m by the Bitcoin developers in order to avoid recomputing the filter when a user inserts up to 50 additional Bitcoin addresses (recall that a Bitcoin address is inserted into the Bloom filter by adding both the corresponding public key and the public key hash to the filter; therefore m = 2N). The default target false positive rate Pt of the Bloom filter is set to 0.05% at the time of writing. The size n of the filter and the number k of hash functions are computed as follows:

    n = −M · ln(Pt) / (ln 2)²        (6.2)

    k = (n / M) · ln 2               (6.3)

¹ These numbers were obtained by parsing the Bitcoin blockchain until block #296000.
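As an illustration, equations (6.2) and (6.3) can be evaluated directly. The short Python sketch below is our own (it is not part of any SPV client); it uses the default sizing quoted above, M = 2N + 100 = 102 for a fresh wallet with one address and Pt = 0.05%.

```python
import math

def bloom_params(M: int, Pt: float):
    """Filter size n (bits) and hash count k from equations (6.2) and (6.3)."""
    n = math.ceil(-M * math.log(Pt) / (math.log(2) ** 2))
    k = round(math.log(2) * n / M)
    return n, k

# Default sizing for a fresh SPV client with N = 1 address:
# M = 2N + 100 = 102 and Pt = 0.05%.
n, k = bloom_params(102, 0.0005)
print(n, k)
```

For these defaults the filter is roughly 1,600 bits with about 11 hash functions, which is the parameter regime analyzed in the remainder of this section.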
Note that if the SPV client restarts (e.g., the mobile phone reboots or the mobile application is restarted), then the Bloom filter will be recomputed (the SPV client stores the Bloom filter in volatile memory). When the user acquires 50 or more additional addresses such that m > M, the SPV client will resize the Bloom filter by recomputing M = 2N + 100, and will send the updated Bloom filter to the full Bitcoin nodes that it is connected to. Note that given n and k, the number of elements contained in a Bloom filter can be estimated by the adversary as follows [9]:

    m ≈ −(n / k) · ln(1 − X / n)     (6.4)
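To illustrate (6.4), where X is the number of filter bits set to one (as explained in the following paragraph), the toy filter below inserts 50 dummy elements and recovers their count from the bit population. The hashing here is an illustrative seeded SHA-256 construction of our own; actual SPV clients use Murmur3 hashes as specified in BIP 37 [5].

```python
import hashlib
import math

def positions(item: bytes, k: int, n: int):
    """k bit positions for an item (illustrative; not BIP 37's Murmur3)."""
    return [int.from_bytes(hashlib.sha256(bytes([i]) + item).digest()[:8], "big") % n
            for i in range(k)]

n, k = 1614, 11          # parameters from (6.2)-(6.3) for M = 102, Pt = 0.05%
bits = [0] * n
for j in range(50):      # insert 50 dummy "addresses"
    for p in positions(b"addr%d" % j, k, n):
        bits[p] = 1

X = sum(bits)                            # number of bits set to one
m_est = -n / k * math.log(1 - X / n)     # equation (6.4)
```

The estimate lands close to the 50 inserted elements, which is exactly the information an eavesdropping node can extract from any filter it receives.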
Here, X corresponds to the number of bits of the Bloom filter that are set to one. Given n and Pt, M can also be computed by the adversary from (6.2). Note that, in April 2014, the Bitcoin blockchain comprised nearly |B| = 33 million addresses. This means that an adversary can simply try all possible addresses in the Bitcoin system in order to compute the positives of the Bloom filter Bi, denoted in the sequel by Bi.

Following [4], we quantify the privacy offered by a Bloom filter using the probability Ph(j) that the adversary correctly guesses any j true positives of a Bloom filter among all positives that match the filter and that are not included in the knowledge of the adversary.² More specifically, we measure the Ph(j) achieved by a Bloom filter Bi as follows:

    Ph(j) = ∏_{k=0}^{j−1} (N − k) / (N + S − k) = N/(N + S) · (N − 1)/(N + S − 1) · · ·

Here, N refers to the number of Bitcoin addresses inserted into Bi, and S denotes the cardinality of the set {Fi − K}; S therefore corresponds to all false positives that match Bi, but about which the adversary does not have any knowledge.

² Clearly, the higher Ph(.) is, the smaller is the privacy of the SPV node.
It is straightforward to see that:

    Ph(j) = ∏_{k=0}^{j−1} (N − k) / (N + S − k)
          ≈ ∏_{k=0}^{j−1} (N − k) / (N + (|B| − N) · Pf(2N) − k)     (6.5)
Here, N ≪ |B|, and m = 2N is the number of elements contained in the Bloom filter seen by the adversary. Note that if the adversary is able to identify an SPV client (e.g., through side-channel information), then simply identifying any address pertaining to that client already constitutes a considerable violation of its privacy. Otherwise, if the adversary can link a number of addresses to the same anonymous client, then the clustering of these addresses offers the adversary considerable information about the profile of the client, such as its purchasing habits. It is worth noting that Ph(.) may not always capture the probability of guessing addresses that belong to the user of the SPV client, for example, in the case where the SPV client embeds addresses that do not belong to the user in its Bloom filter.
6.2.5.1 Poorly-populated Bloom Filters
Following from (6.5), Ph(1) (the probability of correctly guessing one address as a true positive) is large when 2N/M ≤ 0.4, as long as N < 100. Given a modest number of addresses in the Bloom filter (i.e., N < 100), Pf(m = 2N) is small when m/M is small. As m/M increases, Pf(m = 2N) increases (and Ph(1) decreases). For example, when N = 10, Ph(10) = 0.99, which corresponds to the probability of correctly guessing all the true positives when the SPV client has 10 addresses. This means that the information leakage in SPV clients is considerable for new Bitcoin users or for Bitcoin users that restart their SPV clients. For instance, at initialization time, the Bloom filter of SPV clients is typically instantiated using M = 102. Moreover, if the user is new to the Bitcoin system and only has one Bitcoin address, then Ph(1) ≈ 1, which results in a complete lack of privacy. Recall that this observation also holds when the SPV client restarts and N < 100.

Gervais et al. analytically compute Ph(j) when the SPV client has 5, 10, 15, and 20 addresses [4]. Their results show that guessing all addresses given one filter that embeds fewer than 15 addresses can be achieved with almost 0.80 probability. This probability decreases as the number of addresses embedded within the filter increases beyond 15. This analysis has been validated by means of experiments in the real Bitcoin network by Gervais et al. [4].
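The figures above can be reproduced with a short computation. The sketch below is our own; it plugs the standard Bloom false-positive formula into equation (6.5), using the |B| ≈ 33 million addresses mentioned earlier and the default filter sizing for a wallet with N = 10 addresses.

```python
import math

def false_positive_rate(n: int, k: int, m: int) -> float:
    """Classical Bloom filter false-positive rate for m inserted elements."""
    return (1 - math.exp(-k * m / n)) ** k

def p_hit(j: int, N: int, S: float) -> float:
    """Equation (6.5): probability of guessing j true positives among N + S positives."""
    p = 1.0
    for i in range(j):
        p *= (N - i) / (N + S - i)
    return p

B = 33_000_000                 # addresses in the blockchain (April 2014)
N = 10                         # addresses in the SPV wallet
M = 2 * N + 100                # default filter sizing
n = math.ceil(-M * math.log(0.0005) / math.log(2) ** 2)   # equation (6.2)
k = round(math.log(2) * n / M)                            # equation (6.3)
S = (B - N) * false_positive_rate(n, k, 2 * N)   # expected unknown false positives

# With only 20 of 120 slots used, Pf is far below Pt and S is tiny,
# so guessing all 10 addresses succeeds almost surely.
print(p_hit(10, N, S))
```

The result is close to 1, matching the Ph(10) = 0.99 figure quoted above for a poorly-populated filter.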
On the other hand, when the Bloom filter comprises a considerable number of elements (i.e., when 2N/M > 0.4), then Pf (m = 2N ) is close to Pt .
6.2.6 Leakage under Multiple Bloom Filters
We now describe the information leakage due to multiple Bloom filters as analyzed in [4]. For that purpose, we assume that the adversary can acquire b > 1 Bloom filters pertaining to different users. For example, the adversary might be connected to SPV clients for a long period of time and receive their updated Bloom filter. Alternatively, the adversary can acquire additional Bloom filters by compromising/colluding with other full Bitcoin nodes. Similar to Section 6.2.5, we assume that SPV clients do not embed public keys and their corresponding addresses in the same filter; we also assume that these clients connect to regular nodes using an anonymizing network in order to avoid obvious leakage due to network layer information.
6.2.6.1 Two Bloom Filters
We start by analyzing the case where the adversary acquires two different Bloom filters B1 and B2 . In the sequel, we focus on computing Ph(.) corresponding to filter B1 , which we assume to be the smallest of the two filters (in size). In analyzing the information leakage due to the acquisition of two Bloom filters, we distinguish two cases.
6.2.6.2 B1 and B2 Belong to Different Users
Recall that each Bloom filter is initialized with a seed chosen uniformly at random from {0, 1}^64. Therefore, if B1 and B2 pertain to different users, then it is highly likely that they are initialized with different seeds. This means that the false positives generated by each filter are highly likely to correspond to different addresses. Moreover, since different users have different Bitcoin addresses, B1 and B2 will contain different elements. Therefore, B1 ∩ B2 is likely to comprise only a few addresses, if any. Notably, when B1 and B2 pertain to different users, then
|B1 ∩ B2| can be computed as follows:

    E[|B1 ∩ B2|] ≈ (|B1| − N1) · |B2| · 1/(|B| − N1)          (6.6)
                 ≈ Pf(m1) · Pf(m2) · |B|² / (|B| − N1)         (6.7)
                 ≈ Pf(m1) · Pf(m2) · |B|                       (6.8)
where N1 corresponds to the number of elements inserted in B1, and E[|B1 ∩ B2|] is the expected number of elements that match both B1 and B2. The number of elements in B that match B2 is given by Pf(m2)|B|. E[|B1 ∩ B2|] can then be computed by assuming a binomial distribution with success probability Pf(m2) and Pf(m2)|B| trials. Note that the adversary can compute m1 (using (6.4)); if m1 > |B1 ∩ B2|, then this offers a clear distinguisher for the adversary that the two acquired Bloom filters B1 and B2 pertain to different user wallets.
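This distinguisher is cheap to evaluate. Under equation (6.8), and taking the target rates as stand-ins for the actual false-positive rates (an approximation of ours, since Pf deviates from Pt when a filter is not full), the expected overlap of two unrelated filters is only a handful of addresses:

```python
B = 33_000_000        # candidate address universe
Pt = 0.0005           # both filters assumed to operate near their target rate

expected_overlap = Pt * Pt * B     # equation (6.8): roughly 8 shared false positives
m1 = 120                           # elements in the smaller filter, via (6.4)

# m1 far exceeds the expected overlap of unrelated filters, so observing
# |B1 ∩ B2| well below m1 flags the filters as belonging to different wallets.
different_wallets = m1 > expected_overlap
```

Since a filter that shares a wallet with another must reproduce all of that wallet's addresses in the intersection, an overlap near 8 out of m1 = 120 elements is decisive evidence of unrelated wallets.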
6.2.6.3 B1 and B2 Belong to the Same User
On the other hand, in the case where B1 and B2 correspond to the same SPV client, three subcases emerge:

B1 and B2 use the same size/seed: This is the case when users, for example, create additional Bitcoin addresses and need to update their outsourced Bloom filters to include those addresses. In this case, B1 and B2 are likely to comprise similar elements; this includes both the actual elements of the filters (i.e., the Bitcoin addresses of the user) and the false positives generated by the Bloom filter. In this case, |B1 ∩ B2| can be computed as follows:

    E[|B1 ∩ B2|] ≈ N1 + Pf(2N1) · |B|                              (6.9)

    Ph(j) ≈ ∏_{k=0}^{j−1} (N1 − k) / (N1 + Pf(2N1) · |B| − k)      (6.10)
In this case, Ph(j) is not affected by the acquisition of the second filter B2.

B1 and B2 use different seeds: In existing SPV clients, the random nonce r used to instantiate the Bloom filter is stored in volatile memory. Therefore, each time
the SPV client is restarted (e.g., the smartphone reboots), a new filter will be created with a new seed chosen uniformly at random. If the adversary acquires two Bloom filters of the same user that are initialized with different seeds, then these filters are likely to exhibit different false positives. B1 and B2 will, however, comprise a number of identical elements (which map to the Bitcoin addresses of the user). More specifically:

    E[|B1 ∩ B2|] ≈ N1 + (|B1| − N1) · |B2| / (|B| − N1)            (6.11)
                 ≈ N1 + Pf(m1) · Pf(m2) · |B|                       (6.12)

    Ph(j) ≈ ∏_{k=0}^{j−1} (N1 − k) / (N1 + Pf(m1) · Pf(m2) · |B| − k)   (6.13)
Note that the obtained Ph(j) is considerably larger in this case when compared to the case where the adversary has access to only one filter.

B1 and B2 use the same seed, but have different sizes: This is the case when users, for example, create additional Bitcoin addresses beyond the capacity of their current Bloom filters. SPV clients therefore need to resize their Bloom filters. Note that filter resizing typically shuffles the bits of the Bloom filters; the resulting distribution of bits in the new resized filter is not necessarily pseudorandom (since the same seed is used) and depends on the sizes of the filters. As such, only a lower bound on |B1 ∩ B2| can be estimated using (6.12) (which captures the worst case where filter resizing causes a pseudorandom permutation of the bits of the filters).

Recall that since there are only a few tens of millions of addresses in Bitcoin, the adversary can brute-force the entire list of Bitcoin addresses in order to acquire B1 and B2 and compute B1 ∩ B2. Given any two Bloom filters B1 and B2, the adversary can easily guess whether these two Bloom filters contain Bitcoin addresses from the same wallet. Indeed, if |B1 ∩ B2| is small, then it is highly likely that B1 and B2 map to different elements (if m1 and m2 are not small), and therefore pertain to different users. On the other hand, when |B1 ∩ B2| ≫ 0, it is highly likely that all the Bitcoin addresses in the set B1 ∩ B2 belong to the same SPV client.
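To see how sharply a second, differently-seeded filter reduces privacy, the bound of (6.13) can be compared against the single-filter bound of (6.5). The sketch below is ours and again approximates each Pf by the target rate:

```python
def p_hit(j: int, N1: int, S: float) -> float:
    """Common core of (6.5) and (6.13): guess j of N1 addresses among N1 + S."""
    p = 1.0
    for i in range(j):
        p *= (N1 - i) / (N1 + S - i)
    return p

B = 33_000_000
N1 = 100
pf = 0.0005                       # assume both filters operate near Pt

one_filter = p_hit(1, N1, (B - N1) * pf)     # S from (6.5): thousands of candidates
two_filters = p_hit(1, N1, pf * pf * B)      # S from (6.13): a handful of candidates
```

Intersecting the two filters shrinks the unknown false-positive term from (|B| − N1) · Pf to Pf² · |B|, which is why Ph(1) jumps from well under 1% to over 90% in this example.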
Table 6.1
Ph(.) with Respect to the Number b of Bloom Filters Belonging to the Same User

    b    Ph(1)      Ph(1)     Ph(N/2)    Ph(N/2)   Ph(N)      Ph(N)
         Pt=0.05%   Pt=0.1%   Pt=0.05%   Pt=0.1%   Pt=0.05%   Pt=0.1%
    1    0.1713     0.0926    0          0         0          0
    2    0.9978     0.9911    0.0091     0         0          0
    3    1          1         1          1         1          1
    4    1          1         1          1         1          1
    5    1          1         1          1         1          1
6.2.6.4 Multiple Bloom Filters
In the previous paragraphs, we discussed the case where the adversary is equipped with only two Bloom filters. Note that our analysis equally applies to the case where the adversary possesses any number b > 2 of Bloom filters pertaining to the same entity. As mentioned earlier, by computing the intersection between each pair of filters, the adversary can find elements common to different filters; this also enables the adversary to guess with high confidence whether different filters have been generated by the same client.

Given b filters that belong to the same SPV client, the adversary can compute the number of elements inserted within each filter using (6.4). In the sequel, we assume that the filters B1, . . . , Bb are sorted by increasing number of elements (i.e., Bb contains the largest number of elements), and that the filters are constructed using different seeds. Let Kj = Bj ∩ · · · ∩ B(b−1), ∀j ∈ [1, b − 1]; note that |K1| ≤ |K2| ≤ · · · ≤ |K(b−1)|. Here, the larger the number of Bloom filters at the disposal of the adversary, the smaller the error of the adversary in correctly classifying the genuine addresses of the SPV client, and the larger Ph(.). That is, the larger b, the smaller the number of common false positives exhibited by the different filters, and the higher the confidence of the adversary in identifying the false positives of Bj. Moreover, as j increases, Kj will contain more false positives, and Ph(j) will decrease.

In what follows, we analytically validate this analysis and investigate the impact of having b > 2 Bloom filters pertaining to the same SPV client. For that purpose, we use 5 Bloom filters B1, B2, . . . , B5 generated using different seeds with N = {3070, 3120, 3170, 3220, 3270}. We then compute Kj = B1 ∩ · · · ∩
B(j+1), ∀j ∈ [1, b − 1], and the corresponding Ph(.) as follows:

    E[|K1|] = min(|B1|, |B2|, . . .) ≈ N1 + |B| · ∏_{∀j} Pf(mj)          (6.14)

    Ph(j) ≈ ∏_{k=0}^{j−1} (Ni − k) / (Ni − k + |B| · ∏_{∀j} Pf(mj))      (6.15)
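Equations (6.14) and (6.15) can be evaluated numerically. The short loop below is our own sketch (it again approximates each Pf(mj) by the target rate, so the values differ slightly from Table 6.1, but the trend is the same):

```python
import math

def p_hit_multi(j: int, N1: int, pfs, B: int) -> float:
    """Equation (6.15) with the product of per-filter false-positive rates."""
    S = B * math.prod(pfs)
    p = 1.0
    for i in range(j):
        p *= (N1 - i) / (N1 - i + S)
    return p

B = 33_000_000
for b in (1, 2, 3):
    # With Pt = 0.05% and N1 = 3070, Ph(1) jumps from about 0.16 at b = 1
    # to essentially 1 for b >= 2, since each extra filter multiplies the
    # shared-false-positive term by another factor of Pf.
    print(b, round(p_hit_multi(1, 3070, [0.0005] * b, B), 4))
```

Each additional filter shrinks the pool of shared false positives geometrically, which is why the table saturates at Ph(.) = 1 from b = 3 onward.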
The results (depicted in Table 6.1) validate the aforementioned analysis³ and show that the larger the number b of acquired Bloom filters of the same SPV client, the larger Ph(.) and the smaller the privacy of the user's addresses. For instance, if the adversary is able to collect b > 2 different Bloom filters pertaining to the same wallet, then the adversary will be able to recover 100% of the true positives of the smallest Bloom filter.

6.2.7 Summary
We now summarize our findings with respect to the privacy provisions of existing SPV clients.

• The number of elements inserted within a Bloom filter significantly affects the resulting false positive rate of the filter. This is especially true when the filter's size is modest (e.g., < 500). Indeed, the number of elements inserted in the filter should match the filter's size at all times in order to achieve the target false positive rate (i.e., Pf(m) = Pt).

• The acquisition of multiple Bloom filters considerably reduces the privacy of SPV clients. Namely, assume that the adversary is able to acquire two or more Bloom filters B1 and B2; then the following holds:

  – Given any two Bloom filters B1 and B2, if |B1 ∩ B2| ≪ min(m1, m2) (here, m1 and m2 denote the number of elements inserted in B1 and B2, respectively), then the adversary can be certain that B1 and B2 do not belong to the same wallet.

  – If the two Bloom filters acquired by the adversary belong to the same SPV client, the adversary can identify whether the SPV client has restarted while generating its Bloom filters.

  – The Ph(.) corresponding to b = 2 filters is considerably larger than in the case where the adversary has access to only one Bloom filter. This means that an adversary that can acquire more than one Bloom filter pertaining to an SPV client can learn considerable information about the addresses of the node, irrespective of the size of the Bloom filter and Pt. In this case, our results show that Ph(N) approaches 1, which signals full leakage of the addresses of the SPV client. Ph(.) increases to 1 as the number b of Bloom filters of the same SPV client captured by the adversary increases.

• SPV clients should keep state about their outsourced Bloom filters (i.e., on persistent storage) to avoid the need to recompute a filter that contains the same elements using different parameters.

• Inserting both the public key and the public key hash (i.e., the address) in the Bloom filter provides a sufficient distinguisher for the adversary in guessing whether an address is a true positive or not. Note that for the most common transaction type, pay-to-pubkey-hash (P2PKH), inserting the hash of the public key is sufficient. However, there might be other transaction types where it is beneficial to also store the public key in the Bloom filter. In this case, the client can insert either a Bitcoin address or its corresponding public key (but not both) in the same Bloom filter.

³ Here, we assume that each filter is generated using a different seed.
6.2.8 Countermeasure of Gervais et al.
In what follows, we describe a countermeasure devised by Gervais et al. [4] to enhance the privacy of SPV clients. This countermeasure emerges naturally from the limitations of existing implementations. Here, each SPV client generates N Bitcoin addresses at the start and embeds them in a Bloom filter sized to fit M = m = N; clients insert only the addresses in each filter. Note that the Bloom filter is constructed with a realistic target false positive rate Pt, which, combined with N and M, results in a target privacy level (see (6.1)). Moreover, since M = m, the Bloom filter's false positive rate is ensured to match Pt.

Clearly, since the user might not directly use all N addresses, some of his or her Bitcoin addresses will not be revealed and will remain in the wallet. In this case, shadow addresses (i.e., addresses that are typically generated by Bitcoin clients to receive change) are then automatically chosen from the unused addresses among the total N addresses. This eliminates the need for clients to constantly update their outsourced Bloom filters whenever a new shadow address has been created, thereby minimizing bandwidth and maximizing privacy.
Whenever users run out of their N addresses and need additional ones, they repeat the aforementioned process. That is, users create an additional set of N addresses and embed them in a new Bloom filter, constructed with a new initial seed, with M = m = N and the previously chosen Pt. In this case, the advantage of an adversary that captures one or more Bloom filters pertaining to the same SPV client is negligible, since these filters do not have any element in common. Additionally, this solution requires the SPV clients to keep state about each Bloom filter in order to avoid recomputing the same filter if the client restarts at any point in time.

This proposed solution can be directly integrated within existing SPV clients and only requires small modifications to existing client implementations. Moreover, it does not incur additional overhead on the SPV clients, apart from the pregeneration of N Bitcoin addresses (which is only done at setup time) and the storage space required for each generated Bloom filter. More detail on this countermeasure can be found in [4].
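A minimal sketch of the countermeasure's filter management is given below. This is our own illustration: real clients would derive addresses from a wallet and hash with Murmur3 per BIP 37 [5], and would persist the seed and contents so that a restart never recreates a filter with fresh parameters.

```python
import hashlib
import math
import os

class PrivateSPVFilter:
    """One filter per batch of N pregenerated addresses, sized with M = m = N,
    embedding addresses only (no public keys)."""

    def __init__(self, addresses, Pt=0.0005, seed=None):
        N = len(addresses)
        self.n = math.ceil(-N * math.log(Pt) / math.log(2) ** 2)   # equation (6.2)
        self.k = round(math.log(2) * self.n / N)                   # equation (6.3)
        # Fresh random seed per filter; persist it so restarts reuse the filter.
        self.seed = os.urandom(8) if seed is None else seed
        self.bits = bytearray(self.n)
        for addr in addresses:
            for p in self._positions(addr):
                self.bits[p] = 1

    def _positions(self, item: bytes):
        # Illustrative seeded hashing (real clients use Murmur3, BIP 37).
        for i in range(self.k):
            h = hashlib.sha256(self.seed + bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.n

    def matches(self, item: bytes) -> bool:
        return all(self.bits[p] for p in self._positions(item))

# One batch of N = 100 pregenerated (dummy) addresses; unused ones double
# as shadow addresses, so the outsourced filter never needs updating.
batch = [b"addr%d" % i for i in range(100)]
f = PrivateSPVFilter(batch)
```

Because successive batches share no addresses and use independent seeds, intersecting two such filters yields only the residual false positives quantified in (6.8), which is exactly what makes the adversary's advantage negligible.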
References

[1] S. Nakamoto. Bitcoin: A Peer-to-Peer Electronic Cash System, 2009.

[2] A. Gervais, H. Ritzdorf, G. O. Karame, and S. Capkun. Tampering with the Delivery of Blocks and Transactions in Bitcoin. IACR Cryptology ePrint Archive, 2015:578, 2015.

[3] What is SPV mining, and how did it (inadvertently) cause the fork after BIP66 was activated? Available from.

[4] A. Gervais, S. Capkun, G. O. Karame, and D. Gruber. On the Privacy Provisions of Bloom Filters in Lightweight Bitcoin Clients. In Proceedings of the 30th Annual Computer Security Applications Conference (ACSAC 2014), New Orleans, LA, USA, December 8-12, 2014, pages 326-335, 2014.

[5] M. Hearn. Connection Bloom Filtering, 2012. Available from bitcoin/bips/blob/master/bip-0037.mediawiki.

[6] B. H. Bloom. Space/Time Trade-offs in Hash Coding with Allowable Errors. Communications of the ACM, 13(7):422-426, 1970.
[7] K. Christensen, A. Roginsky, and M. Jimeno. A New Analysis of the False Positive Rate of a Bloom Filter. Information Processing Letters, 110(21):944-949, 2010.

[8] The Tor Project. Available from.

[9] S. J. Swamidass and P. Baldi. Mathematical Correction for Fingerprint Similarity Measures to Improve Chemical Retrieval. Journal of Chemical Information and Modeling, 47(3):952-964, 2007.
Chapter 7 Bitcoin’s Ecosystem by Angelo De Caro and Ghassan Karame
In this chapter, we explore the salient points of the Bitcoin ecosystem. We also discuss the business innovations that this cryptocurrency is fueling. At the time of writing, Bitcoin is heavily used to perform cross-border payments, machine-to-machine transactions, and stock settlements, just to name a few applications. Data from blockchain.info shows that the overall daily transaction volume has increased over time, moving from approximately 100,000 confirmed Bitcoin transactions in June 2015 to 230,000 in May 2016 [1]. Furthermore, data from the Bitcoin merchant processor BitPay, launched in 2011, shows a general trend toward increased merchant transactions [2]; November 2015 recorded an all-time high of around 100,000 Bitcoin transactions. TigerDirect, a publicly traded online electronics retailer, has seen encouraging results: 46% of the clients that paid with Bitcoin at TigerDirect were new customers, and orders placed with Bitcoin were 30% larger.

Certainly, one of the main reasons for this expansion is the growth in the number of Bitcoin-accepting merchants. While only 2% of merchants currently accept Bitcoin, 25% of merchants expect to offer it within the next two years, according to a recent survey by Goldman Sachs and the Electronic Transactions Association [3]. Estimates also foresee that the number of merchants accepting Bitcoin will rise from the current 160,000 to 1.8 million in 2017.
Bitcoin as an asset class is also maturing. For the majority of 2015, Bitcoin's exchange price remained relatively stable, fluctuating between $200 and $300. By contrast, from January 1, 2013 to January 1, 2014, the price went from $13.41 to $808.05, reaching as high as $1,147.25 on December 4; just one month earlier, on November 4, 2013, the price was $225.20. On the other hand, the market capitalization of Bitcoin is down from an all-time high of nearly $14 billion to around $7 billion at the time of writing. Regulation also seems to be changing: organizations such as Coin Center and the Chamber of Digital Commerce work to help regulators draft rules that will ensure that Bitcoin can continue to grow worldwide.

The remainder of this chapter is organized as follows. In the first three sections, we explore the financial aspects of the Bitcoin ecosystem. Namely, we describe how payments and exchanges can be executed and how to store the Bitcoin coins (BTCs) that one possesses; we also discuss the security of Bitcoin wallets. Then, we delve into mining and how different users can join forces to mine BTCs. Finally, we discuss the impact of Bitcoin on the gambling business.
7.1 PAYMENT PROCESSORS
One of the biggest benefits of the decentralized nature of Bitcoin is that anyone can start accepting payments without the need to register an account with a third-party provider. Nevertheless, for many businesses, especially small ones, the learning curve required by the new system is sometimes too steep; for them, it is more convenient to pay a small fee to a payment processor. Instant conversion of Bitcoin (BTC) to local fiat currencies (e.g., USD, EUR, Yuan) is one of the most popular services offered by payment processors. This service is crucial for all businesses that accept Bitcoin payments but still have to pay all or part of their own costs in fiat currencies, as it reduces the risk of losses caused by fluctuations in the exchange rates between Bitcoin and fiat currencies.

Instant conversion is not the sole service offered by payment processors; they usually provide an entire suite of tools and services to make the adoption of Bitcoin convenient and simple. We can divide the payment mechanisms into two main categories. The first is the so-called person-to-person payment mechanism, which addresses small businesses and is the simplest way to accept BTCs, while the second is the
Bitcoin’s Ecosystem
145
point-of-sale (POS) solution, which targets larger organizations. In Chapter 4, we discuss the security of Bitcoin payments in detail.

In the context of the person-to-person payment mechanism, the simplest way to accept Bitcoin payments is to have the customer send the required amount of BTCs directly to the digital wallet of the merchant. In order to streamline this process, CoinBox [4], a leading Bitcoin trading platform in Malaysia, offers an interesting solution: the merchant, using an application on his or her smartphone, converts the price of a good or service into a QR code that contains the amount to be paid and the address of the recipient. The customer scans the QR code with his or her Bitcoin wallet application, and the payment is sent.

Despite their simplicity, person-to-person payment systems are unlikely to be used by large businesses, which are interested in solutions for accepting BTCs that integrate smoothly with their existing POS systems. The market therefore offers many POS solutions that a merchant can choose from in order to satisfy his or her specific requirements. Coinify [5] is a Danish firm that offers POS solutions allowing payments to be accepted in person, anywhere, from anyone; merchants can get paid in 17 digital currencies, in fiat currency, or in a mixture of the two. CoinKite [6] is a start-up that offers a Bitcoin payment terminal that looks exactly like the usual chip-and-PIN terminals commonly found in stores. This handset reads a Bitcoin-based debit card, also offered by CoinKite, can serve as a Bitcoin and Litecoin ATM, and offers the option of printing QR codes for customers to scan with their smartphone applications. BitPay [7] is an international payment processor for businesses and charities. It is integrated into the SoftTouch POS system for bricks-and-mortar retail stores; however, BitPay has an API that can easily be integrated within any other POS system.
BitPay has various tariffs that merchants can subscribe to, enabling features such as using the service on a custom domain for online stores. Revel Systems [8] offers different iPad-based POS solutions to satisfy various merchant categories, from restaurants to retail outlets, and also supports Bitcoin as a method of payment. Paystand Bitcoin Merchants [9] aims to be a multipayment gateway that eliminates merchant transaction fees by supporting digital currency acceptance. Finally, XBTerminal [10] provides a Bitcoin POS device that allows the merchant's customers to pay from any mobile Bitcoin wallet by NFC or QR code; payment from off-line mobile devices is supported via Bluetooth. Payments take place through the company's platform and, if desired, BTCs can be converted instantly to fiat currency at the time of sale.
7.2 BITCOIN EXCHANGES
Essentially, Bitcoin exchanges allow the transfer of fiat currencies into BTCs or other digital currencies, and vice versa. The basic workflow is quite simple: a user, equipped with an account at his or her preferred exchange service, first deposits money (in the currencies supported by the exchange service) into the account. Subsequently, the user can start trading with other users of the same exchange service or with the service itself, and can withdraw money from the account. Similar to traditional currency exchange services, trading is performed by placing buy or sell orders, and the exchange service is in charge of matching the orders. A buy order is an offer to buy BTCs in exchange for another currency (fiat or digital); a sell order is an offer to sell BTCs. An exchange can be performed if the price of a buy order is higher than the price asked by a sell order. As long as the service behaves correctly, there is no risk of losing money.

While exchanging BTCs, users must take particular care given the intrinsic nature of the currency; once a BTC is spent, it is hard to reverse the payment. Therefore, exchanging BTCs via payment methods like credit cards exposes the seller to the risk of charge-back fraud [11].

A number of solutions are currently offered in order to streamline the exchange process. For example, Coinbase has an option to link a user's bank account to his or her Coinbase wallet [12]; Coinbase also offers automatic purchasing of BTCs at regular intervals. BitStamp, on the other hand, acts as a mediator enabling a user to trade with other users. Exchanges are not the only way to acquire BTCs: a popular service called Local Bitcoins [13] pairs up potential buyers and sellers, allowing people from different countries to exchange their local currency for BTCs. Local Bitcoins offers an escrow service to protect the buyer of BTCs, meaning that it holds money on behalf of the transacting parties.
7.3 BITCOIN WALLETS
As mentioned earlier, the main goal of wallets is to securely store the private keys needed to spend the BTCs that one possesses. Wallets come in various forms designed to satisfy specific requirements. The cheapest way to store BTCs securely is to use paper wallets. Basically, a paper wallet service generates an address for the user and creates an image that
Bitcoin’s Ecosystem
147
consists of two QR codes: the first encodes the address at which the user can receive BTCs, and the second encodes the secret key to be used to spend the BTCs received at that address.

A number of mobile wallets have also emerged in order to allow payments via smartphones. These wallets typically store locally the private keys needed to spend the BTCs and enable payment directly from the phone. NFC technology can further streamline the payment process: by tapping the phone against a reader, the payment can be processed without the need for extra interaction. Mobile wallets are made possible by the simplified payment verification (SPV) mechanism. Indeed, SPV allows a smartphone to verify that a transaction is included in the Bitcoin blockchain without downloading the entire blockchain (see Chapter 6). On the other hand, classical desktop wallets can offer more advanced services, such as support for transaction anonymization to prevent tracking. In this space, Electrum [14] is one of the most interesting offerings: it supports multisig (see Chapter 3) to split the permission to spend one's coins between several wallets, integrates with Bitcoin hardware wallets, and supports SPV mode for transaction verification.

Another category in the Bitcoin wallet space is that of online wallets. These are web-based wallets that store the private keys in the cloud and provide high availability and ubiquity; an online wallet can be accessed from virtually anywhere and from any device. At the same time, these wallets exhibit one fundamental drawback: the private keys are no longer under the control of the owner. This puts the BTCs that one possesses at serious risk.

Hardware wallets are dedicated devices that digitally hold private keys and assist with payments. The market offers various devices, some of which are certified against different kinds of attacks (both physical and logical).
In the case of hardware wallets, recoverability in case of hardware failure is probably one of the most crucial aspects. The combination of the services offered by an online wallet and the security provided by hardware wallets is certainly very attractive and can reduce the risks of having one's BTCs stolen. These wallets are currently very limited in number and can be tamper-resistant. Examples include the Trezor hardware wallet and the Ledger USB wallet.

Although Bitcoin is the most prominent cryptocurrency at the time of writing, it is not the only one. This often leads to the situation where several different digital wallets need to be maintained to keep the coins, at least one for each of them. A viable solution to this issue is to use so-called multicurrency wallets that unify the management of multiple coins under a single interface.
148
Bitcoin and Blockchain Security
A Ripple wallet [15] can be used to hold any currency or asset (including fiat money as well as digital currencies) for which there is a gateway. Gateways are businesses that provide a way for money and other forms of value to move in and out of Ripple [16]. The list of supported currencies is quite large [17]. HolyTransactions [18] is another interesting multicurrency web wallet that supports different coins and allows spending them from the same place or exchanging them instantly one for another.

7.3.1 Securing Bitcoin Wallets
Recall that Bitcoin transactions basically consist of transferring the outputs of unspent previous transactions to a new public key (address). Therefore, to redeem a given coin/transaction, peers simply have to sign the transfer of this coin using a private key that matches the public key to which the transaction was sent (see Figure 4.1). Clearly, the compromise or loss of a private key means that peers can no longer redeem any transaction sent to the corresponding public key.

By default, each user possesses a digital wallet that is part of the standard (desktop and/or mobile) Bitcoin installation. Wallets can be migrated from one machine to another and contain/manage all the private keys corresponding to the user. Clearly, if the machine of the user breaks, these keys will be lost, and the coins owned by the user will consequently be lost as well. Moreover, the literature features a number of anecdotes where private keys were stolen from the devices of users, wallets, and exchange markets [19]. For example, almost half a billion dollars has been claimed to be stolen from one of the biggest Bitcoin exchange markets, Mt. Gox [19]. In another incident, affecting the MyBitcoin web wallet, more than 78,739 BTC [20] were stolen. Protecting private keys from damage, loss, and compromise is therefore of utmost importance. In what follows, we discuss the security of existing Bitcoin wallets.

7.3.1.1 Security of Online Wallets
Different types of online wallets have emerged. Some store the private keys on the server side (some of which encrypt the private keys before storing them), while others store them locally in the browser of the user. Depending on where the private key is stored, online operators can gain unilateral powers over the BTCs of their users.
For example, in April 2013, a theft of 923 BTCs occurred in the mining pool OzCoin. A subset of the stolen BTCs were transferred to a web wallet hosted by StrongCoin. Although StrongCoin claims that it supports user privacy and does not have access to user funds, StrongCoin intercepted the allegedly stolen BTCs and transferred them back to OzCoin [21]. This exemplifies the degree of control that an online wallet can have over the BTCs owned by its clients. Note that if the private keys of clients are stored in the users' browsers or are stored encrypted (with a key kept at the user's premises), then private keys can still be lost if, for example, the computer breaks. Therefore, existing solutions that place minimum trust in the wallet operator might not necessarily increase the resilience against the loss of BTCs (e.g., due to hardware failure).

7.3.1.2 Security of Hardware Wallets
Hardware wallets safely store the private keys of individuals without the need to rely on third-party Bitcoin storage services. Note that in the case of loss of the hardware wallet, the private keys are irrecoverable, as are the associated BTCs of the clients.

7.3.1.3 Security of Paper Wallets

7.3.1.4 Multisig Transactions
Multisignature addresses are addresses associated with more than one ECDSA private key. Generally speaking, these can be m-out-of-n addresses, where n private keys are created for a given public address such that any m ≤ n private keys can spend the coins stored within the public address. The primary use of multisignatures is to considerably increase the difficulty of stealing coins. For example, the m private keys could be stored on different
machines/devices. Moreover, this scheme can resist the loss of up to (n−m) private keys. Finally, multisignatures can also be used in scenarios where an address is shared by multiple people, and a majority vote is required to spend the BTCs stored within that address.

7.3.1.5 Trusted Computing
One possible alternative to secure the storage of private keys would be to borrow techniques from trusted computing [23]. For example, one can leverage hardware support, such as TPM chips, ARM TrustZone [24], and Intel SGX [25], to securely seal the private keys stored on users' personal devices. These private keys can then only be unsealed and recovered if the software state of the device at the time of recovery is exactly the same as at the time of sealing, thus ensuring that no malware is present at the time of unsealing. Note that this does not protect against hardware failures; in this case, private keys might not be recoverable.

7.3.1.6 Multicloud Storage
Another alternative to secure private keys against loss or theft would be to rely on multicloud storage systems. Multicloud storage systems typically rely on a number of commodity cloud providers (e.g., Amazon, Google) with the goal of distributing trust across different administrative domains. This model is receiving increasing attention nowadays, with leading cloud storage providers such as EMC, IBM, and Microsoft offering products for multicloud systems [26]. Here, the private keys could be secret-shared, and each share could be stored across multiple clouds. Recall that secret-sharing schemes allow a user to distribute a secret among a number of entities, such that only authorized subsets of shareholders can reconstruct the secret. In threshold secret-sharing schemes, the user can define a threshold t such that any t out of n shares can reconstruct the secret. Secret-sharing guarantees security against a nonauthorized subset of shareholders; the combination of any 0 ≤ m < t shares does not leak any meaningful information about the secret [27–29]. By secret-sharing the private keys in a multicloud storage system, the security of the private keys is ensured unless t or more cloud operators collude. Moreover, such a solution resists the failure of up to (n − t) clouds. Note that applications for multicloud storage systems can be made available on all devices of the users, thereby allowing users to seamlessly synchronize their keys across
their devices. Table 7.1 summarizes the provisions of the investigated alternatives to secure Bitcoin wallets.

Table 7.1
Provisions of Alternatives to Secure Bitcoin Wallets

Technique               Resists Hardware Failures   Resists Cyber Attacks   Resists Loss
Standard wallets        No                          No                      No
Online wallets          Yes                         No                      Yes
Hardware wallets        No                          Yes                     No
Paper wallets           Yes                         Yes                     No
Multisig transactions   Yes                         Partially               Partially
Trusted computing       No                          Yes                     No
Multicloud storage      Yes                         Yes                     Yes
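The t-out-of-n reconstruction property behind the multicloud alternative can be sketched with Shamir's scheme over a prime field. The parameters below (field size, n = 5, t = 3) are illustrative only, not a production key-management design:

```python
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a toy secret

def split(secret, n, t):
    # Random polynomial of degree t-1 with f(0) = secret; share i is (i, f(i)).
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers f(0) = secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = random.randrange(P)          # stand-in for a private key
shares = split(key, n=5, t=3)      # e.g., one share per cloud provider
assert reconstruct(shares[:3]) == key   # any 3 shares recover the key
```

Any three cloud operators could reconstruct the key, while any two shares reveal nothing about it, and the loss of up to two clouds is tolerated.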
7.4 MINING POOLS
The current difficulty level of mining BTCs is so prohibitively high that it reduces the incentives for miners to operate alone. Joining a mining pool is an attractive option to receive a portion of the Bitcoin block reward on a consistent basis. Namely, mining pools offer a way for miners to contribute their resources to generate a block and to split the reward between all the pool members following a certain reward payment scheme. Shares of the reward are assigned by the mining pool to those members who present valid proof-of-work. In more detail, a mining pool sets a difficulty level between 1 and the currency's difficulty. Subsequently, a share is assigned to those miners that provide a block header that scores a difficulty level between the pool's difficulty level and the currency's difficulty level. The main purpose of these block headers is to show that the miner is contributing a certain amount of processing power.

When deciding which mining pool to join, it is important to understand the reward payment scheme used by the pool and the fees that the mining pool operator deducts. Typical fee values range from 1% to 10%. However, some pools do not deduct any fees, meaning that the full block reward is distributed among the mining pool participants. The most basic reward payment scheme is Pay Per Share (PPS), which offers an instant, guaranteed payout for each share that is solved by a miner. The pools use
their existing balance to pay the miners, who can withdraw their payout immediately without the need to wait for a block to be solved or confirmed. As such, the PPS scheme requires a large reserve of money to avoid bankruptcy. A variation of PPS is the Shared Maximum Pay Per Share (SMPPS), which never pays more than the Bitcoin mining pool has earned [30]. In the Equalized Shared Maximum Pay Per Share (ESMPPS) [31], payments are distributed equally among all miners in the pool. Recent Shared Maximum Pay Per Share (RSMPPS), on the other hand, gives priority to the most recent Bitcoin miners. The Pay On Target (POT) scheme also takes into consideration the difficulty level of the proof-of-work submitted by the miners [32].

A different approach is that offered by the Proportional (PROP) scheme [33]. PROP proposes a proportional distribution of the reward among all miners when a block is found, based on the number of shares each of them has found. A variation on the PROP scheme is Pay Per Last N Shares (PPLNS) [34], which, rather than counting the number of shares in the round, looks at the last N shares (where N is a parameter of the scheme), no matter when they were generated. The SCORE scheme [35] distributes a proportional reward scored by the time the work was submitted; miners are then rewarded based on the score of their shares rather than their number. Another approach is that offered by the double geometric method (DGM) scheme [36]. This scheme was designed to resist pool-hopping, a form of misbehavior where a miner only participates at the beginning of a round; under DGM, the expected payout per share is always the same no matter when it was submitted. In order to reduce the risk for the mining pool to be cheated by miners that switch pools during a round, the Bitcoin pooled mining (BPM) [37] scheme assigns lower scores to older shares (from the beginning of a block round), and higher scores to more recent shares.
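The contrast between the pay-per-share and pay-per-last-N-shares families can be illustrated with a toy payout calculation. The share difficulty, block reward, and fee values below are made up for illustration:

```python
def pps_payout(shares, share_difficulty, network_difficulty,
               block_reward=25.0, fee=0.02):
    # Pay Per Share: each share has a fixed expected value, paid immediately,
    # regardless of whether the pool ever finds a block.
    per_share = block_reward * share_difficulty / network_difficulty
    return shares * per_share * (1 - fee)

def pplns_payouts(share_log, N, block_reward=25.0):
    # Pay Per Last N Shares: when a block is found, split the reward over the
    # last N submitted shares, whatever round they were submitted in.
    window = share_log[-N:]
    payouts = {}
    for miner in window:
        payouts[miner] = payouts.get(miner, 0) + block_reward / len(window)
    return payouts

# A miner submits 1,000 shares at pool difficulty 1; PPS pays out instantly.
print(pps_payout(1000, 1, 50_000_000))

# A block is found; the reward is split over the last 4 shares submitted.
print(pplns_payouts(["alice", "bob", "alice", "alice", "bob"], N=4))
```

PPS shifts all variance onto the pool (hence the need for a large reserve), while PPLNS pays nothing until a block is found and penalizes miners who hop away before that happens.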
Another important aspect to take into consideration when deciding to start mining is choosing which cryptocurrency to mine, given that there are many alternative cryptocurrencies. An attractive option, whenever one is not sure which currency to mine, is a pool called Multipool that automatically switches one's mining hardware to the most profitable currency [38].

7.4.1 Impact of Mining Pools on Decentralization
Figure 7.1 depicts the distribution of computing power among mining pools in the Bitcoin network between October 10 and October 14, 2015. Our findings show that more than 75% of the computing power in Bitcoin was controlled in the investigated
period by five major centralized mining pools. If these pools were to collude in order to acquire more than 50% of the computing power share in the network, they could effectively control the confirmation of all transactions occurring in the system. This includes preventing transactions from being executed, approving a specific set of transactions, and double-spending transactions [39]. Moreover, given the results of [40–42], F2Pool and AntPool, which control around 38% of the computing power in the network, can further considerably increase their advantage in the network by performing selfish mining; in the worst case, these two mining pools can together determine the fate of all transactions in the Bitcoin system.

Note that conscious miners can play an important role in preventing a mining pool from controlling the computing power in the network. For example, miners of the Ghash.io pool actively abandoned the pool in 2014 for fear of allowing the pool operator to acquire more than 51% of the computing power in the network [43]. This forced Ghash.io's owner to issue a public statement reassuring the community that the mining pool will take "all necessary precautions" in order to avoid controlling 51% of the computing power in the network. We however point out that it suffices for a given mining pool to control the majority of the computing power in the network for a short amount of time (i.e., before raising the suspicion of miners) in order to cause considerable damage in the network. Namely, a malicious pool could, for example, double-spend all transactions occurring within that period of time or selectively reverse payments, a misbehavior that might be economically justified when reversing/ill-spending large amounts.

While most existing mining pool protocols assume the existence of a logically centralized operator that orchestrates the block generation process, a number of fully decentralized mining pools, such as P2Pool, have been proposed.
Such pools share the benefits of centralized pools, since all the participating users get regular payouts that reflect their contribution toward generating a block. However, these pools do not require the existence of any centralized coordinator and operate in a completely decentralized fashion. Currently, P2Pool only holds a marginal share of the computing power in the network; Bitcoin users can only hope that such decentralized pools can be transformed into profitable businesses in the near future in order to attract most miners. However, Bitcoin's reward mechanism provides no particular incentive for users to adopt such decentralized alternatives.

Currently, members of mining pools need to frequently submit cryptographic proofs to the pool operator in order to demonstrate that they are indeed contributing their computing power to the benefit of the pool. This mechanism is used to enhance trust among different untrusted pool members. In [44], Miller et al. observed that
if the proof-of-work would effectively enable the miner to steal the reward without leaving any evidence, then miners would not have any incentive to join pools. Namely, any pool operator wishing to outsource the proof-of-work risks losing the mining reward. Miller et al. then proposed two constructs of such nonoutsourceable puzzles, which ensure that even if there are legal contracts between the pool operator and the miners, miners can effectively steal the rewards without leaving any evidence of misbehavior. Implementation results show that these puzzles add additional but tolerable overhead to the cost of Bitcoin blockchain validation.

Although a few mining pools clearly control the computing power in the network (see Figure 7.1), we argue that there is still hope for reducing the impact of mining pools on the Bitcoin system. Given that Bitcoin relies on the notion of controlled supply that effectively limits the total number of generated BTCs (i.e., the amount of BTCs that are generated for each block is halved every 4 years), the ultimate dominance of mining pools is expected to decrease with time, since their profits would depend less and less on self-awarded BTCs and more on transaction fees. This, in turn, also increases the contribution of individual users to the Bitcoin economy. Indeed, mining pool operators would then have less incentive to accept client versions that are adopted by a minority of the clients (see Section 7.6). Users would then contribute more to the decision-making process in Bitcoin by deciding to adopt a client version that suits their preferences. Recall that since the Bitcoin source code is open source, there are already a considerable number of different Bitcoin implementations [39].

to casino games with live dealers via poker that is currently in vogue. For many, BitcoinGG [46] is seen as the most prominent Bitcoin gambling platform; BitcoinGG constantly reviews the newest Bitcoin gambling sites.
Fairness is surely an important aspect of online games. Today, many gambling sites provide proof that they are operating legitimately by providing evidence for provably fair games [47]. More specifically, a gambling site that offers provably fair games allows public verifiability of the outcomes of the games based on the gambler's inputs and secret information that is revealed at the end of a round.
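A minimal commit-reveal sketch of such a provably fair game follows. This is a hypothetical illustration, not the protocol of any specific site: the operator commits to a secret seed before bets are placed, derives each outcome from the seed and the gambler's input, and reveals the seed afterwards so anyone can recheck the round.

```python
import hashlib
import hmac
import secrets

# Operator side: commit to a secret seed before the round starts.
server_seed = secrets.token_bytes(32)
commitment = hashlib.sha256(server_seed).hexdigest()   # published up front

def roll(seed: bytes, client_input: str, sides: int = 6) -> int:
    # The outcome depends on both the committed seed and the gambler's input,
    # so neither party can bias it unilaterally.
    digest = hmac.new(seed, client_input.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % sides + 1

outcome = roll(server_seed, "bet-42")

# End of round: the seed is revealed; anyone can verify both the
# commitment and the outcome.
assert hashlib.sha256(server_seed).hexdigest() == commitment
assert roll(server_seed, "bet-42") == outcome
```

Because the seed is fixed by the published hash before any bet, the operator cannot retroactively pick favorable outcomes without the mismatch being publicly detectable.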
[Figure 7.1 appears here: a bar chart titled "Hashrate Distribution on 4 Days" (y-axis 0–20% per pool), covering pools including F2Pool, AntPool, BTCChina Pool, KnCMiner, Slush, BW.COM, 21 Inc., Eligius, Telco 214, GHash.IO, BitClub Network, P2Pool, BitMinter, Solo CKPool, Kano CKPool, Eclipse MC, BitFury, and unknown miners.]
Figure 7.1 Distribution of computing power in Bitcoin between October 10, 2015, and October 14, 2015. More than 50% of the computing power in the network is controlled by F2Pool, AntPool, BTCChina Pool, and BitFury.
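The danger posed by such concentrated hashrate can be quantified with the catch-up probability from Nakamoto's original whitepaper analysis: an attacker holding fraction q of the hashrate eventually overtakes an honest chain that is z blocks ahead with probability 1 if q ≥ 1/2, and (q/(1−q))^z otherwise. A sketch:

```python
def catch_up_probability(q, z):
    # Probability that a miner with hashrate share q ever closes a gap of
    # z blocks against the honest chain (gambler's-ruin argument from the
    # Bitcoin whitepaper).
    p = 1.0 - q
    return 1.0 if q >= p else (q / p) ** z

# A 10% pool essentially never reverses 6 confirmations...
print(catch_up_probability(0.10, 6))
# ...but anything at or above 50% of the hashrate succeeds with certainty.
print(catch_up_probability(0.51, 6))
```

This is why a coalition of a handful of pools crossing the 50% line, even briefly, changes the security model qualitatively rather than gradually.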
An example of such a gambling site is SatoshiDICE [48]. The game generates so-called lucky numbers by using the transaction ID and a secret that is disclosed at the end of the day. Note that online gambling is not legal in a number of countries.
7.6 PROTOCOL MAINTENANCE AND MODIFICATIONS
The original Bitcoin client was developed by Satoshi Nakamoto in 2008. Nakamoto continued to support the maintenance and releases of the Bitcoin client until mid-2010, when he was replaced by a small group of Bitcoin developers. The Bitcoin core developers have the authority to make all the necessary modifications to the Bitcoin protocol; according to the Bitcoin GitHub repository, all radical decisions require consensus among all the developers. For example, in the Bitcoin client version 0.8.2, the developers unilaterally introduced a fee policy change and decided to lower the default fee for low-priority transactions from 0.0005 BTC to 0.0001 BTC. Clearly, this empowers the Bitcoin developers to regulate and control the entire Bitcoin economy.
7.6.1 Bitcoin Improvement Proposals
In order to affect the Bitcoin development process, Bitcoin users are requested to file a Bitcoin Improvement Proposal (BIP) [39] that is assessed by the Bitcoin developers. The developers then unilaterally decide whether such a proposal will be supported by future Bitcoin releases. This limits the impact that users have, irrespective of their computing power, on the development of the official Bitcoin client.

Recent events reveal that contributing within the Bitcoin community is not a trivial process [49, 50]. Recently, several of the original lead developers of Bitcoin decided to stop supporting the system due to a large debate on the future of the emerging currency. At the core of this debate lie deep disagreements about source code governance, namely with respect to expanding the block size in Bitcoin. In theory, such a debate should be resolved by the computing power in the network (i.e., the miners). Note that most of the computing power was collectively held at the time by two Chinese mining pools; these considerably biased the decision toward keeping the maximum block size at 1 MB. Some allege that this decision was politically motivated by the desire of the Chinese mining pools to prevent the growth of the system [50]. This large debate resulted in the exit of developers who were favoring the increase of the maximum block size; another immediate outcome of this debate was that Coinbase, one of the known Bitcoin start-ups, was banned from community forums for siding with those developers [50]. These events clearly show the lack of democracy in the governance of Bitcoin, even among the developers who are behind the Bitcoin core software.

7.6.2 The Need for Transparent Decision Making
In some settings, it is inevitable that various client versions/implementations require constant maintenance and development by a group of leading developers. Here, problems arise in those situations where the developers have to take action in order to resolve possible conflicts that may have arisen. Indeed, this process needs to be completely transparent and should be tightly regulated in order not to abuse the trust of users and to minimize such unilateral interventions in the system. For example, in order to prevent the (ab-)use of alerts in Bitcoin, these alerts should be accompanied
with provable and undeniable justifications. Based on these proofs, users can then decide whether to accept such warnings. For instance, double-spending alerts can include the double-spending transactions [51]; this provides irrefutable proof that a given address is double-spending. Finally, careful planning and testing of version releases is required so as to ensure backward compatibility with previous versions.
7.7 CONCLUDING REMARKS
In this chapter, we analyzed the ecosystem of Bitcoin, whose market cap currently exceeds $3 billion [52]. As described in this chapter, there are numerous businesses, exchange platforms, and wallets that are currently built around the Bitcoin ecosystem.

Unlike previous electronic cash proposals, Bitcoin's proposal was rather straightforward. We believe that an additional reason that led to the sustainability and growth of the Bitcoin system was the ability of the developers to assimilate research results from the security community and integrate them swiftly within the development of released client implementations. In the remaining chapters, we discuss in detail the various security and privacy provisions of Bitcoin and its underlying blockchain, effectively capturing eight years of thorough research on these subjects in the Bitcoin community.

We however note that a large number of centralized services currently host Bitcoin and control a considerable share in the Bitcoin market. Even worse, Bitcoin developers retain privileged rights in conflict resolution and maintenance of the clients' software. These entities altogether can decide the fate of the entire Bitcoin system, thus bypassing the will, rights, and computing power of the multitude of users that populate the network.
Currently, almost every financial system is controlled by governments and banks; Bitcoin substitutes these powerful entities with other entities, such as IT developers and owners of mining pools. While current systems are governed by means of transparent and thoroughly investigated legislation, vital decisions in Bitcoin are taken through the exchange of opinions among developers and mining pool owners on mailing lists. In this sense, Bitcoin now finds itself in unfamiliar territory: on one hand, the Bitcoin ecosystem is far from being decentralized; on the other hand, the increasing centralization of the system does not abide by any transparent regulations/legislation. This could, in turn, lead to severe consequences for the fate and reputation of the system.
References

[1] Blockchain.info. The number of daily confirmed Bitcoin transactions. Available from https://blockchain.info/charts/n-transactions.
[2] Blockchain.info. Understanding Bitcoin's growth in 2015. Available from bitpay.com/understanding-bitcoins-growth-in-2015/.
[3] Goldman Sachs. ETA Goldman Sachs merchant acquirer and ISO survey: Spring 2015.
[4] coinbox.biz. Real time, safe and easy with Coinbox.
[5] coinify.com. Coinify: Blockchain payments.
[6] coinkite.com. Coinkite: Bitcoin wallet with multi-signature bank-grade security.
[7] bitpay.com. BitPay: A Bitcoin payment processor.
[8] revelsystems.com. Revel iPad point-of-sale software.
[9] paystand.com. PayStand: 0% business payments.
[10] xbterminal.io. XBTerminal: Bitcoin POS system.
[11] Danny Bradbury. How credit card fraud sank one Bitcoin exchange.
[12] coinbase.com. Coinbase: Bitcoin wallet.
[13] localbitcoins.com. LocalBitcoins.com: Fastest and easiest way to buy and sell bitcoins.
[14] electrum.org. Electrum: Bitcoin wallet.
[15] ripple.com. Ripple: Instant, certain, low-cost international payments. Available from https://ripple.com/.
[16] ripple.com. What is a Ripple gateway?
[17] ripple.com. Ripple popular gateways.
[18] holytransaction.com. Multi-currency wallet that actually works. Available from https://holytransaction.com/.
[19] The 6 Biggest Bitcoin Heists in History, 2014.
[20] List of Major Bitcoin Heists, Thefts, Hacks, Scams, and Losses. Available from https://bitcointalk.org/index.php?topic=83794.0.
[21] Arthur Gervais, Ghassan Karame, Srdjan Capkun, and Vedran Capkun. Is Bitcoin a Decentralized Currency? IEEE Security and Privacy Magazine, May/June issue, 2014.
[22] Bitcoin paper wallet generator, 2014.
[23] Alexandra Dmitrienko, David Noack, Ahmad-Reza Sadeghi, and Moti Yung. Poster: On offline payments with Bitcoin. In FC'2014: Financial Cryptography and Data Security Conference, 2014.
[24] Building a Secure System using TrustZone Technology. arm.com/help/topic/com.arm.doc.prd29-genc-009492c/PRD29-GENC009492C_trustzone_security_whitepaper.pdf, 2009.
[25] Software Guard Extensions Programming Reference. sites/default/files/329298-001.pdf, 2013.
[26] Ghassan O. Karame, Claudio Soriente, Krzysztof Lichota, and Srdjan Capkun. Securing cloud data in the new attacker model. IACR Cryptology ePrint Archive, 2014:556, 2014.
[27] A. Shamir. How to Share a Secret. In Communications of the ACM, pages 612–613, 1979.
[28] H. Krawczyk. Secret Sharing Made Short. In International Conference on Advances in Cryptology, 1993.
[29] Amos Beimel. Secret-sharing schemes: A survey. In Third International Workshop on Coding and Cryptology (IWCC), pages 11–46, 2011.
[30] eligius.st. Mining pool: Shared maximum PPS. Available from index.php/Shared_Maximum_PPS.
[31] bitcointalk.org. Mining pool: Equalized shared maximum pay per share. Available from https://bitcointalk.org/index.php?topic=12181.msg378851#msg378851.
[32] bitcointalk.org. Mining pool: Pay on target.
[33] en.wikipedia.org. Mining pool: Proportional. Available from wiki/Mining_pool#Proportional.
[34] en.wikipedia.org. Mining pool: Pay-per-last-N-shares. Available from https://en.wikipedia.org/wiki/Mining_pool#Pay-per-last-N-shares.
[35] en.bitcoin.it. Mining pool: Score. Available from Comparison_of_mining_pools.
[36] bitcointalk.org. Mining pool: Double geometric method. Available from https://bitcointalk.org/index.php?topic=39497.0.
[37] en.wikipedia.org. Mining pool: Bitcoin pooled mining. Available from https://en.wikipedia.org/wiki/Mining_pool#Bitcoin_Pooled_mining.
[38] multipool.us. Multipool: A Bitcoin, Litecoin, and altcoin mining pool.
[39] Bitcoin Wiki.
[40].
[41] Arthur Gervais, Hubert Ritzdorf, Ghassan O. Karame, and Srdjan Capkun. Tampering with the delivery of blocks and transactions in Bitcoin. IACR Cryptology ePrint Archive, 2015:578, 2015.
[42] Ittay Eyal and Emin Gün Sirer. Majority is not enough: Bitcoin mining is vulnerable. CoRR, abs/1311.0243, 2013.
[43] Bitcoin miners ditch Ghash.io pool over fears of 51% attack. Available from coindesk.com/bitcoin-miners-ditch-ghash-io-pool-51-attack/.
[44] Andrew Miller, Ahmed Kosba, Jonathan Katz, and Elaine Shi. Nonoutsourceable scratch-off puzzles to discourage Bitcoin mining coalitions. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, CCS '15, pages 680–691, New York, 2015. ACM.
[45] William Chambers. The breakdown: How big is real-money gaming?
[46] bitcoingg.com. Available from bitcoingg.com/.
[47] provablyfair.org. Provably fair.
[48] satoshidice.com. SatoshiDice.
[49] [Bitcoin-development] Revisiting the BIPS process, a proposal. Available from https:// msg02982.html.
[50] Mike Hearn. The resolution of the Bitcoin experiment, 2016.
[51].
[52] Bitcoin market cap, 2015.
Chapter 8
Applications and Extensions of Bitcoin

In the last couple of years, most research focused on the provisions of Bitcoin as a digital currency. Studies analyzed the security and privacy of making payments in Bitcoin, the underlying economy of Bitcoin, and so on, but largely overlooked a key enabling technology and a hidden potential within Bitcoin: the blockchain. Indeed, Bitcoin's blockchain emerges as a truly genuine breakthrough. This blockchain implements a novel distributed consensus scheme that allows transactions, and any other data, to be securely stored and verified without the need for any centralized authority, while scaling to a large number of nodes. As such, the blockchain has fueled innovation in the last couple of years, and a number of innovative applications have already been devised by exploiting its secure and distributed provisions. In this chapter, we overview a number of interesting extensions of Bitcoin. We also briefly discuss a number of applications that leverage Bitcoin's blockchain in order to create various services, such as decentralized storage and smart contracts, among others.
8.1 EXTENSIONS OF BITCOIN
At the time of writing, there are almost 500 alternate blockchains (also called altcoins) that offer alternative currency options besides BTCs. Most of these blockchains are clones of the Bitcoin blockchain, with minor configuration changes, namely:

Coin supply: A number of altcoins vary the total coin supply in the system.

Hash function: A number of altcoins rely on different underlying hash functions, such as SHA256 or scrypt.

Block generation times: Some altcoins rely on different block generation times/difficulty for the underlying proof-of-work.

In the remainder of this section, we briefly describe a number of prominent altcoin instantiations.

8.1.1 Litecoin
Litecoin is a well-known altcoin which, at the time of writing, holds the fourth largest market cap, after Bitcoin, Ripple, and Ethereum. Litecoin's code is basically a clone of Bitcoin's, with three basic differences. More specifically, the Litecoin network aims to generate a block every 2.5 minutes instead of the 10-minute interval featured by Bitcoin. This clearly allows for faster transaction confirmation times and, in turn, faster convergence on consensus in the network. This also suggests that Litecoin is better suited for fast payments where the time between the exchange of services and money is short (e.g., fast-food services). Another difference is that the Litecoin network is expected to generate 84 million Litecoins, which is four times larger than the number of currency units scheduled to be issued by the Bitcoin network.

A major difference between Litecoin and Bitcoin lies in the fact that Litecoin uses scrypt, a sequential memory-hard function, in order to reduce the advantage of computationally powerful miners. Namely, scrypt requires a large amount of memory in order to be efficiently computed; by doing so, scrypt trades off CPU-bound resources with memory-bound resources. The intuition is that memory access times vary much less than CPU speeds, and hence offer a fairer basis for constructing PoW. There is nevertheless specialized ASIC mining hardware currently available for scrypt-based PoW systems.
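The total supply of such a currency follows directly from its reward-halving schedule: summing the block reward over all halving periods. The sketch below assumes Litecoin's commonly cited parameters (an initial reward of 50 coins per block, halving every 840,000 blocks), alongside Bitcoin's 50 coins per block with halving every 210,000 blocks:

```python
def total_supply(initial_reward_coins, halving_interval_blocks):
    # Sum the geometric reward schedule in integer base units (satoshis),
    # halving until the reward rounds down to zero.
    reward = initial_reward_coins * 100_000_000
    total = 0
    while reward > 0:
        total += reward * halving_interval_blocks
        reward //= 2
    return total / 100_000_000

print(round(total_supply(50, 210_000)))   # Bitcoin: about 21 million
print(round(total_supply(50, 840_000)))   # Litecoin: about 84 million
```

The four-fold halving interval is exactly what yields Litecoin's four-times-larger supply cap, since the per-block reward schedule is otherwise the same.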
8.1.2 Dogecoin
Dogecoin further reduces the confirmation times of transactions to almost 1 minute. A direct drawback is that Dogecoin features a slightly higher probability of generating orphan blocks when compared to Litecoin and Bitcoin. Similar to Litecoin, Dogecoin also relies on scrypt, and increases the total coin supply that is forecast to be generated to around 100 billion Dogecoins. The Dogecoin currency has gained considerable traction as an Internet tipping system in which users grant Dogecoin tips to others in exchange for providing
Applications and Extensions of Bitcoin
interesting or noteworthy content. At the time of writing, Dogecoin is among the top ten currencies with respect to total market capitalization.

8.1.3 Namecoin
One of the first examples of the application of the blockchain is Namecoin [1]. Currently, the Internet Corporation for Assigned Names and Numbers (ICANN) governs nearly all top-level Web address domains, such as ".com". Namecoin acts as a decentralized Domain Name Service that is resilient to censorship and serves as a new domain name system for registering Web addresses that end in ".bit". By doing so, Namecoin empowers its miners to distributively control domain names. In Namecoin, each record consists of a pair comprising a key and a value that can be up to 520 bytes in size. Each key points to a path, with the namespace preceding the name of the record. For example, the key "d/example" signifies a record stored in the DNS namespace "d" with the name "example" and corresponds to the record for the example.bit website [1]. Note that the content of "d/example" should conform to the DNS namespace specification [2]. The current fee for inserting a record is 0.01 NMC (which denotes the currency in Namecoin), and records typically expire after 36000 blocks (approximately 200 days) unless they are updated. Similar to Bitcoin, Namecoin has a limited supply of 21 million Namecoins, which are released as a geometric series, by halving the generation amount every 4 years. Various statistics about the current usage dynamics of Namecoin are publicly available. Recent studies [3] have however shown that most users of Namecoin are not active and that the existing market for domains is almost nonexistent. For instance, [3] reveals that among Namecoin's roughly 120,000 registered domain names, only 28 have nontrivial content.
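A "d/" namespace record of the kind described above can be sketched as a key-value pair. The validation below checks only the two constraints mentioned in the text (the "d" namespace and the 520-byte value limit); the IP address and JSON fields are illustrative, not taken from an actual record.

```python
import json

# Illustrative record for the (hypothetical) example.bit domain.
record = {
    "key": "d/example",                        # namespace "d", name "example"
    "value": json.dumps({"ip": "192.0.2.1"}),  # DNS data, JSON-encoded
}

MAX_VALUE_BYTES = 520      # protocol limit on a record's value size
EXPIRY_BLOCKS = 36000      # records expire after ~200 days unless updated

def record_is_valid(rec) -> bool:
    """Check the namespace and value-size constraints described above."""
    namespace, sep, name = rec["key"].partition("/")
    return (sep == "/" and namespace == "d" and bool(name)
            and len(rec["value"].encode()) <= MAX_VALUE_BYTES)
```

A real Namecoin client would additionally verify the registration fee and track the record's remaining lifetime against EXPIRY_BLOCKS.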
8.1.4 Digital Assets
Digital Assets is a technology provider for blockchain-based services mainly aimed at settling transactions of financial institutions. Digital Assets owns two main blockchain products [4]: Hyperledger [5, 6] and Bits of Proof [7]. Hyperledger is a blockchain that instantiates distributed ledgers using Practical Byzantine Fault Tolerant protocols. By doing so, Hyperledger is able to support real-time financial settlements on a scale of tens of thousands of transactions per second. This is achieved by leveraging a private blockchain environment where all participants are known and permissioned. Bits of Proof, on the other hand, is an optimized variant of Bitcoin implemented in Java [8], which was later integrated within Hyperledger. Unlike BitcoinJ, Bits of Proof provides a Bitcoin server that is tailored for enterprise solutions with performance scalability in mind. Namely, the merger of Hyperledger and Bits of Proof resulted in switching Hyperledger's codebase to Java/Scala and adopting the UTXO transaction model. The UTXO transaction model allows Hyperledger to be interoperable with Bitcoin and other sidechains, so that users of Hyperledger benefit from the innovation of the Bitcoin community. A number of additional security features are also planned for the Hyperledger blockchain: for instance, Hyperledger seeks to protect the confidentiality of transactions without hindering the transaction validation process. This can be achieved, for example, by checking (in zero-knowledge) that the sum of outputs is smaller than or equal to the sum of inputs. As mentioned earlier, the Hyperledger brand name was donated to the Linux Foundation in December 2015; the Bits of Proof code was likewise donated to the Linux Foundation in December 2015. In Chapter 9, we discuss in greater detail the current operation of the rebranded Hyperledger project.
8.2 APPLICATIONS OF BITCOIN'S BLOCKCHAIN

We now proceed to outline a number of novel and innovative applications that leverage Bitcoin's blockchain.

8.2.1 Robust Decentralized Storage
As mentioned earlier, the blockchain allows different entities, such as banks, governments, and industrial players, to efficiently and securely reach consensus on the order of transactions, correctness of data, and so on. One of the envisioned exploitations of the blockchain lies in the construction of decentralized storage systems. The beauty behind this approach is that all data stored in the blockchain is expected to be replicated across a large number of nodes, which ensures a high level of reliability. In what follows, we start by discussing how to leverage the blockchain in order to store information. In the sequel, we assume that users have access to n storage nodes (e.g., public clouds), which have considerable storage capacity.

• Prior to storing object O on the nodes, the user generates a master secret K.
Figure 8.1 Using the blockchain as a metadata store.
• The user then computes Enc(K, O), which denotes the semantically secure encryption of object O under key K using function Enc.

• The user then stores the encrypted object Enc(K, O) redundantly on the n nodes and acquires n URIs P1, ..., Pn that point to the location of Enc(K, O) on each of the n nodes, respectively.

• The user then encrypts P1, ..., Pn using key K and stores the resulting encryptions Enc(K, P1), ..., Enc(K, Pn) as well as H(Enc(K, O)) on the blockchain. Here, H(.) refers to a cryptographic hash function. That is, given H(x), it is computationally infeasible to compute x (i.e., H(.) is a one-way function), and it is likewise infeasible to compute y ≠ x such that H(x) = H(y) (i.e., H(.) is collision-resistant).

• Once the information stored by the user is confirmed in the blockchain, the user is certain that the metadata information (i.e., the object hash, the URIs) can never be modified by any entity.

• To retrieve the information, the user's client simply retrieves the encrypted URIs, decrypts them using key K, and uses them to fetch the data stored on one of the storage nodes.
• The user verifies that the hash of the downloaded object matches H(Enc(K, O)) before decrypting and acquiring O.
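The steps above can be sketched end to end as follows. The XOR-based stream cipher and the in-memory StorageNode are toy stand-ins (a real deployment would use an authenticated cipher such as AES-GCM and actual cloud storage APIs); only the structure of the scheme is meant to be faithful to the protocol.

```python
import hashlib
import secrets

def enc(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher (SHA-256 in counter mode); illustration only."""
    out = bytearray()
    for i in range(0, len(data), 32):
        pad = hashlib.sha256(key + nonce + i.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[i:i + 32], pad))
    return bytes(out)

class StorageNode:
    """Minimal in-memory stand-in for a public cloud store."""
    def __init__(self):
        self._blobs = {}
    def put(self, blob: bytes) -> str:
        uri = hashlib.sha256(blob).hexdigest()[:16]
        self._blobs[uri] = blob
        return uri
    def get(self, uri: str) -> bytes:
        return self._blobs[uri]

def store(obj: bytes, nodes):
    key = secrets.token_bytes(32)                    # master secret K
    blob_nonce = secrets.token_bytes(16)
    blob = enc(key, blob_nonce, obj)                 # Enc(K, O)
    uris = [n.put(blob) for n in nodes]              # P_1 ... P_n
    uri_nonce = secrets.token_bytes(16)
    metadata = {                                     # confirmed on the blockchain
        "blob_nonce": blob_nonce,
        "uri_nonce": uri_nonce,
        "enc_uris": [enc(key, uri_nonce + bytes([i]), u.encode())
                     for i, u in enumerate(uris)],   # Enc(K, P_i)
        "digest": hashlib.sha256(blob).hexdigest(),  # H(Enc(K, O))
    }
    return key, metadata

def retrieve(key: bytes, metadata: dict, nodes):
    for i, (eu, node) in enumerate(zip(metadata["enc_uris"], nodes)):
        uri = enc(key, metadata["uri_nonce"] + bytes([i]), eu).decode()
        blob = node.get(uri)
        # Verify H(Enc(K, O)) against the blockchain metadata before decrypting.
        if hashlib.sha256(blob).hexdigest() == metadata["digest"]:
            return enc(key, metadata["blob_nonce"], blob)
    raise ValueError("no honest node returned an intact object")
```

Because the digest committed in the metadata is checked before decryption, a single honest storage node suffices for retrieval even if the other n − 1 nodes return corrupted data.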
Note that the aforementioned system (see Figure 8.1) is generic in the sense that the storage nodes could be emulated by the blockchain itself, provided that the blockchain has enough storage capacity to store large objects. For instance, the PDF of Bitcoin's white paper was included in a Bitcoin transaction [9]. Currently, Bitcoin's blockchain contains a considerable amount of such data [10], including the biography of Nelson Mandela, Wikileaks documents, software, and so on. However, in the particular case of Bitcoin, it is not practical to store large objects in the blockchain due to the various limitations imposed by the developers; recall that Bitcoin implements practices to ensure system scalability and to avoid large storage overhead. The main challenge when utilizing Bitcoin's blockchain is therefore to store the minimum amount of data in the blockchain itself while still realizing robust storage. This can be achieved by storing the actual objects on dedicated storage nodes, while only storing the metadata within Bitcoin's blockchain. Recall that in Bitcoin, transactions can optionally have a field that can be used by developers to attach such metadata. Alternatively, metadata can be encoded within transactions in the form of (invalid) public keys. That is, instead of specifying a valid public key, developers can encode metadata (e.g., in HEX format) and include it in this field. Note that Bitcoin enables multioutput transactions and multisignature transactions. This allows developers to encode considerable metadata in the fields reserved for multiple public keys. Clearly, by issuing such transactions, one has to pay the minimum Bitcoin fee to ensure that these transactions are included in the blockchain. Currently, the minimum fee is 0.0001 BTCs, which corresponds to almost 2 Euro cents given the current exchange rate.
This basic scheme is secure even when n − 1 storage nodes are arbitrarily malicious, as long as there are enough honest blockchain nodes to ensure the security of the metadata stored therein. Note that a similar scheme can be used to construct erasure-coded storage. In this case, instead of storing exact replicas at the storage nodes, the user can invoke an information dispersal algorithm to encode object O into n chunks in such a way that any m of the chunks are enough to reconstruct O. The user stores the URIs of the chunks, as well as their individual cryptographic hashes, in the blockchain. This variant scheme is secure even when n − m storage nodes are arbitrarily malicious, as long as there are enough honest blockchain nodes to ensure the security of the metadata stored therein.
8.2.1.1 Authenticated Storage
Note that the aforementioned robust storage can be easily turned into authenticated storage. Authenticated storage refers to a storage system where each entity can prove to another that it had stored a given object. Typical examples are court documents that need to be attested (e.g., that they are issued by a given entity) or modifications/updates to legal documents. Namely, blockchain users are typically equipped with nonrepudiable public/private key pairs (in the case of Bitcoin, each public key maps to a unique Bitcoin address). Since each transaction confirmed in the blockchain is authenticated, users can prove in the aforementioned storage protocol the ownership of object O. Clearly, the blockchain can also be used to prove data ownership without revealing the actual data. For instance, one can publicly reveal a file digest (e.g., a hash) for an object that has been committed in the blockchain, and if a conflict arises, the person can prove that he or she has the data that matches the hash. This is especially useful for contracts, copyrighted material, patents, and so on. For example, one can prove that he or she developed a specific software revision at any given point in time by time-stamping the hash of the revision tree. BTProof [11] and Proof of Existence [12] already offer such services by leveraging Bitcoin's blockchain.
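The commit-then-reveal idea can be sketched in a few lines. Only the digest would be embedded in a transaction (services such as Proof of Existence use a similar construction); the document itself never leaves the owner's machine until a dispute arises.

```python
import hashlib

def commitment(document: bytes) -> str:
    # Only this digest is embedded in a blockchain transaction; once the
    # transaction is confirmed, the digest is irrevocably time-stamped.
    return hashlib.sha256(document).hexdigest()

def prove_ownership(document: bytes, committed_digest: str) -> bool:
    # Later, revealing the document convinces a verifier that it existed,
    # unchanged, at the time the transaction was confirmed.
    return hashlib.sha256(document).hexdigest() == committed_digest
```

Collision resistance of the hash is what prevents anyone from later presenting a different document that matches the committed digest.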
8.2.2 Permacoin
In [13], Miller et al. proposed Permacoin, a modification to Bitcoin that aims at repurposing its mining resources toward a more useful goal: decentralized storage. Permacoin's mining process requires investment in both computational and storage resources. More specifically, Permacoin relies on a puzzle for Bitcoin based on Proofs of Retrievability (POR) [14, 15]. To mine coins, users need to prove access to a given copy of a file. By doing so, Permacoin motivates the construction of a highly decentralized storage system.

We start with a brief refresher on POR. Proofs of Retrievability are interactive protocols that cryptographically prove the retrievability of outsourced data. More precisely, POR consider a model comprising a single user (or tenant) and a service provider that stores a file pertaining to the user. POR basically consist of a challenge-response protocol in which the service provider proves to the tenant that its file is still intact and retrievable. Note that POR only provide a guarantee that a fraction p of the file can be retrieved. For that reason, POR are typically performed on a file that has been erasure-coded in such a way that the recovery of any fraction p of the stored data ensures the recovery of the file [15].

A POR scheme consists of four procedures [14]: setup, store, verify, and prove. The latter two algorithms define a protocol for proving file retrievability. We refer to this protocol as the POR protocol (in contrast to a POR scheme, which comprises all four procedures).

setup: This randomized algorithm generates the involved keys and distributes them to the parties. In case public keys are involved in the process, these are distributed among all parties.

store: This randomized algorithm takes as input the keys of the user and a file f ∈ {0, 1}*. The file is processed, and store outputs f*, which will be stored on the server. store also generates a file tag τ, which contains additional metadata information about f.

prove: The prover algorithm takes as input the public key, the file tag τ, and the processed file f* output by store.

verify: The randomized verification algorithm takes as input the secret key, the public key, and the file tag τ output by store. At the end of the protocol run, verify outputs TRUE if the verification succeeds, meaning that the file is being stored on the server, and FALSE otherwise.

In Permacoin, mining is associated with the effort of performing a POR. This is achieved as follows:

• Each participant pseudorandomly samples the data segments to store. The seed of the pseudorandom function is based on the miner's public key. This also allows other participants in the network to verify that the participant is indeed storing the correct segments.

• Unlike existing POR schemes, the challenge/response protocol needs to be made noninteractive. This is achieved by publicizing epoch-dependent unpredictable puzzle instances puz. The challenge is then simply H(puz, s), where H(.) is a cryptographic hash function and s is a random seed selected by the participant.
By doing so, each participant basically pseudorandomly challenges a number of segments that it is storing.

• The participant finally broadcasts a (POR) proof (based on Merkle trees) of possession of the challenged segments. Any participant in the network can simply check that the response is correct and that H(puz, s) conforms with the current difficulty in the network. Only correct responses are broadcast in the network.
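The Merkle-tree-based proof of possession can be sketched as follows. The segment-sampling rule mirrors the H(puz, s) construction above, while the tree shape and choice of SHA-256 are illustrative rather than taken from the Permacoin paper.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Authentication path for leaf `index`: (sibling hash, sibling-is-left)."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

def challenged_indices(puz: bytes, seed: bytes, n_segments: int, k: int):
    """Derive k segment indices from H(puz, s), as in the scheme above."""
    digest = h(puz + seed)
    return [int.from_bytes(h(digest + bytes([i])), "big") % n_segments
            for i in range(k)]
```

A miner's response consists of the challenged segments together with their authentication paths; any verifier holding only the Merkle root can check possession without storing the file.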
8.2.3 Decentralized Identity Management
The blockchain can be similarly used as a decentralized identity system. Namely, each entity can reserve and confirm its identity in the blockchain, which inherently prevents any other entity from spoofing that identity. By confirming the identity records in the blockchain, such an approach ensures that (1) the identity cannot be changed/modified and (2) the identity is uniquely assigned to a single entity. A successful instantiation of this application is OneName [16]. OneName is a protocol that enables the construction of a decentralized identity system (DIS) using the Namecoin blockchain. Users are added to the OneName directory by means of a key-value store (KVS) interface, where the key is the username or ID and the value encodes the corresponding profile data (in JSON format).

8.2.4 Time-Dependent Source of Randomness
Bitcoin’s blockchain (and variant altcoins’ blockchain) can also be used to instantiate a time-dependent randomness generator GetRandomness : T → {0, 1}`seed where T denotes a set of discrete points in time. In a nutshell, GetRandomness produces values that are unpredictable but publicly reconstructible. More formally, let cur denote the current time. We define GetRandomness as follows. On input t ∈ T , GetRandomness outputs a uniformly random string in {0, 1}`seed if t ≤ cur, otherwise GetRandomness outputs ⊥. We say that GetRandomness is secure if the output of GetRandomness(t) cannot be predicted with probability significantly better than 2−`seed as long as t < cur. Similar to [17,18], we instantiate GetRandomness by leveraging functionality from Bitcoin, since the latter offers a convenient means (e.g., by means of API) to acquire time-dependent randomness. Recent studies show that a public randomness beacon—outputting 64 bits of min-entropy every 10 minutes—can be built atop Bitcoin [19]. Given this, GetRandomness then unfolds as follows. On input time t, GetRandomness outputs the hash of the latest block that has appeared since time t in the Bitcoin blockchain. Clearly, if t > cur corresponds to a time in the future, then GetRandomness will output ⊥, since the hash of a Bitcoin block that would appear in the future cannot be predicted. On the other hand, it is straightforward to compute GetRandomness(t) for a value t ≤ cur (i.e., t is in the past) by fetching the hash of previous Bitcoin blocks. In this way, GetRandomness enables an untrusted party to sample randomness without being able to predict the outcome ahead of
time. Note that the security of GetRandomness depends on the underlying security of the blockchain. More specifically, if an entity is able to predict the outcome of GetRandomness, then he or she is able to predict a future block hash in the blockchain.
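GetRandomness can be sketched as below. The chain is modeled as a list of (timestamp, header) pairs and block hashes use Bitcoin-style double SHA-256; a real implementation would query a Bitcoin client's API instead. Note that the sketch returns the hash of the first block appended at or after time t, so that the output is stable once that block appears (the chapter's phrasing admits either this reading or the most recent block).

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    # Bitcoin-style double SHA-256 block hash.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def get_randomness(t, chain, now):
    """Return the hash of the first block appended at or after time t.

    `chain` is a list of (timestamp, raw_header) pairs in chain order.
    Returns None (i.e., the symbol ⊥) if t lies in the future, or if no
    block has been appended since t yet.
    """
    if t > now:
        return None          # future block hashes cannot be predicted
    for timestamp, header in chain:
        if timestamp >= t:
            return sha256d(header)
    return None
```

Anyone can recompute the same value from public chain data, while no one can predict it before the corresponding block is mined.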
8.2.5 Smart Contracts
Developers can leverage multisignature transactions in Bitcoin in order to construct smart contracts. Smart contracts refer to binding contracts between two or more parties that are enforced in a decentralized manner by the blockchain, without the need for a centralized enforcer. Recall that multisignature transactions (see Chapter 4) require m > 1 correct signatures to be considered valid. Although the primary use of multisignature transactions mainly targeted resistance to coin theft, these transactions also support the construction of smart contracts in Bitcoin. In what follows, we discuss various types of achievable contracts in Bitcoin.
8.2.5.1 Making a Deposit
Recall that Bitcoin is mainly used to issue nonrefundable payments among users. However, there are a number of application scenarios where users need to make deposits (e.g., when using a service which requires assurance in case of damage or misuse). Bitcoin can accommodate this case by enabling the creation of deposits to potentially untrusted entities. As described in [20], a user A can make a deposit of v BTCs to an entity B by constructing a transaction T1 that spends v BTCs into an output address C in such a way that the signatures of both A and B are required to spend T1. The user A does not immediately broadcast T1 in the Bitcoin network; instead, A sends B H(T1) and C, as well as a new address that is owned by A, using an off-line channel (e.g., a direct TCP connection). Upon reception of H(T1), B constructs another transaction T2 that spends the BTCs stored in C (by linking it to H(T1)) back to the address specified by A. T2 is formed such that the nLockTime field is set to a future preagreed date, and the sequence number for the input is set to zero. B then sends T2 to A using an off-line channel. Subsequently, A verifies that T2 is well-formed, signs T1, and broadcasts both T1 and T2 in the network. At this stage, the v BTCs cannot be spent individually by either A or B. Once the date specified in nLockTime is reached, the contract is completed and A will receive the v BTCs back by spending transaction T2 even
Figure 8.2 Making deposits in Bitcoin.
if B is not online. Note that by setting the sequence number to zero, the contract between A and B can be amended in the future if both parties agree. This process is depicted in Figure 8.2.
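The exchange between A and B can be sketched with simplified transaction objects. The field names (lock_time, sequence) mirror Bitcoin's nLockTime and sequence fields, while scripts, real signatures, and network broadcast are elided; the txid below is a stand-in for Bitcoin's transaction hash.

```python
from dataclasses import dataclass, field
import hashlib

@dataclass
class Tx:
    """Simplified stand-in for a Bitcoin transaction."""
    inputs: list
    outputs: list
    lock_time: int = 0            # Bitcoin's nLockTime
    sequence: int = 0xFFFFFFFF    # input sequence number
    signatures: set = field(default_factory=set)

    def txid(self) -> str:
        raw = repr((self.inputs, self.outputs, self.lock_time)).encode()
        return hashlib.sha256(raw).hexdigest()

def make_deposit(a, b, v, refund_height):
    # Step 1: A builds T1 paying v BTC into a 2-of-2 output C; it is NOT
    # broadcast yet -- only H(T1) is sent to B over an off-line channel.
    t1 = Tx(inputs=[f"{a}:utxo"], outputs=[(f"2of2({a},{b})", v)])
    # Step 2: B builds the refund T2 spending H(T1) back to A, time-locked
    # with sequence 0 so the contract can later be amended.
    t2 = Tx(inputs=[t1.txid()], outputs=[(a, v)],
            lock_time=refund_height, sequence=0)
    t2.signatures.add(b)
    # Step 3: A verifies T2 is well-formed, then signs and broadcasts both.
    assert t2.inputs == [t1.txid()] and t2.outputs == [(a, v)]
    t1.signatures.add(a)
    t2.signatures.add(a)
    return t1, t2
```

Until refund_height is reached, neither party can spend the 2-of-2 output alone; afterwards, T2 returns the deposit to A even if B disappears.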
8.2.5.2 Dispute Mediation
The aforementioned process of making deposits (see Section 8.2.5.1) can be naturally extended to deal with dispute mediation. For instance, A and B can agree on a neutral dispute mediator M. Here, all transactions issued by A can be constructed so that they can be spent using the signatures of any two out of the three parties: A, B, and M. We can now distinguish the following cases:

1. In the case of a successful interaction between A and B, the transactions will be spent as agreed in the contract.
2. In case some issue arises between A and B, M acts as a mediator. If M aligns with A, the transactions will not be spent. Otherwise, if M agrees with B, the transactions will be spent to B's output address.

8.2.5.3 Managing Multiuser Funds
Bitcoin additionally enables different users to collaboratively raise funds for any given project without the need for an external arbiter, for example, to handle disputes (see Section 8.2.5.2). For instance, assume that entities A1, ..., An decide to collaboratively raise funds of v BTCs in order to support a project. In this case, it is required that if the v BTCs cannot be jointly raised, then the funds committed by each entity should be reimbursed. Here, each entity Ai, ∀i ∈ [1, n], can issue a transaction committing vi < v BTCs from one of its inputs to spend v BTCs to a common output address B. Clearly, this is not a correct transaction on its own, since the input amount is less than the output amount. The intuition here is that the input script is signed in such a way as to allow an aggregator to combine the various transactions issued by the different entities into a multi-input single-output transaction, provided that at least v BTCs were raised by these entities; that is, the transaction is valid if and only if ∑_{i=1}^{n} vi = v. This can be achieved using the SIGHASH_ALL|SIGHASH_ANYONECANPAY signature hash type, which signs all of the transaction's outputs together with the signer's own input, allowing further inputs to be added without invalidating existing signatures. More detail about this process can be found in [20].
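The aggregator's role can be sketched as follows. Each pledge stands in for an input signed SIGHASH_ALL|SIGHASH_ANYONECANPAY; following the text, the assembled transaction is considered valid only when the pledges sum exactly to the goal (∑ vi = v).

```python
def assemble_fundraiser(pledges, goal, output_address):
    """Combine per-contributor inputs into one transaction iff they reach `goal`.

    Each pledge is a (address, amount) pair standing in for an input whose
    signature covers the single output but allows other inputs to be added.
    """
    total = sum(amount for _, amount in pledges)
    if total != goal:
        return None   # invalid: inputs must sum exactly to the output amount
    return {"inputs": list(pledges), "outputs": [(output_address, goal)]}
```

If the goal is never reached, no valid combined transaction exists, so each contributor's pledged coins simply remain spendable by their owner; no reimbursement step is needed.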
8.2.5.4 Using Smart Contracts for Crime
One major limitation of the aforementioned smart contracts is that they can also facilitate the mediation of illegal activities among distrustful criminal parties. Indeed, Bitcoin (and other decentralized frameworks such as Ethereum [21]) eliminates the need for trusted third-party intermediaries to conclude such contracts. This clearly renders illegal activities and smart contracts established using Bitcoin and similar blockchain technologies harder for law enforcement agencies to trace and detect. While traditional approaches require the intervention of third parties (which could be regulated and/or coerced by law enforcement agencies), Bitcoin does not necessarily require any third party and minimizes interaction between criminal parties. This problem is further exacerbated since Bitcoin inherently supports pseudonymity, which is an appealing property for criminal activities.
This problem was outlined by Juels et al. in [22]. More specifically, the authors show that criminal smart contracts (CSCs) for the leakage of secrets are efficiently realizable in existing decentralized contract systems. Additionally, the authors demonstrated that authenticated data feeds, another anticipated feature of smart contract systems, can facilitate CSCs for real-world crimes. This indirectly motivates the need for policy safeguards to prevent the abuse of smart contracts.
8.3 CONCLUDING REMARKS

References

[1] Namecoin.
90–107. ACM SIGSAC Conference on Computer and Communications Security, CCS '15, pages 886–900, New York, 2015. ACM.
Contract. Bitcoin wiki, 2015.
[21] Ethereum Homestead Release.
[22] Ari Juels, Ahmed Kosba, and Elaine Shi. The Ring of Gyges: Using Smart Contracts for Crime.
Chapter 9

Blockchain Beyond Bitcoin

As mentioned in Chapter 8, there are currently more than 500 alternate blockchains, most of which are simple variants of Bitcoin. Indeed, the blockchain instantiates a novel distributed consensus scheme which allows transactions, and any other data, to be securely stored and verified without the need for any centralized authority. Note that the community has been in search of a simple, scalable, and workable distributed consensus protocol for a considerable amount of time [1]. For instance, the PoW-based blockchain is a permissionless system that does not require any identity management and can scale to millions of miners. We contrast this with existing Byzantine Fault Tolerant (BFT) proposals, which are permissioned systems (i.e., they require knowledge of the IDs of the miners prior to the start of the consensus protocol) and have limited scalability provisions. However, Bitcoin's PoW has often been criticized for its considerable waste of energy; indeed, the cost of mining per confirmed transaction is estimated at $6.2 at the time of writing. Existing studies also show that Bitcoin's blockchain can only achieve a modest transactional throughput, bounded by seven transactions per second. To remedy the limitations of PoW, a number of alternative consensus protocols have been proposed, such as those underlying Ripple and Ethereum. In this chapter, we overview a number of interesting blockchain proposals that are currently attracting considerable attention in the media and the literature.
Figure 9.1 Sidechains are blockchains that are interoperable with Bitcoin and with each other. This allows assets to move freely across all blockchains.
9.1 SIDECHAINS
Sidechains are blockchains that are interoperable with Bitcoin and with each other, which allows assets to move freely across all of them. As mentioned earlier, a number of variants of Bitcoin were implemented as altcoins in order to provide enriched applications or better performance and security features. Those altcoins are maintained independently using variant Bitcoin codebases and introduce their own currencies, which are independent of Bitcoin. Clearly, this results in liquidity shortage and market fluctuation, since all altcoins are competing for market assets, which also discourages technical innovation in new altcoins. Sidechains (see Figure 9.1) attempt to remedy this challenge. Namely, pegged sidechains (the term "pegged" emphasizes that a sidechain supports coin transfers back and forth between sidechains) implement an infrastructure featuring interoperable blockchains where parties can easily switch between and work with different blockchains. Here, an SPV proof is used to transfer a coin from one chain (the parent chain) to the other via two waiting periods. First, a coin is locked by a transaction on the parent chain
until the confirmation period ends. In the meantime, a transaction is created on the sidechain along with an SPV proof referring to the locked coin on the parent chain. Finally, the user must wait for the contest period before the newly transferred coin can be spent on the sidechain. With the peg protocol, the underlying currency can be reused within different blockchains (as well as most of the blockchain implementation). Since sidechains are typically independent blockchains, this allows developers to test beta versions of the system as a sidechain without affecting the experience witnessed by users. At the time of writing, the Elements Project [2] is exploring extended sidechain features such as Confidential Transactions (to improve payer privacy), Segregated Witness (to prevent transaction malleability), and New Opcodes (to provide more powerful scripts for smart contracts). Those new features are regarded as elements and can be combined within a sidechain. For example, Liquid [3] is a sidechain that integrates Confidential Transactions among other elements. Liquid is released by Blockstream [4], the founder of the Elements Project. Despite the flexibility that sidechains can provide, the security of different sidechains still needs to be independently addressed. This means that sidechains compete for the mining power in the market, which leaves newly introduced sidechains rather vulnerable to attackers (e.g., a new sidechain is typically supported by a small fraction of the computing power, which can be easily surpassed by an attacker). Therefore, one ought to be cautious when assets are transferred to a sidechain that exhibits such weaker security guarantees. To remedy this, merged mining [5, 6] was proposed to allow different blockchains to share their mining power by including the blocks of other chains during the mining process.
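The two-phase transfer described above can be sketched as a small state machine. The period lengths and state names are illustrative (the pegged-sidechains design leaves these as chain-specific parameters).

```python
CONFIRMATION_PERIOD = 100   # blocks to wait on the parent chain (illustrative)
CONTEST_PERIOD = 50         # blocks to wait on the sidechain (illustrative)

class PeggedTransfer:
    """Tracks one coin moving from the parent chain to a sidechain."""

    def __init__(self, lock_height):
        self.lock_height = lock_height   # parent-chain height of the lock tx
        self.state = "locked"            # coin locked on the parent chain
        self.proof_height = None

    def advance(self, parent_height, side_height, spv_proof_valid):
        if (self.state == "locked"
                and parent_height >= self.lock_height + CONFIRMATION_PERIOD
                and spv_proof_valid):
            # SPV proof of the lock is posted on the sidechain.
            self.state = "proof_posted"
            self.proof_height = side_height
        elif (self.state == "proof_posted"
                and side_height >= self.proof_height + CONTEST_PERIOD):
            # The contest period passed without a successful challenge.
            self.state = "spendable"
        return self.state
```

The confirmation period protects against parent-chain reorganizations, while the contest period gives sidechain participants time to challenge an invalid SPV proof before the coin becomes spendable.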
Another challenge of sidechains is that interchain transactions may incur high latencies, as the waiting periods are required to ensure that the transactions remain in the blockchain.
9.2 ETHEREUM
Similar to Bitcoin, Ethereum leverages proof-of-work blockchain technology to achieve distributed consensus. By providing a fully fledged Turing-complete programming language instead of Bitcoin's simple scripting language, Ethereum allows arbitrary applications, referred to as smart contracts, to be run on its blockchain. For example, a basic Namecoin version for the Ethereum blockchain can be written
with a few lines of code. Creating subcurrencies only requires minimal programming effort as well. Another concrete use case of Ethereum is to build decentralized autonomous organizations (DAOs). Ethereum in fact provides a decentralized platform to build smart applications. The contracts run exactly as programmed, with no possibility of downtime, censorship, fraud, or third-party interference [7]. In what follows, we discuss the basic operations of Ethereum in greater detail.

9.2.1 Accounts
Similar to Bitcoin, Ethereum features accounts that reside at specific addresses on the blockchain. At the time of writing, Ethereum provides two types of accounts: (1) externally owned accounts (EOAs) and (2) contract accounts. EOAs are controlled by private keys and are akin to Bitcoin accounts. Contract accounts, on the other hand, are autonomous objects. They have associated code and persistent storage. Their code is executed whenever they receive a transaction or message (see Section 9.2.3). Both account types have a balance field indicating the amount of Ether (Ethereum's internal currency) that the account currently possesses. Note the difference from Bitcoin, where account balances are implicitly given by unspent transaction outputs (UTXOs). Moreover, accounts maintain a nonce field expressing the number of transactions that they have issued so far. The nonce, the balance, and the hashes of the account's storage and code define the account state.

9.2.2 Transactions and Messages
Ethereum transactions correspond to data packages that are signed by the private key of the issuing EOA. Transactions contain the recipient, a signature identifying the sender, the amount of Ether to be transferred along with the transaction, and an optional data field. Resources used for transaction execution are subject to fees; the corresponding unit is gas. Transactions include two further fields related to gas. The startGas value specifies the maximal amount of gas that the execution of the transaction may consume. The gasPrice value indicates how to convert between Ether and gas. Finally, transactions comprise a nonce field, whose value has to match the current nonce of the sender's account. This is necessary to prevent replay attacks.
Figure 9.2 Transaction execution in Ethereum.
Note that in Ethereum, messages are constructed similarly to transactions, but are sent from contract accounts.

9.2.3 State and Transaction Execution
Ethereum is a state machine where transaction execution triggers a state transition. The Ethereum state is a mapping between account addresses and account states. The state is not stored within the blockchain, but is maintained by the clients. An exemplary transaction execution is depicted in Figure 9.2. Note that transaction execution is completely deterministic. In this example, the transaction limits the amount of gas available for its execution to 2,000. The corresponding amount of Ether is subtracted from the sender's balance up front. The sender has to pay for each byte in the transaction; we assume that this consumes 1,000 gas. In the next step, 10 Ether are transferred from the sender's balance to the receiver's balance. Since the receiving account is a contract, its associated code is triggered. Here, the account's storage is modified. Assuming that the code execution costs 200 gas, the remaining 800 gas are then refunded to the sender (in Ether). Making all resource consumption subject to fees and limiting the amount of gas that can be spent during the execution of a transaction prevents infinite loops, which is necessary to prevent DoS attacks.
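The worked example can be replayed in a few lines. The costs (2,000 gas allowance, 1,000 gas for transaction bytes, 200 gas for contract execution) are taken from the walkthrough above; balances are kept in Ether and everything else is heavily simplified.

```python
def execute_transaction(sender, receiver, value, start_gas, gas_price,
                        intrinsic_gas, execution_gas):
    """Replay the worked gas-accounting example (simplified sketch)."""
    # The full gas allowance is charged up front, converted to Ether.
    sender["balance"] -= start_gas * gas_price
    gas_used = intrinsic_gas          # per-byte cost of the transaction data
    # Transfer the value and run the recipient contract's code.
    sender["balance"] -= value
    receiver["balance"] += value
    gas_used += execution_gas
    if gas_used > start_gas:
        # A real client would also revert the state changes but keep the fee.
        raise RuntimeError("out of gas")
    # Unused gas is refunded to the sender in Ether.
    sender["balance"] += (start_gas - gas_used) * gas_price
    return gas_used

a = {"balance": 10_000}
c = {"balance": 0}
used = execute_transaction(a, c, value=10, start_gas=2_000, gas_price=1,
                           intrinsic_gas=1_000, execution_gas=200)
# used == 1_200; 800 gas refunded, so a["balance"] == 10_000 - 1_200 - 10 == 8_790
```

Raising execution_gas above the remaining allowance triggers the out-of-gas path, which is exactly the mechanism that bounds infinite loops.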
9.2.4 Blocks
A block is a collection of data consisting of a header, the transactions incorporated within the block, and a list of uncles (stale blocks). Ethereum uses a generalized notion of uncles, where an uncle is a direct child of a kth-generation ancestor of the including block, for 2 ≤ k ≤ 7. An uncle cannot be a direct ancestor of the including block. Block headers include a hash of the state (after all transactions in the block are executed) and the difficulty level of the block. Among other fields, block headers also contain the hash of the parent block's header and the beneficiary's address. The beneficiary is the account that is rewarded for successfully mining the block.

9.2.5 Mining and Blockchain
Similar to Bitcoin, mining is used to (1) confirm transactions and (2) issue Ether postlaunch. Mining consists of brute-forcing the header's nonce until the Proof-of-Work (PoW) algorithm output is below a certain threshold. Since this threshold is based on the block's difficulty, mining gives both meaning and credence to the notion of difficulty [8]. The difficulty level is dynamically adjusted to achieve an average block generation time of approximately 15 seconds [9]. Note that the structure resulting from mining is a block tree. Consensus is needed on the path, starting from the genesis block, that should be selected as the blockchain. This is achieved by choosing the path that exhibits the highest total difficulty and thus possesses the highest amount of computation backing it. Similar to Bitcoin, an attacker aiming at rewriting the history would need to outperform the honest part of the Ethereum network. Successful mining of a winning block is rewarded in multiple ways. A static block reward of 5 Ether is added to the beneficiary's balance. The beneficiary also receives the gas expended by executing all transactions in the block. Finally, an extra reward is given for including uncles. Contrary to Bitcoin, miners of uncles are rewarded in Ethereum as well, and receive 7/8 of the static block reward [9]. Ethereum's PoW is expected to incorporate the GHOST protocol. GHOST [10] is an alternative to the longest chain rule for establishing consensus in PoW-based blockchains; it aims to alleviate the negative impact of stale blocks by incorporating the difficulty of uncle blocks when measuring the longest chain (at the time of writing, Ethereum does not incorporate the difficulty of uncle blocks when measuring the longest chain). Note that it is also envisioned that the PoW consensus utilized by Ethereum will be replaced by virtual mining in the form of Proofs of Stake, where each entity has to deposit some stake in the system to be held accountable in case of misbehavior.
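The nonce search at the heart of PoW mining can be sketched as follows. Plain SHA-256 is used here purely for illustration; Ethereum actually uses the memory-hard Ethash function, and the names `mine` and `max_nonce` are chosen for this sketch:

```python
import hashlib

def mine(header: bytes, difficulty: int, max_nonce: int = 2**32) -> int:
    """Brute-force a nonce so that H(header || nonce) falls below a
    threshold derived from the difficulty."""
    threshold = 2**256 // difficulty
    for nonce in range(max_nonce):
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < threshold:
            return nonce
    raise RuntimeError("search space exhausted")
```

Raising the difficulty shrinks the threshold and thus increases the expected number of attempts, which is how the network steers the average block generation time.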
Figure 9.3 Basic architecture of Open Blockchain.
9.3 OPEN BLOCKCHAIN
Open Blockchain is a prominent project featuring an enterprise blockchain. Open Blockchain was originated by IBM, but later evolved as an open source project within the Hyperledger community [11]. Open Blockchain is a permissioned blockchain network, where end users or organizations go through a registration process that authorizes them to submit transactions (as users) or process transactions (as validators) announced in the system. The permissioned nature of Open Blockchain allows it to rely mainly on non-proof-of-work mechanisms, such as Byzantine fault-tolerant protocols, to achieve consensus in the network on transaction validation. The system has been designed in a modular way, allowing the easy replacement of these protocols with other protocols that can achieve consensus among participating nodes in the system. The architecture of Open Blockchain is summarized in Figure 9.3. Similar to Ethereum, Open Blockchain aims at accommodating arbitrary logic within its transactions. To do so, the concept of chaincode was introduced; chaincode refers to pieces of code that are deployed and registered within the blockchain through deploy transactions and can be invoked through invoke transactions. Chaincode deployment is performed within a Docker container, within which subsequent invocations of that chaincode are also accommodated. Docker provides a secured, lightweight method to sandbox chaincode execution. Note that the chaincode concept is more general than the notion of smart contracts. Namely, chaincode can be written in any mainstream programming language and executed in containers inside the Open Blockchain context layer. Chaincode provides the capability of restricting the functionality of the execution environment and the degree of computing flexibility to satisfy potential legal contractual requirements. As depicted in Figure 9.3, the Open Blockchain system consists of the following entities:

• Membership management infrastructure. This denotes a set of entities that are responsible for identifying an individual user (using any form of identification considered in the system, such as credit cards or identity cards). This infrastructure is also responsible for opening user accounts and issuing the necessary credentials to successfully create transactions and deploy or invoke chaincodes through Open Blockchain.

• Peers. These are classified as validating peers and nonvalidating peers. Validating peers (also known as validators) order and process (check validity, execute, and add to the blockchain) user messages (transactions) submitted to the network. Nonvalidating peers (or simply peers), on the other hand, receive transactions on behalf of users and, after performing validity checks, forward the transactions to their neighboring validating peers. Peers maintain an up-to-date copy of the blockchain but, in contrast to validators, do not validate transactions (a process also known as transaction validation).

• End users of the system. These are the users that have registered with the membership service after having demonstrated ownership of what is considered an identity in the system, and have obtained credentials to install the client software and submit transactions to the system.

• Client software. This refers to the software that needs to be installed on the client side so that the latter is able to complete his or her registration with the membership service and submit transactions to the system. For simplicity, in the following, we refer to the client software as the client.
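As a toy illustration of the deploy/invoke model (not the actual Open Blockchain API: real chaincode is shipped as compiled code inside a deploy transaction and runs in its own Docker container, whereas here it is a plain Python callable):

```python
class ChaincodeRegistry:
    """Toy sketch of deploy/invoke dispatch for chaincodes."""

    def __init__(self):
        self._chaincodes = {}   # chaincode id -> (callable, per-chaincode state)

    def deploy(self, cc_id, code, metadata=None):
        # A deploy transaction registers the chaincode and its metadata.
        self._chaincodes[cc_id] = (code, {"metadata": metadata or {}})

    def invoke(self, cc_id, function, *args):
        # An invoke transaction names a deployed chaincode, a function,
        # and its arguments; state updates persist between invocations.
        code, state = self._chaincodes[cc_id]
        return code(state, function, *args)

def counter_chaincode(state, function, *args):
    """A minimal example chaincode keeping a single counter in its state."""
    if function == "increment":
        state["count"] = state.get("count", 0) + 1
    return state.get("count", 0)
```

Deploying `counter_chaincode` once and invoking `increment` repeatedly illustrates how invoke transactions update the chaincode's persistent state.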
9.3.1 Membership Services
One of the main security challenges in Open Blockchain is reconciling transactional privacy with identity management in order to enable competitive institutions to transact effectively on a common blockchain (for both intra- and interinstitutional transactions). This is achieved by forming a privacy-preserving permissioned network. The permissioned nature of this system is achieved as follows:
• Define a policy to determine the conditions under which a new entity can participate in transaction processing, evaluation, and validation (i.e., join the set of validators or endorsers of the system).

• Issue system certificates to all entities that are eligible to join the set of validators or endorsers, according to a predefined policy.

• Instruct validators to accept transaction evaluation-related messages only from certified validators of the system.

• Define a policy to determine the conditions under which a new entity can join the set of users of the system (i.e., submit transactions to the network of validators); that is, announce messages that would allow them to deploy Open Blockchain contracts, referred to as chaincodes, or invoke already deployed chaincodes.

• Issue system certificates to all entities that request membership to the Open Blockchain system and fulfill the conditions listed in the corresponding policy.

• Instruct validators to only accept transactions that are authenticated as originating from properly formed certificates.

Membership services registration is an off-line process through which users prove that they satisfy the membership conditions defined in the associated blockchain's membership policy. To facilitate the need for privacy-preserving transactions, upon proper registration users can issue two types of credentials:

- Enrollment certificates (ECerts), which carry the user's identity or long-term user identifier in the system, also referred to as the enrollment identity.

- Transaction certificates (TCerts), which faithfully but pseudonymously represent enrolled users. That is, transaction certificates do not carry the enrollment identity of their owner in the clear; rather, the enrollment identity of the owner of a transaction certificate is incorporated in the certificate such that it can only be revealed or proven with the consent of the transaction certificate owner or an auditor.
During enrollment, users generate two key pairs. The first is an ECDSA key pair that facilitates authentication of a long-term user identity. The second is an elliptic curve Diffie-Hellman (ECDH) key pair that allows users to establish secret communication channels within Open Blockchain transactions. Both key pairs are generated at the client side. The secret signing and decryption keys are maintained on user premises and never shared with the membership service authorities. On the other hand, the public signature verification and encryption keys generated during a user enrollment are encapsulated in the user's enrollment certificate. Transaction certificates are issued upon an enrolled user's request and, as discussed previously, contain the identity of their owner in encrypted form. Transaction certificates contain public signature keys, to enable transaction certificate owner authentication in transactions. These keys are generated using the enrollment signature key pair as a basis, and their secret material is also solely accessible by their owner. Users can use their certificates to authenticate themselves as valid users of the system. In particular, Open Blockchain membership protocols guarantee the following security properties:

• Unforgeability of proof of certificate ownership. A computationally bounded attacker is not able to prove ownership of a credential he or she does not own without the collaboration of the credential's owner. This property also requires that information endorsed by a certificate cannot be altered in such a way that the same certificate appears to have properly endorsed something else without the collaboration of the certificate's owner.

• Nonrepudiation. Users equipped with a certificate to endorse a piece of information (e.g., a transaction) cannot deny having generated the endorsement (i.e., repudiate their endorsement).

• Nonframeability. A computationally bounded attacker is not able to create an endorsement of a message that appears to have been generated by a certificate that it does not own.

• Anonymity. A transaction certificate, or a transaction certificate endorsement on any message, does not distinguish the identity of the signer with better probability than choosing an identity at random among the members of the system.

• Unlinkability. A set of transaction certificates or certificate endorsements cannot be linked together as having been generated by the same identity.

In the current version of Open Blockchain, user and validator registration and the associated issuing of credentials are performed by membership services that leverage a few trusted membership authorities (all of which are potentially controlled by the same central authority). In particular, membership services in Open Blockchain are facilitated with the help of four (trusted) entities:

• Registration authority. The registration authority evaluates candidate system participants' credentials and adds these entities to the database of registered users or validators. The database is automatically shared with the other components of the membership services.

• Enrollment certificate authority. This authority issues enrollment certificates upon registered users' or validators' requests.

• Transaction certificate authority. This authority issues transaction certificates upon enrolled users' requests.

• TLS certificate authority. This authority issues TLS certificates to registered users and validators. It also issues TLS certificates to the other components of the membership service.

At the time of writing, there were discussions of moving the functionality of membership services in Open Blockchain to a decentralized network.
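One way to picture the pseudonymity and unlinkability of TCerts is a keyed derivation from the enrollment secret: anyone holding the secret (the owner, or the transaction certificate authority and, with consent, an auditor) can re-derive and link the per-transaction keys, while outsiders cannot. This HMAC-based sketch is purely illustrative; the actual TCert construction differs:

```python
import hmac
import hashlib

def derive_tcert_key(enrollment_secret: bytes, tcert_index: int) -> bytes:
    """Derive a per-transaction key from a long-term enrollment secret.
    Without the secret, two derived keys are indistinguishable from
    random and cannot be linked to the same owner; with it, they can
    be re-derived and matched. Illustrative only."""
    return hmac.new(enrollment_secret,
                    tcert_index.to_bytes(8, "big"),
                    hashlib.sha256).digest()
```

Using a fresh index per transaction yields a different key each time, which is the unlinkability property described above from the perspective of an outside observer.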
9.3.1.1 Consensus Mechanism
At the time that Open Blockchain was conceived, existing open-source systems (e.g., Bitcoin, Ethereum) offered transaction validation rates in the order of tens of transactions per second. For instance, Bitcoin allowed for a maximum transaction validation rate of approximately seven per second. Depending on the use cases, such rates can be considered modest. Open Blockchain relies on algorithms that facilitate consensus over validated transactions and offer much higher transaction validation rates. Namely, Open Blockchain uses the practical Byzantine fault-tolerant (PBFT) algorithm to allow validator members of the chain to agree on the total order of transactions announced throughout the chain and on the global state (i.e., the entire chaincode state). At a high level, PBFT guarantees consensus safety as long as fewer than one third of the validators in the chain are Byzantine (i.e., do not abide by the consensus protocol rules). An extension of this protocol, Sieve [12], has been put in place to allow for the exclusion of nondeterministic transactions.
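The standard PBFT bounds mentioned above can be computed directly: n validators tolerate f Byzantine ones only if n ≥ 3f + 1, and progress requires a quorum of 2f + 1 matching replies:

```python
def pbft_bounds(n: int) -> tuple:
    """For n validators, return (f, quorum): the maximal number of
    Byzantine nodes PBFT tolerates, and the quorum size needed."""
    f = (n - 1) // 3        # largest f with n >= 3f + 1
    quorum = 2 * f + 1      # matching replies required for a decision
    return f, quorum
```

For example, a network of four validators tolerates a single Byzantine node, which is why four is the smallest meaningful PBFT deployment size.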
9.3.2 Transaction Life Cycle
In Open Blockchain, transactions are created on the client side. The client can be either a plain client or a specialized application (i.e., a piece of software that handles (server) or invokes (client) specific chaincodes through the blockchain). Developers of new chaincodes create a new deploy transaction by passing to the (fabric) infrastructure:

1. The confidentiality/security version or type that the transactions associated with the new chaincode should conform to, the set of users who are to be given access to parts of the chaincode, and a proper representation of their (read) access rights. Transactions that are marked as confidential will contain a number of encrypted fields that can only be decrypted by the appropriate entities, as defined in the access policy.

2. The code associated with the new chaincode.

3. Metadata associated with and provided to the chaincode at execution time. This may contain configuration parameters or, in some cases, key material.

4. Application metadata, which corresponds to metadata attached to the transaction format that only the application can interpret and handle.

Other types of transactions, such as invoke and query transactions, which require confidentiality, are also created using a similar approach. More specifically, in both cases the issuer provides the identifier of the chaincode to be executed, the name of the function to be invoked, and the invocation arguments. Optionally, the invoker can pass code invocation metadata to the transaction creation function, which will be provided to the chaincode at the time of its execution. Transaction metadata is another field that the application or the invoker can additionally leverage. Finally, transactions at the client side are signed with a certificate of their creator and released to the network of validators. Validators receive the confidential transactions and pass them through the following phases:

• Prevalidation phase. In this phase, validators validate the transaction certificate against the accepted root certificate authority, verify the transaction certificate signature included in the transaction (statically), and check whether the transaction is a replay.
• Consensus phase. In this phase, the validators add the transaction to the total order of transactions (ultimately included in the ledger).

• Pre-execution phase. Here, validators verify the validity of the transaction/enrollment certificate against the current validity period, decrypt the transaction (if the transaction is encrypted), and check that the transaction's plaintext is correctly formed (e.g., invocation access control is respected, correctly formed TCerts are included). A preliminary replay-attack check is also performed here within the transactions of the currently processed block.

• Execution phase. In this phase, the (decrypted) chaincode is passed to a container, along with the associated code metadata, and (encrypted) updates of the chaincode's state are committed to the ledger with the transaction itself.
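The four phases can be sketched as a simple pipeline. All checks are placeholders for the real ones, and the field names (`cert_root`, `well_formed`, `id`) are invented for this illustration:

```python
def process_transaction(tx, trusted_roots, seen, ledger):
    """Sketch of the phases a validator applies to an incoming transaction."""
    # Prevalidation: certificate chain, static signature, replay check.
    if tx["cert_root"] not in trusted_roots or tx["id"] in seen:
        return "rejected"
    seen.add(tx["id"])
    # Consensus: the transaction is placed in the agreed total order.
    ledger.append(tx["id"])
    # Pre-execution: validity period, decryption, well-formedness checks.
    if not tx.get("well_formed", True):
        return "invalid"
    # Execution: run the chaincode and commit (encrypted) state updates.
    return "committed"
```

Note how a replayed transaction is already caught in the prevalidation phase, before it ever reaches consensus.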
9.3.2.1 Confidentiality of Transactions
Transaction confidentiality requires that the plaintext of a chaincode (i.e., its code or description) is not accessible or inferable (assuming a computationally bounded attacker) by any unauthorized entity (i.e., a user or peer not authorized by the developer). It is thus important that the chaincodes of both deploy and invoke transactions remain concealed whenever confidentiality is required. In the same spirit, nonauthorized parties should not be able to associate invocations (invoke transactions) of a chaincode with the chaincode itself (deploy transaction), or to associate these invocations with each other. Confidentiality mechanisms in Open Blockchain allow the deployer of a chaincode to grant (or restrict) access of an entity to any subset of the following parts of a chaincode:

• Chaincode content (i.e., the complete code of the chaincode).

• Chaincode function headers (i.e., the prototypes of the functions included in the chaincode).

• Chaincode invocations and state (i.e., successive updates to the state of a specific chaincode when one or more of its functions are invoked).

• All of the above.

Note that this design offers the application the capability to leverage the membership service infrastructure and its public key infrastructure to build its own access control policies and enforcement mechanisms.
At the time of writing, the confidentiality features of Open Blockchain are restricted to ensuring confidentiality against nonauthorized users of the system. As such, validating entities are currently trusted to access the plaintext of all chaincode resources and not to share it with nonauthorized entities. To support read-access control over the plaintext of a chaincode, Open Blockchain transaction confidentiality protocols leverage the public encryption keys bound to user identities at enrollment time, as well as a chain-specific encryption public key. That is, for confidentiality purposes, a chain is bound to a single long-term encryption key pair (pkchain, skchain). This key pair is generated and maintained within the premises of the membership service infrastructure. At deploy time of a confidential chaincode, the payload of the transaction (i.e., the chaincode's code) as well as the associated metadata are encrypted using a freshly generated chaincode-specific key. The key is passed to the authorized users and validators through messages encrypted with the public encryption keys of those authorized entities. In addition, the chaincode creator specifies at deployment time the key to be used for the encryption of the chaincode's state each time the chaincode is invoked. Again, access to this key is given to authorized entities using their public keys. Subsequent invocations of a confidential chaincode are forced to encrypt their payload using the key material defined at deployment time.
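The key-wrapping scheme described above can be sketched as follows. The toy XOR stream cipher stands in for the real symmetric encryption and must not be used in practice; `entity_keys` here maps entity identifiers to shared secrets, whereas the real system wraps the chaincode key under each authorized entity's enrollment public encryption key:

```python
import os
import hashlib

def xor_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy stream cipher (SHA-256 keystream); encryption and decryption
    are the same operation. Illustrative only."""
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(plaintext):
        keystream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(plaintext, keystream))

def deploy_confidential(chaincode: bytes, entity_keys: dict) -> dict:
    """Encrypt the chaincode under a fresh chaincode-specific key and
    wrap that key once per authorized entity."""
    cc_key = os.urandom(32)
    return {
        "payload": xor_encrypt(cc_key, chaincode),
        "wrapped_keys": {eid: xor_encrypt(k, cc_key)
                         for eid, k in entity_keys.items()},
    }
```

An authorized entity first unwraps the chaincode-specific key with its own secret and then decrypts the payload, which is exactly the two-level structure the protocol relies on.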
9.3.2.2 Auditing Capabilities
Open Blockchain offers auditability at two levels: (1) at the system level, where auditors are able to monitor all transactions a certain validator is involved in, and (2) at the chaincode level, where an auditor can read and access all transactions associated with a specific chaincode. At the same time, an auditor who is authorized to query, and get responses back from, the membership service on a certain user's transactions is able to obtain proof of that user's involvement in the chain. Proper key hierarchies allow different keys to be used in each transaction while minimizing the number of keys managed at the client side.
9.3.3 Possible Extensions
Open Blockchain was recently renamed Hyperledger/fabric after it was shortlisted as one of the candidates to become the Linux Foundation's Hyperledger. There are currently a number of discussions within the Hyperledger community on the best ways to evolve Open Blockchain according to the consortium's needs. Current proposals for extending Open Blockchain include the decentralization of membership services, the separation of consensus (agreement on the total order of transactions) from the execution of the included chaincodes, as well as extending the confidentiality guarantees to the endorsing entities.
9.4 RIPPLE
The wide success of Bitcoin has led to the emergence of a large number of alternative cryptocurrencies. These include Litecoin [13], Namecoin [14], and Ripple [15, 16], among others. Most of these currencies simply clone the Bitcoin blockchain and try to address some of the shortcomings of Bitcoin. As described in Chapter 8, Namecoin offers the ability to store data within a PoW blockchain in order to realize a decentralized open-source information registration based on Bitcoin, while Litecoin primarily differs from Bitcoin by having a smaller block generation time and a larger number of coinbases. While most of these digital currencies are based on Bitcoin, Ripple has evolved almost completely independently of Bitcoin (and of its various forks). Currently, Ripple holds the second highest market cap after Bitcoin [17]. This corresponds to almost 20% of the market cap held by Bitcoin. Ripple Labs has additionally finalized a $30 million funding round to support the growth and development of Ripple [18]. Ripple does not only offer an alternative currency, XRP, but also promises to facilitate the exchange between currencies within its network. Although Ripple is built upon an open-source decentralized consensus protocol, the current deployment of Ripple is solely managed by Ripple Labs. At the time of writing, Ripple claims to have a total network value of approximately $960 million, with an average of almost 170 accounts created per day since the launch of the system [19]. Moreover, there are currently a number of businesses that are built around the Ripple system [20, 21]. For instance, the International Ripple Business Association currently deploys a handful of Ripple gateways [22], market makers [23], exchangers [24], and merchants [25] located around the globe. In the remainder of this section, we overview the Ripple protocol and discuss the basic differences between the current deployments of Ripple and Bitcoin.
9.4.1 Overview of Ripple
Ripple [15] is a decentralized payment system based on credit networks [26, 27]. The Ripple code is open source and available to the public; this means that anyone can deploy a Ripple instance. Nodes can take up to three different roles in Ripple: users that make/receive payments, market makers that act as trade enablers in the system, and validating servers that execute Ripple's consensus protocol in order to check and validate all transactions taking place in the system. Ripple users are referenced by means of pseudonyms. Users are equipped with a public/private key pair; when a user wishes to send a payment to another user, he or she cryptographically signs the transfer of money denominated in Ripple's own currency, XRP, or in any other currency. For payments made in non-XRP currencies, Ripple has no way to enforce payments and only records the amounts owed by one entity to another. More specifically, in this case, Ripple implements a distributed credit network system. A non-XRP payment from A to B is only possible if B is willing to accept an I owe you (IOU) transaction from A (i.e., B trusts A and gives enough credit to A). Hence, A can only make a successful IOU payment to B if the payment value falls within the credit balance allocated by B to A. This may be the case, for example, if the participants know each other or if the involved amounts are rather marginal; typically, however, such transactions require the involvement of market makers who act as intermediaries. In this case, enough credit should be available throughout the payment path for a successful payment. For example, a trust line can be established between market maker U1 and A (see Figure 9.4) by A depositing an amount at U1. In this example, A wants to issue a payment of 100 USD to B. Here, the payment is routed from A → U1 → U2 → U4 → B. This is possible because the available credit lines are larger than the actual payment for every atomic transaction. Note that the payment is not routed through U3, as there is not enough credit available between U1 → U3. However, it is possible to break down the payment amount at U1, route a payment below 90 USD through U1 → U3 → B, and transfer the rest through U1 → U2 → U4 → B (an extra fee at U3 is required). In typical cases, Ripple relies on a pathfinding algorithm that finds the most suitable payment path from the source to the destination. By implementing credit networks, Ripple can act as an exchange/trade medium between currencies; in the case of currency pairs that are traded rarely, XRP can act as a bridge between such currencies.
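The routing constraint of this example can be checked mechanically. The credit values below loosely follow Figure 9.4, and market-maker fees are ignored for simplicity:

```python
def can_route(path, credit, amount):
    """A payment can be routed along 'path' only if every hop's
    available credit line is at least the payment amount."""
    return all(credit[(a, b)] >= amount for a, b in zip(path, path[1:]))

# Trust lines from Figure 9.4 (in USD).
credit = {("A", "U1"): 250, ("U1", "U2"): 120, ("U2", "U4"): 130,
          ("U4", "B"): 180, ("U1", "U3"): 90, ("U3", "B"): 210}
```

The 100 USD payment succeeds along A → U1 → U2 → U4 → B but not along A → U1 → U3 → B, where the U1 → U3 trust line of 90 USD is the bottleneck; splitting the payment at U1, as described above, works around this.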
9.4.1.1 Ripple's Ledger
Ripple maintains a distributed ledger that keeps track of all the exchanged transactions in the system. These ledgers are similar in spirit to Bitcoin blocks, but are created every few seconds and contain a list of transactions to which the majority of validating servers has agreed. This is achieved by means of Ripple's consensus protocol [15], which is executed among validating servers. This protocol is based on a crash-tolerant consensus protocol and does not deal with Byzantine nodes. It implicitly assumes the presence of rational (but not Byzantine) validating servers. The main intuition is that Ripple captures a closed system of validating servers that are not anonymous. Although Ripple originally had the intention of realizing an open system where any entity can set up and connect its servers, Ripple's use cases involve private deployments within a fixed set of (known) entities (such as banks and financial institutions). In this setting, rational servers would not risk misbehaving, since all messages are authenticated and cannot be later refuted; if detected, these servers can simply be banned or sued. More specifically, Ripple claims that, in the case of forks, the entire logs of communication between servers will be exposed and analyzed to identify and isolate malicious servers. A Ripple ledger consists of the following information: (1) a set of transactions, (2) account-related information such as account settings, total balance, and trust relations, (3) a time stamp, (4) a ledger number, and (5) a status bit indicating whether the ledger is validated or not. The most recent validated ledger is referred to as the last closed ledger. On the other hand, if the ledger is not validated yet, the ledger is deemed open.

9.4.1.2 Consensus and Validating Servers
Figure 9.4 Example of IOU payments in Ripple. Here, A wants to pay 100 USD to B.

We now briefly overview the consensus protocol of Ripple. Each validating server verifies the proposed changes to the last ledger; changes that are agreed upon by at least 50% of the servers are packaged into a new proposal that is sent to the other servers in the network. This process is reiterated with the vote requirement increasing to 60%, 70%, and 80%, after which the server validates the changes and alerts the network of the closure of the last ledger. At this point, any transaction that has been issued in the network but did not appear in the ledger is discarded and can be considered invalid by Ripple users. Such transactions typically need to be rebroadcast in the network in order to be included in subsequent ledgers. Each validating server maintains a list of trusted servers known as the Unique Node List (UNL); servers only trust the votes issued by other servers contained in their UNL. More detail on Ripple's consensus protocol can be found in [28]. By doing so, Ripple enables different institutions (e.g., banks that run their own servers) to reach consensus on the fate of financial transactions. For instance, Ripple Labs recently sealed a partnership agreement with a number of banks that agreed to adopt Ripple's open-source distributed transaction infrastructure [29].
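The escalating vote thresholds can be sketched as follows (simplified: real consensus rounds also rebroadcast proposals and recollect votes between thresholds, and each server only counts votes from its UNL):

```python
def ripple_consensus(votes, n_servers, thresholds=(0.5, 0.6, 0.7, 0.8)):
    """votes maps a candidate transaction to the number of servers
    supporting it. In each round, only candidates meeting the round's
    threshold survive; survivors of the final 80% round are included
    in the closed ledger."""
    surviving = set(votes)
    for t in thresholds:
        surviving = {tx for tx in surviving if votes[tx] >= t * n_servers}
    return surviving
```

A transaction supported by 9 of 10 servers clears every round, while one supported by 6 of 10 survives the 50% and 60% rounds but is dropped at 70% and must be rebroadcast for a later ledger.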
9.5 COMPARISON BETWEEN BITCOIN, RIPPLE, ETHEREUM, AND OPEN BLOCKCHAIN
In what follows, we briefly discuss the security and privacy provisions of Ripple, Ethereum, and Open Blockchain in relation to the well-investigated Bitcoin system.
9.5.1 Security
Similar to Bitcoin, Ripple, Ethereum, and Open Blockchain rely on ECDSA signatures to ensure the authenticity and nonrepudiation of transactions in the system. Furthermore, since Ripple and Ethereum are open systems (like Bitcoin), all transactions and their orders of execution are publicly available. This enables the detection of any double-spending attempt (and of malformed transactions). Open Blockchain, on the other hand, is a closed, permissioned enterprise blockchain, where transactions can only be seen by the registered participants. Consensus in Ripple and Open Blockchain is achieved by requiring that the validating servers check the log of all transactions in order to select and vote for the correct transactions in the system. In this way, these systems adopt a voting scheme across all validating servers (one vote per validating server). As mentioned earlier, Open Blockchain relies on the PBFT consensus protocol, which ensures safety as long as fewer than one third of the validators are Byzantine. The consensus layer in Ripple, on the other hand, requires that transactions upon which 80% of the validators agree are considered valid [30]. Ripple Labs claims that it is easy to identify colluding validators and recommends that users choose a set of heterogeneous validators that are unlikely to collude. Note that Ripple's consensus protocol has recently received some criticism [31, 32]. In [28], Armknecht et al. show that the current choice of parameters does not prevent the occurrence of forks in the system. In contrast, transaction security in Bitcoin and Ethereum is guaranteed by means of proof-of-work, which replaces the vote per validating server notion of Ripple and Open Blockchain with a vote per unit of computing power of the miners solving the PoW.
Bitcoin and Blockchain Security

Unlike in Ripple and Open Blockchain, once transactions are confirmed in Bitcoin's global ledger (i.e., once transactions receive six confirmation blocks), it is computationally infeasible to modify these transactions [33] as long as the majority of the computing power in the network is honest. In contrast, in Ripple and Open Blockchain, if at any instant in time the majority of the validating servers becomes malicious, then they can rewrite the entire history of transactions in the system. For instance, at the time of writing, there are only a handful of Ripple validating servers, and these are mostly maintained by Ripple Labs; if these servers are compromised, then the security of Ripple is at risk.
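The six-confirmation rule can be quantified with the attacker-success calculation from the Bitcoin paper [33]: the probability that an attacker controlling a fraction q of the hash rate ever catches up from z blocks behind. The sketch below is our transcription of that published calculation:

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Probability that an attacker with hash-rate share q ever
    overtakes an honest chain that is z blocks ahead (Nakamoto [33])."""
    p = 1.0 - q
    lam = z * (q / p)
    catch_up = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        catch_up -= poisson * (1.0 - (q / p) ** (z - k))
    return catch_up

print(attacker_success(0.10, 6))   # roughly 0.0002 for a 10% attacker
```

With six confirmations, a 10% attacker succeeds with probability of about 0.02%, which is why six blocks are treated as practically final; an attacker with half the hash rate succeeds with certainty.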
9.5.2 Consensus Speed
In Bitcoin, payments are confirmed by means of PoW in Bitcoin blocks every 10 minutes on average. A study in [34] shows that fast, zero-confirmation payments can be double-spent by resource-constrained adversaries (see Chapter 4); a best-effort countermeasure has also been included in the Bitcoin client [34]. Although Ethereum relies on PoW to achieve consensus, blocks are generated in Ethereum every 12 seconds, on average. This ensures considerably faster convergence on consensus when compared to Bitcoin. Recent studies have, however, shown that by relying on a faster convergence interval, Ethereum offers weaker security guarantees when compared to Bitcoin [35]. On the other hand, Ripple and Open Blockchain inherently support fast consensus; almost all ledgers are closed within a few seconds. This also suggests that payments in Ripple can be verified within a few seconds of being executed.
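As a rough comparison (our own back-of-the-envelope model, not taken from the text): if block arrival is modeled as a Poisson process, waiting for k confirmations takes k times the mean block interval in expectation, with a standard deviation of sqrt(k) times that interval:

```python
import math

def confirmation_wait(mean_interval_s: float, k: int):
    """Expected wait and standard deviation for k confirmations when
    block arrivals are Poisson (inter-block times i.i.d. exponential)."""
    return k * mean_interval_s, math.sqrt(k) * mean_interval_s

btc_mean, btc_sd = confirmation_wait(600, 6)   # Bitcoin: 10-minute blocks
eth_mean, eth_sd = confirmation_wait(12, 6)    # Ethereum: 12-second blocks
print(btc_mean / 60, eth_mean)                 # 60.0 minutes vs. 72 seconds
```

Six Bitcoin confirmations take an hour in expectation, versus a little over a minute for six Ethereum blocks, which illustrates the convergence gap discussed above.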
9.5.3 Privacy and Anonymity
Ripple, Ethereum, and Bitcoin are instances of open-payment systems. In an open-payment system, all transactions that occur in the system are publicly announced. Here, user anonymity is ensured through the reliance on pseudonyms and/or anonymizing networks, such as Tor [36]. Users are also expected to have several accounts (corresponding to different pseudonyms) in order to prevent the leakage of their total account balance. Note that in Bitcoin, transactions can take different inputs, which originate from different accounts. This is not the case in Ripple, in which payments typically have a single account as input. Although user identities are protected in Ripple, Ethereum, and Bitcoin, the transactional behavior of users (i.e., the time and amount of transactions) is leaked in the process, since transactions are publicly announced in the system. In this respect, several studies (see Chapter 5) have shown the limits of privacy in open-payment systems [37–39]. There are also several proposals for enhancing user privacy in these systems; recently, a secure privacy-preserving payment protocol for credit networks that provides transaction obliviousness has been proposed [27].

Blockchain Beyond Bitcoin

Open Blockchain, on the other hand, is a permissioned system, where nodes need to register in order to participate. Open Blockchain also relies on an identity manager to authenticate and authorize the participation of nodes and validators in the process. However, Open Blockchain can support encrypted transactions, effectively achieving transaction unlinkability and protecting the privacy of participants from other participants. At the time of writing, Open Blockchain does not offer transaction unlinkability or confidentiality against curious validators, who are typically equipped with the appropriate secret material to decrypt all transactions. Moreover, anonymity is not supported in Open Blockchain by design, in order to cater for a permissioned enterprise deployment.
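The multi-input property of Bitcoin transactions mentioned earlier in this section is exactly what the cited deanonymization studies [37–39] exploit: addresses that appear together as inputs of the same transaction are heuristically assigned to the same user. A minimal sketch of that clustering heuristic follows; the transaction data is illustrative, not a real blockchain parser:

```python
class UnionFind:
    """Tiny union-find over address strings."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def cluster_addresses(tx_inputs):
    """tx_inputs: one list of input addresses per transaction.
    Returns the address clusters implied by the multi-input heuristic."""
    uf = UnionFind()
    for inputs in tx_inputs:
        uf.find(inputs[0])              # register single-input addresses too
        for addr in inputs[1:]:
            uf.union(inputs[0], addr)
    clusters = {}
    for addr in uf.parent:
        clusters.setdefault(uf.find(addr), set()).add(addr)
    return list(clusters.values())

txs = [["A", "B"], ["B", "C"], ["D"]]   # hypothetical input-address lists
print(cluster_addresses(txs))           # clusters {A, B, C} and {D}
```

Because addresses B appears in two transactions, A, B, and C collapse into a single cluster even though no transaction links A and C directly; this transitive merging is what makes the heuristic so effective.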
9.5.4 Clients, Protocol Update, and Maintenance
Ripple, Ethereum, Open Blockchain, and Bitcoin are currently open source, which allows any entity to build and release its own software client to interface with any of these systems. The official clients for Ripple, Ethereum, and Bitcoin are, however, maintained and regularly updated by Ripple Labs, the Ethereum Foundation, and the Bitcoin Foundation, respectively. Open Blockchain has been mainly an effort initiated by IBM, but now evolves with the help of the Hyperledger community. Bitcoin and Ethereum clients can also run on resource-constrained devices such as mobile phones, owing to their support for simple payment verification. As far as we are aware, there exists no secure lightweight version of Ripple or Open Blockchain. Note that all changes to the official Bitcoin client are publicly discussed in online forums, well justified, and voted on among Bitcoin developers [40]. This process is, however, less transparent in Ripple and Ethereum.
9.5.5 Decentralized Deployment
Ripple, Ethereum, Open Blockchain, and Bitcoin leverage completely decentralized protocols. However, we argue that, similar to Bitcoin, the current deployments of Ethereum and Ripple are centralized: only a handful of entities can control the security of all Ethereum transactions. More specifically, a quick look at the distribution of computing power in Ethereum shows that currently the top three (centrally managed) mining pools control more than 55% of the computing power in the network. Open Blockchain, on the other hand, aims for an enterprise deployment and cedes control to the operator(s) of the validators. Finally, in the case of Ripple, most validating servers are run by Ripple Labs at the time of writing. Although there are a few other servers that are run by external entities, the default list of validating servers for all clients points to the ones maintained by Ripple Labs. This also suggests that Ripple Labs can control the security of all transactions that occur in the Ripple system. Moreover, Ripple Labs and its founders retain a considerable fraction of XRPs; this represents the largest holdback of any cryptocurrency [17] and suggests that Ripple Labs can currently effectively control Ripple's economy. We contrast this to Bitcoin, where the current system deployment is not entirely decentralized, yet the entities that control the security of transactions, the protocol maintenance and update, and the creation of new coins are distinct [40]. In Ripple, the same entity, Ripple Labs, controls the fate of the entire system. In [28], it was shown that, although it was introduced almost two years ago, Ripple is still far from being used as a trade platform. Ripple advertises a large number of active accounts [19]. However, there is no strong evidence that users are active in Ripple; most accounts contain a small number of XRPs, which users could have received from one of the many giveaways organized by Ripple Labs [41]. Moreover, although the number of transactions in Ripple seems to be considerably increasing over time, the number of actual payments in the system is only marginally increasing and is dominated by direct XRP payments. Finally, although there are a number of currency exchanges performed via Ripple, some of which deal with huge amounts, it is hard to tell whether those transactions have actually been concluded, since the Ripple system has no way to enforce IOU transactions.
References

[1] Diego Ongaro and John Ousterhout. In search of an understandable consensus algorithm. In 2014 USENIX Annual Technical Conference (USENIX ATC 14), pages 305–319, Philadelphia, June 2014. USENIX Association.
[2] The Elements project. Available from.
[3] Liquid.
[4] Blockstream. Available from.
[5] Adam Back, G. Maxwell, M. Corallo, Mark Friedenbach, and L. Dashjr. Enabling blockchain innovations with pegged sidechains, 2014. Available from. org/bitstream/handle/21/406/2014_Back_Enabling_blockchain_innovations_with_pegged_sidechains.pdf?sequence=1.
[6] Merged mining specification.
[7] Ethereum Homestead release. Available from.
[8] Gavin Wood. Ethereum: A secure decentralised generalised transaction ledger. Ethereum Project Yellow Paper, 2014.
[9] Ethereum Homestead 0.1 documentation. mining.html. Accessed: 2016-05-11.
[10] Yonatan Sompolinsky and Aviv Zohar. Secure high-rate transaction processing in Bitcoin. In Financial Cryptography and Data Security, pages 507–527. Springer, 2015.
[11] Linux Foundation. Available from.
[12] Christian Cachin, Simon Schubert, and Marko Vukolic. Non-determinism in Byzantine fault-tolerant replication. CoRR, abs/1603.07351, 2016.
[13] Litecoin: Open source P2P internet currency.
[14] Namecoin: A trust anchor for the internet.
[15] David Schwartz, Noah Youngs, and Arthur Britto. The Ripple protocol consensus algorithm, 2014.
[16] Ripple: Opening access to finance.
[17] Ripple.
[18] Ripple Labs circling $30M in funding.
[19] Ripple Labs Inc. Ripple charts.
[20] Coinist Inc. Ripple gateways.
[21] International Ripple Business Association. Listed businesses. listed-businesses.html.
[22] International Ripple Business Association. Ripple gateways. gateways.html.
[23] International Ripple Business Association. Ripple market makers. market-makers.html.
[24] International Ripple Business Association. Ripple exchangers. exchangers.html.
[25] International Ripple Business Association. Ripple merchants. merchants.html.
[26] Arpita Ghosh, Mohammad Mahdian, Daniel M. Reeves, David M. Pennock, and Ryan Fugger. Mechanism design on trust networks. In Proceedings of the 3rd International Conference on Internet and Network Economics, WINE'07, pages 257–268, Berlin, Heidelberg, 2007. Springer-Verlag.
[27] Pedro Moreno-Sanchez, Aniket Kate, Matteo Maffei, and Kim Pecina. Privacy preserving payments in credit networks: Enabling trust with privacy in online marketplaces. In Network and Distributed System Security (NDSS) Symposium, 2015.
[28] Frederik Armknecht, Ghassan O. Karame, Avikarsha Mandal, Franck Youssef, and Erik Zenner. Ripple: Overview and outlook. In Trust and Trustworthy Computing - 8th International Conference, TRUST 2015, Heraklion, Greece, August 24-26, 2015, Proceedings, pages 163–180, 2015.
[29] US banks announce Ripple protocol integration.
[30] Ripple Labs Inc. Why is Ripple not vulnerable to Bitcoin's 51% attack? https://wiki.ripple.com/FAQ#Why_is_Ripple_not_vulnerable_to_Bitcoin.27s_51.25_attack.3F.
[31] Vitalik Buterin. Bitcoin network shaken by blockchain fork. com/3668/bitcoin-network-shaken-by-blockchain-fork/.
[32] Kim Joyes. Safety, liveness and fault tolerance - the consensus choices. Available from consensus_choice/.
[33] S. Nakamoto. Bitcoin: A Peer-to-Peer Electronic Cash System, 2009.
[34].
[35] Arthur Gervais, Ghassan O. Karame, Karl Wüst, Vasileios Glykantzis, Hubert Ritzdorf, and Srdjan Capkun. On the security and performance of proof of work blockchains. Cryptology ePrint Archive, Report 2016/555, 2016.
[36] Roger Dingledine, Nick Mathewson, and Paul Syverson. Tor: The second-generation onion router. In Proceedings of the 13th Conference on USENIX Security Symposium - Volume 13, SSYM'04, pages 21–21, Berkeley, CA, USA, 2004. USENIX Association.
[37].
[38] Dorit Ron and Adi Shamir. Quantitative analysis of the full Bitcoin transaction graph. In Financial Cryptography and Data Security - 17th International Conference, FC 2013, pages 6–24, 2013.
[39] Micha Ober, Stefan Katzenbeisser, and Kay Hamacher. Structure and anonymity of the Bitcoin transaction graph. Future Internet, 5(2):237–250, 2013.
[40] Arthur Gervais, Ghassan O. Karame, Vedran Capkun, and Srdjan Capkun. Is Bitcoin a decentralized currency? IEEE Security & Privacy, 12(3):54–60, 2014.
[41] Ripple Labs Inc. Giveaways - XRPtalk. giveaways/.
Chapter 10

Concluding Remarks

In this book, we analyzed in detail the security and privacy provisions of Bitcoin and its underlying blockchain. In addition to discussing existing vulnerabilities of Bitcoin and its various related altcoins, we also discussed and proposed a number of effective countermeasures to deter threats and information leakage within the system, some of which have already been incorporated in Bitcoin client releases. Note that proof-of-work (PoW) powered blockchains currently account for more than 90% of the total market capitalization of existing digital currencies. As far as we are aware, this book offers the most comprehensive and detailed analysis of the security and privacy provisions of existing PoW-based blockchains and of related clones/variants. Given that Bitcoin emerges as the most successful PoW blockchain instantiation to date, this book extracts essential lessons learned in security and privacy from eight years of research into the system, with the aim of motivating the design of secure and privacy-preserving next-generation blockchain technologies. We now summarize the main observations that we made throughout the book.
SUMMARY

For a long time, the notion of blockchain was tightly coupled with the well-known proof-of-work hash-based mechanism of Bitcoin. For most of its lifetime, it was believed that the security of Bitcoin's blockchain relies on the underlying security of the utilized hash function and on the assumption of an honest computing majority. Many users believed that, as long as no mining pool operator could harness 50% of the computing power in the network, Bitcoin was secure; miners would actively
abandon pools to ensure that this threshold was not reached. Recent research has, however, shown that Bitcoin does not properly incentivize miners to abide by the protocol; selfish mining, in which miners selectively release mined blocks in the network, proves to be a profitable strategy for miners to increase their mining advantage in the system. Even worse, the proof-of-work mechanism of Bitcoin is vulnerable to network-layer attacks, allowing resource-constrained adversaries to selectively eclipse Bitcoin nodes from receiving information from the network. When combined with selfish mining, such attacks are detrimental to the security of the system; as a result, a mining pool that harnesses as little as 32% of the computing power in the network can effectively control the security of the entire system. Moreover, securing Bitcoin transactions additionally depends on the ability of users to protect their private keys. Namely, since Bitcoin transactions basically consist of transferring the outputs of unspent previous transactions to a new public key, the compromise or loss of a private key means that peers can no longer redeem any transaction sent to the corresponding public key. Bitcoin stores these private keys in an unprotected user-specific structure, the wallet. There are a number of proposals/start-ups that offer to secure digital wallets on behalf of Bitcoin users; most of these proposals require users to offload trust to a limited number of entities in order to protect their wallets. Other proposals, such as those requiring support from multisig transactions, external cloud storage, and/or trusted computing, reduce the reliance on such trust assumptions but require support from additional hardware/functionality.
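The selfish-mining strategy described above admits a closed-form analysis: Eyal and Sirer's formula gives the relative revenue of a selfish pool with hash-rate share alpha when a fraction gamma of honest miners builds on the pool's block during a tie. The sketch below evaluates our transcription of that published formula (the 32% figure in the text comes from combining the attack with network-level eclipsing):

```python
def selfish_revenue(alpha: float, gamma: float) -> float:
    """Relative revenue of a selfish-mining pool (Eyal-Sirer formula):
    alpha = pool's share of total hash rate, gamma = fraction of honest
    miners that mine on the pool's block during a tie."""
    num = alpha * (1 - alpha) ** 2 * (4 * alpha + gamma * (1 - 2 * alpha)) \
          - alpha ** 3
    den = 1 - alpha * (1 + (2 - alpha) * alpha)
    return num / den

# Honest mining would earn exactly alpha of the revenue; selfish mining
# pays off once the relative revenue exceeds that share.
for a in (0.25, 1 / 3, 0.4):
    print(a, selfish_revenue(a, 0.0))
```

With gamma = 0, the profitability threshold sits exactly at alpha = 1/3: below it the strategy loses revenue, above it the pool earns more than its fair share.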
These observations motivate the need to understand and analyze the security of blockchains using a holistic approach covering the security of cryptographic primitives, network-layer and system-layer attacks, as well as the storage of private keys and secrets prior to any large-scale deployment. Note that even if there is a lower bound on the fraction of an honest majority to ensure the security of Bitcoin (which remains unknown up to now), the Bitcoin network requires considerable time to reach consensus. Such a consensus is essential to resist double-spending attacks in the network (and other misbehavior). Namely, Bitcoin requires six block confirmations for each transaction in the network—a process which consumes 60 minutes on average. This forces a number of Bitcoin merchants to bypass the network’s consensus protocol and accept unconfirmed payments—a move which clearly weakens the security of payments in the system. A number of studies have shown that unconfirmed transactions can be easily double-spent by resource-constrained adversaries without being noticed in the network. Although there were a number of attempts to secure unconfirmed payments in the system (e.g., Bitcoin XT), there is still no silver-bullet solution that can resist network-layer attacks. It was recently shown that network attacks can
easily circumvent most adopted countermeasures in the system. At the time of writing, the best countermeasures to prevent attacks on unconfirmed transactions consist of: (1) waiting a considerable amount of time before accepting the payment, or (2) installing several (e.g., five) machines running the Bitcoin client at various locations across the globe and ensuring that these machines are located behind a NAT or a firewall to prevent targeted eclipse attacks. This shows the need for next-generation blockchains to achieve fast consensus by design and to plan for realistic use cases and deployment settings, as most users/vendors expect digital currencies to realize secure and fast payments at low costs. In terms of privacy and anonymity, studies have shown that Bitcoin leaks considerable information about its users, since all transactions (including the timing and amounts exchanged) are public. As we explained in this book, this is a mandatory requirement to ensure the security of transactions within Bitcoin. This information leakage motivated considerable research to enhance the privacy of the system, and a number of coin-mixing schemes, such as Mixcoin and CoinJoin, have been proposed. These proposals offer privacy by offloading trust to one (or more) entities/participants in the system, which suggests a clear departure from the decentralized trust model of Bitcoin. To remedy this, a number of cryptographic extensions of Bitcoin, such as ZeroCoin, Extended ZeroCoin, and ZeroCash, propose the reliance on dynamic accumulators and zero-knowledge proofs of knowledge to enhance user privacy in the system. While some of these proposals can achieve unprecedented levels of privacy in Bitcoin by hiding coin expenditure in the network and hiding transaction amounts (and address balances), they result in an unacceptable performance penalty that effectively hinders their adoption within the Bitcoin system.
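The cryptographic extensions above are built on commitments: a coin is minted by committing to a secret value and later spent by proving, in zero knowledge, that the commitment belongs to the accumulated set. The toy sketch below shows only the commit/open step of a Pedersen-style commitment; the modulus and generators are illustrative stand-ins, far too small for real use, and real schemes additionally require generators with an unknown discrete-log relation:

```python
P = 2 ** 61 - 1      # toy prime modulus (illustrative only)
G, H = 3, 7          # toy generators (assumed independent; a real scheme
                     # must guarantee an unknown discrete-log relation)

def commit(value: int, blinding: int) -> int:
    """Pedersen-style commitment: c = G^value * H^blinding mod P.
    Hiding comes from the random blinding factor; binding rests on the
    hardness of computing discrete logarithms."""
    return (pow(G, value, P) * pow(H, blinding, P)) % P

def open_commitment(c: int, value: int, blinding: int) -> bool:
    """Check that (value, blinding) is a valid opening of c."""
    return c == commit(value, blinding)

c = commit(42, 987654321)
print(open_commitment(c, 42, 987654321))   # correct opening accepted
print(open_commitment(c, 43, 987654321))   # wrong value rejected
```

ZeroCoin-style constructions combine such commitments with an accumulator over all minted coins, so a spender can prove membership without revealing which commitment is theirs.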
This demonstrates the need to incorporate privacy-by-design mechanisms in next-generation blockchain technologies. The Bitcoin experience clearly shows that the sole reliance on pseudonyms and network-based protection is not enough to ensure an acceptable level of user privacy. The lack of privacy offered by the current Bitcoin system can however be seen as an enabler for accountability measures in the system. Incorporating accountability measures in Bitcoin is essential to deterring misbehavior, especially given the lack of workable mechanisms to ban/punish Byzantine nodes. Recall that, at the time of writing, Bitcoin nodes locally ban the IP address of the misbehaving user for 24 hours. Clearly, such an approach is not sufficient to deter misbehavior, since malicious peers can, for example, modify/spoof their IPs or even try to connect to and attack other peers who have not blacklisted their IP address. We argue that if any blockchain technology is to sustain decades of service, then it must incorporate accountability measures in order to ensure that a misbehaving user is indeed
punished. In this respect, one possible solution would be to enforce Bitcoin address blacklisting. Here, the idea would be that those Bitcoin addresses that have been found to misbehave (e.g., double-spend) are added to a public blacklist. Ideally, the BTCs of the blacklisted addresses would not be accepted by Bitcoin peers, and would therefore lose their value. Besides the concerns/issues related to the management and maintenance of such lists, this approach is not sufficient, when used alone, to deter misbehavior, since misbehaving users can be equipped with many addresses, each containing low balances. Therefore, one obvious question that emerges is whether it is possible to link different Bitcoin addresses of the same (misbehaving) user (address linkability). If such linking is possible, misbehaving users could receive harsher punishment for their misbehavior by not being able to spend a large fraction of their funds. The security and privacy of lightweight clients (operating under the so-called SPV mode) for blockchains is also of utmost importance. For instance, Bitcoin requires peers in the system to verify all broadcasted transactions and blocks. Clearly, this comes at the expense of storage and computational overhead. To remedy that, most users rely on lightweight client implementations that only perform a limited amount of verifications, such as the verification of block difficulty and the presence of a transaction in the Merkle tree, and offload the verification of all transactions and blocks to the full Bitcoin nodes. Lightweight clients need only receive the subset of network transactions that are relevant to the user's wallet. This, however, allows Bitcoin nodes to learn information about the addresses owned by the client simply by observing the transactions forwarded to the lightweight clients.
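The privacy of this filtering step hinges on the Bloom filter's false-positive rate: the more non-wallet transactions the filter accidentally matches, the less a full node learns about which forwarded transactions truly belong to the client. The sketch below evaluates the standard false-positive estimate (1 - e^(-kn/m))^k; the parameter values are illustrative, not those of any particular client:

```python
import math

def bloom_fpr(m_bits: int, n_items: int, k_hashes: int) -> float:
    """Standard Bloom filter false-positive estimate (1 - e^{-kn/m})^k
    for a filter of m bits holding n items with k hash functions."""
    return (1.0 - math.exp(-k_hashes * n_items / m_bits)) ** k_hashes

# A generously sized filter over a handful of wallet addresses is
# extremely precise, so nearly everything forwarded to the client really
# is the client's -- the information leak described above.
print(bloom_fpr(m_bits=4096, n_items=20, k_hashes=7))
print(bloom_fpr(m_bits=256, n_items=20, k_hashes=7))   # noisier filter
```

Shrinking the filter raises the false-positive rate and thus adds cover traffic, but studies cited below indicate that this alone does not restore meaningful privacy.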
Studies show that this information leakage cannot be fully deterred by existing solutions, such as the reliance on anonymizing networks or on the false positives of Bloom filters. On the other hand, the reliance on bullet-proof solutions such as private set intersection and/or private information retrieval techniques incurs a considerable computational overhead that cannot be tolerated by most lightweight clients. Given that most blockchain users no longer run full client implementations, these observations motivate the need to design lightweight and efficient SPV client modes that are privacy-preserving. Finally, one must pay special attention to the practical deployment of blockchain technologies in the real world. For instance, while the original design of Bitcoin aims at a fully decentralized Bitcoin, recent events in Bitcoin are revealing several aspects of centralization within the system. Namely, a large number of centralized services currently support Bitcoin (e.g., Bitcoin banks, Bitcoin mining pools, Bitcoin market exchanges, Bitcoin online wallets) and control a considerable share in the Bitcoin market. For instance, a quick look at the distribution of
computing power in Bitcoin reveals that the power of dedicated miners far exceeds the power that individual users dedicate to mining, allowing a few parties to effectively control the currency; currently the top three (centrally managed) mining pools control more than 50% of the computing power in Bitcoin. Indeed, while mining and block generation in Bitcoin was originally designed to be decentralized, these processes are currently largely centralized. On the other hand, other Bitcoin operations, like protocol updates and incident resolution, are not designed to be decentralized and are controlled by a small number of administrators whose influence does not depend on the computing power that they control but is rather derived from their function within the system. Bitcoin users do not have any direct influence over the appointment of the administrators—which is somewhat ironic since some of the Bitcoin users opt for Bitcoin in the hope of avoiding the centralized control typically exercised over national currencies. Furthermore, we note that Bitcoin introduces a level of transparency in terms of coin mining and spending since the transaction logs in Bitcoin are public and available for inspection by any interested party. However, it is not clear how any potential disputes would be resolved in Bitcoin since this would then require appropriate regulatory frameworks—a move that clearly goes against the very nature of Bitcoin. We further observe that the existence of public logs in Bitcoin can have some negative effects on this currency that extend beyond known privacy concerns. Bitcoin users can, for example, decide not to accept coins that appear to have originated from a particular address (i.e., that were mined by the owner of that address). Since the use of any coin (or its fraction) can be traced back to its origin, this decision by the users will practically devalue these coins because other users will become reluctant to accept these coins as payments. 
These observations motivate the need to learn from the various caveats in the current deployment of Bitcoin and consider decentralization in all deployment aspects of next-generation blockchains.
OUTLOOK

Irrespective of our opinion of Bitcoin and of speculations on Bitcoin's future, we argue that Bitcoin has provided a considerable number of relevant lessons for system designers, researchers, and blockchain enthusiasts. In terms of outlook, Bitcoin's blockchain fueled innovation, and a number of innovative applications have already been devised by exploiting the secure
and distributed provisions of the underlying blockchain. Prominent applications include secure time stamping, secure commitment schemes, secure multiparty computations, and smart contracts. Note that some of these extensions cannot be deployed without changing the code base of Bitcoin (i.e., via a hard fork). These are referred to as altcoins and require some measures to initiate currency allocation (e.g., via pegged sidechains) and preserve mining power by leveraging the already established Bitcoin community. Recently, IBM proposed the notion of Device Democracy with the goal of supporting consensus across a fully meshed network of IoT devices based on the blockchain technology. Other blockchain technologies were also proposed almost independently from Bitcoin. Many of these propose to replace Bitcoin's proof-of-work in order to offset its energy waste and scalability limits. For instance, a number of contributions propose the reliance on memory-based consensus protocols or on virtual mining, such as proof-of-stake. Other proposals resort to classic Byzantine fault-tolerant consensus protocols in the hope of increasing ledger closure efficiency and achieving high transactional throughput. Moreover, in contrast to the unspent transaction output model used in Bitcoin and its altcoins, some blockchains adopt models such as credit networks or account-based models, in which transactions directly link to the issuer accounts instead of pointing to the output of previous transactions. Among these alternative blockchains, Ripple, the current holder of the second largest market capitalization after Bitcoin, maintains a distributed ledger that keeps track of all the exchanged transactions and account states in the system. Ledgers are created every few seconds and contain a list of transactions to which the majority of validating servers has agreed.
This is achieved by means of Ripple's proprietary consensus protocol, which is an iterative process executed among validating servers. Ripple has its own currency, called XRP; it also accepts credit-based payments (the IOU transaction model) if a trust path between the sender and receiver exists. Ripple has recently been criticized for its centralized deployment (as most of the validation nodes are maintained by Ripple Labs); the underlying consensus protocol of Ripple has also received considerable criticism. Stellar shares a similar model to Ripple, but relies on a federated Byzantine agreement protocol in order to resolve the various issues faced by Ripple. Ethereum brings a new dimension to the blockchain, as it expands the standard application of blockchains from the mere public bulletin board approach to a general-purpose peer-to-peer decentralized platform for executing smart contracts. Namely, Ethereum enables any entity to create and deploy novel applications by writing decentralized contracts. The contract itself is a small program that maintains its own key-value store through transaction
calls. Therefore, multiple application services can run on the shared Ethereum platform, whose role is to maintain consensus in the network. The current consensus protocol used in Ethereum is GHOST, which is a variant of proof-of-work. The next-generation Ethereum release, however, is expected to adopt a more efficient security-deposit proof-of-stake consensus protocol. IBM's Open Blockchain (OBC) is mainly inspired by Ethereum and also provides a general-purpose application platform. In addition, OBC introduces membership services to provide authorization for participation and offers confidentiality for transactions. Nevertheless, in spite of considerable work in this area, there are still many challenges with respect to system scalability and performance that need to be overcome in order to ensure a large-scale adoption of the blockchain paradigm. More specifically, existing blockchain technologies cannot match the performance or transactional volume of conventional payment methods (e.g., Visa can handle tens of thousands of transactions per second). Moreover, experience from Bitcoin has shown that even the modest scalability measures currently deployed often come at odds with the security of the system. It still remains unclear how to devise blockchain platforms that can effectively scale to a large number of participants without compromising the security and privacy provisions of the system. We only hope that the findings, observations, and lessons contained in this book can solicit further research in this area.
About the Authors

GHASSAN KARAME

Dr. Ghassan Karame is a chief researcher at NEC Laboratories Europe. He received his master of science in information networking from Carnegie Mellon University (CMU) in December 2006, and his Ph.D. degree in computer science from ETH Zurich, Switzerland, in 2011. Between 2011 and 2012, he worked as a postdoctoral researcher at the Institute of Information Security of ETH Zurich. Dr. Karame is interested in all aspects of security and privacy, with a focus on cloud security, SDN/network security, and Bitcoin/blockchain security. Dr. Karame is a member of the IEEE and of the ACM and has served on the program committees of a number of prestigious computer security conferences. More information about Dr. Karame can be found at.
ELLI ANDROULAKI

Dr. Elli Androulaki obtained her undergraduate degree with distinction from the Electrical and Computer Engineering school of the National Technical University of Athens and received both her Ph.D.
and M.Sc. degrees from Columbia University, New York, under the supervision of Prof. Steven Bellovin. Her thesis involved the design and analysis of protocols for privacy-preserving and accountable e-commerce operations and resulted in the protocol-oriented construction of a centralized identity management architecture. In 2011, Dr. Androulaki joined the Systems Security group at ETH Zurich as a postdoctoral researcher under the supervision of Prof. Srdjan Capkun, where she first started investigating Bitcoin and blockchain security. Dr. Androulaki joined the IBM Research Zurich Laboratory in May 2013 as a member of the Cloud Storage and Security group. She has been leading the IBM contribution to security aspects of the Hyperledger/fabric (former Open Blockchain) project. Dr. Androulaki has served on the program committees of several prestigious network and computer security conferences.
217
Index
issuer, 11 lightweight clients, 49, 125 linkability, 87 Liquid, 181 listening period, 74 Litecoin, 164 Local Bitcoins, 146 longest chain, 66 loss, 148 M-Pesa, 28 mediator-based payments, 12 mediator-based systems, 18 merged mining, 181 Merkle root, 36 Merkle tree, 35, 126 micropayments, 19, 25 miners, 34, 44, 48 mining, 43 mining pool, 48, 151 minipay, 25, 26 mixers, 85, 97 mixing services, 97 multi-input transactions, 89 multi-sig transactions, 40 multicloud storage, 150 multicurrency wallets, 147 multiple Bloom filters, 137 multipool, 152 multisig, 147, 149, 172 Namecoin, 165 NAT, 94 new tables, 62 nLockTime, 43 nLocktime, 172 node restart, 63 noninteractive payments, 12 nonmalleability, 116 nonoutsourceable proof-of-work, 154 nonrepudiation, 15 OBC, 211 observers, 74 off-line payments, 17
one-wayness, 35 OneName, 171 online payments, 17 online wallets, 147 open blockchain, 185 orphan, 45 orphan blocks, 44 P2P, 33 P2PKH transaction, 39 P2SH transaction, 39 pay on target, 152 pay per last N shares, 152 pay per share, 151 payee, 11 payer, 11 payment processor, 144 payment systems, 11 PayPal, 12 paypal, 27 Paystand Bitcoin Merchants, 145 PBFT, 189 Pedersen commitments, 102 pegged sidechain, 180 penalty, 56 Peppercoin, 28 perfect zero knowledge, 105 Permacoin, 169 person to person payment, 144 point of sale solution, 145 pool hopping, 152 POR, 169 PoW, 34, 46 privacy, 13, 15, 16, 85 privacy quantification, 132 privacy-preserving payments, 17, 85 probability, 68 proof of existence, 169 proof of knowledge, 105 proof of membership, 126 proof of stake, 184, 210 Proof of work, 34, 43 proofs of knowledge, 102 proportional reward, 152 pseudonyms, 33 public randomness beacon, 171
218
rational adversary, 16 recent shared maximum pay per share, 152 redeemScript, 40 request management system, 63 resistance to impersonation attacks, 15 responsible nodes, 50 Revel Systems, 145 Ripple, 193, 194, 210 Ripple wallet, 148 Ripple’s consensus, 195 Ripple’s ledger, 195 risks, 93 Satoshi coin, 38 SatoshiDICE, 155 SCORE scheme, 152 script execution, 41 Scripts, 37 scrypt, 164 secp256k1 curve, 36 security, 13, 14, 125, 127 Segregated witnesses, 181 selfish mining, 62, 66 shadow address, 42, 87, 90 shared maximum pay per share, 152 shifted geometric distribution, 69 Sidechains, 180 signature of knowledge, 102 simple payment verification, 125 smart contracts, 172, 181 SNARK, 104 SPV, 49, 125, 147 SPV mining, 127 SPV proof, 180 Stellar, 210 succinctness, 105 supported transaction types, 38 synchronous, 61 tables, 45, 46, 73, 75–77, 137, 151 tamper-resistant hardware, 20 TigerDirect, 143 time-dependent source of randomness, 171 time-out, 64 time-outs, 55 Tor exit node, 95
Index
Tor networks, 95 transaction, 38 transaction anonymity, 16 transaction confidentiality, 191 transaction confirmation, 67 transaction output, 34 transaction unlikability, 16 transaction verification, 65 transactions, 59 trickle node, 51 trickling, 51 tried tables, 62 trusted computing, 150 tumblers, 98 two Bloom filters, 134 uncles, 183 UNL, 195 UTXO, 38 verack message, 51 version messages, 51 wallets, 148 XBTerminal, 145 XRP, 193 zero knowledge proofs of knowledge, 102 zero-confirmation transactions, 69 zero-knowledge systems, 27 ZeroCash, 100, 114 ZeroCoin, 100, 105 ZK-SNARKs, 104, 105, 114 zk-SNARKS, 118 | https://issuu.com/nadirchine/docs/bitcoin-blockchain-security | CC-MAIN-2018-26 | refinedweb | 66,352 | 52.39 |
Update: Autohosted apps are no more. You should check out my new article on creating a Provider-hosted app. Alternatively, read on, but be careful to use Provider-hosted features instead of Autohosted. Thanks!
SharePoint apps are not at all like the SharePoint solutions we used to deal with. With the App model, your code is a completely self-contained web application running on a web server far away from the SharePoint server. This presents a number of downsides – gone are the days of using the Server object model and elevating privileges to hack in a solution to whatever problem faces you. But this article isn't about dwelling on the past (much…); it's about how to write a cleanly architected app based on MVC 5 using Visual Studio 2013. And that in itself will make programming for SharePoint much faster and cleaner (in theory!).
You can still write a traditional SharePoint solution. But that may not be a good idea:
So assuming you've decided to write an app, what's next? A big, initial architectural decision you'll need to make is around the hosting model: there are three options and they depend on the functionality you're planning to provide, and the target audience for your app:
Check out this MSDN article for more in-depth coverage of the hosting options.
This article is going to focus on the creation of an Autohosted app for SharePoint 2013 online.
You will need:
We're going to create a pretty silly app called AnimalApp. This is going to allow our users to manage a list of animals. So useful! Features are:
Now, it would probably be sensible to store the list in a SharePoint list. But I wanted to show how the database connection string is handled in autohosted apps.
Let's get started! Click File -> New -> Project and choose App for SharePoint 2013:
Choose autohosted and fill in the address of your developer site:
On the next screen, select ASP.NET MVC Web Application and click finish. This creates two projects for you:
Take a look in the AnimalAppWeb project. A lot of code is generated for us! The main areas we're concerned with are:
Without doing anything, you can hit F5 and deploy your app. You may need to login to your developer site. Once it's deployed a browser window will be opened and SharePoint will request that you approve your app:
Only the basic permissions are requested by default. Click Trust It, and with any luck you'll be redirected to this:
Before we write any code, let's see how SharePoint security "happens". There are SharePointContext.cs and TokenHelper.cs files (see image above) that contain all of the security-related code – but how do they get called? The app authentication mechanism happens something like this:
Up-to-and-including-installation:
Following installation, the flow is as follows:
SPHostUrl
1 Point 3 was a little wooly. You don't really need to write any code to store off the authentication data – the project template handles this out of the box. Take a look in Controllers\HomeController.cs:
public class HomeController : Controller
{
[SharePointContextFilter]
public ActionResult Index()
{
return View();
}
...
}
The Index method above (i.e., your app home page) has a helpful annotation – SharePointContextFilter. This is defined in Filters\SharePointContextFilterAttribute.cs. Its job is to call into that SharePointContext code we mentioned and handle user checks. This is where the OAuth calls live – it handles the app access token validation and stores it off into the user session. Each time it's called it combines what is stored in the user session with the SPHostUrl and redirects to the login page if necessary.
Index
SharePointContextFilter
SharePointContext
Give the login mechanism a go – try loading the default page in a Chrome Incognito (or Internet Explorer InPrivate) tab and the login window is presented. Now open the 'About' page in a new tab – it won't require login, since it doesn't have the SharePointContextFilter attribute on its Controller method by default.
You can stick this attribute on any controller method you like! If you leave it out, then anyone can access that page. So if it's for users' eyes only, or you're interacting with SharePoint using the CSOM, then you need to add this attribute in.
So now that you know the user is logged in successfully, you can create a SharePoint context and make CSOM calls. This is pretty simple:
var spContext = SharePointContextProvider.Current.GetSharePointContext(httpContextBase);
using (var clientContext = spContext.CreateUserClientContextForSPHost())
{
if (clientContext != null)
{
//CSOM code
}
}
I mentioned that the SPHostUrl (the URL of the SharePoint site you're currently on, in our case, the developer site) is passed to every page. This is pretty much true: the script in Scripts\spcontext.js is loaded into each page (How? See App_Start\BundleConfig.cs and Global.asax.cs and connect the dots…). Once the script is loaded onto the page it performs a little bit of a hack – every URL on the page is appended with SPHostUrl parameter which is later read-in by the SharePointContextProvider code. And the cycle begins again!
SharePointContextProvider
Now, this presents a slight complication – if you have AJAX requests in your app, or a form in which you post data back to the server, then the SPHostUrl isn't going to be automatically included in those requests. The solution is simply to add it in manually. We're going to see this "bug in action when we add some functionality to our AnimalApp.
As described earlier, our app is database powered – let's see how that all works!
Click File -> Add -> New Project and select Other Languages -> SQL Server Database Project .
Once it's added, you need to point your app project at the database project. Do this by clicking on the AnimalApp project, and in the properties window, select AnimalDatabase under the SQL Database property.
Visual Studio will helpfully offer to update your SQL Server project to target Azure SQL. Click Yes. Now your SQL Azure instance will be automatically configured along with the rest of your app upon deployment. Sweet!
Now we'll add data to the database. Right click the AnimalDatabase project, click Add -> Table. Call it Animals and click Add.
Add a Name column and a UserId column:
Name
UserId
This will store the animal name, and the User Id who inserted it.
If you want the Id to be automatically incremented (and we do), change the script for the Id column to contain the IDENTITY keyword.
Id
IDENTITY
OK, so this isn't really SharePoint app related, and it's not MVC related, but it's cool so I'm mentioning it!
In ye olden times, you'd start writing a data access layer in your C# project to access your database. We're not going to do that: instead, we're going to use Entity Framework to generate all that code for us. You could alternatively use the Entity Framework "Code First" functionality to write your C# classes and have it generate the appropriate SQL table schema; I prefer writing the SQL myself.
You'll need to add Entity Framework Power Tools to Visual Studio at this point if you haven't already.
Click Tools -> Extensions and Updates and search for Entity Framework Power Tools. As of writing, the current version is Beta 4.
Once that's installed, you can use it to generate a C# class for each database table. Note that if you'd added multiple tables, along with relationships (foreign keys and the like), the resultant C# classes will be created with members and collections as appropriate.
Before generating that code, you'll need to deploy your database locally, so right click on AnimalDatabase and click Publish.
Click Edit for the Target database connection and set it to "(localdb)\Projects. This corresponds to SQL Express on your developer machine but obviously you can use whatever SQL Server you have available.
Click Publish. The Data Tools Operations window should tell you it's been published successfully. Now we can invoke Entity Framework – right click on the AnimalAppWeb project (the project we'll be accessing the database from) and click Reverse Engineer Code First.
Again, enter the server name as (localdb)\Projects. Under 'Connect to a database', the AnimalDatabase should be present, so choose that. Click OK.
Now, once it's finished generating all the loveliness, you'll notice that it has helpfully put all your data classes under the Models folder – right where they belong!
At this point I'd like to point out that the classes generated are marked as partial. This is very helpful because you will probably want to extend them. For example, let's add a new constructor to the Animal class. Add a file in the same folder as Animal.cs and call it Animal_Partial.cs. Edit the code as follows:
partial
Animal
public partial class Animal
{
public Animal() { }
public Animal(string name, int userId)
{
this.Name = name;
this.UserId = userId;
}
}
Now when we re-run the Entity Framework code generation, our changes to the Animal class won't be overwritten.
Now we've got to sort out our connection string. SharePoint autohosted apps use a specific convention for connection strings: you define it in the web.config with the key SqlAzureConnectionString. This means that when your app is deployed and the database is installed to an Azure instance, the installer will automatically update the connection string to point to the dynamically deployed database. Clever! So add this setting to your web.config appSettings node:
SqlAzureConnectionString
appSettings
<add key="SqlAzureConnectionString"
value="Data Source=(localdb)\Projects;Initial Catalog=AnimalDatabase;Integrated
Security=True;Connect Timeout=30;Encrypt=False;TrustServerCertificate=False" />
All you need to do here is change the Initial Catalog to be the name of your database.
Initial Catalog
Now, Entity Framework has created another class for us which we need to look at: the AnimalsDatabaseContext class which sets up the database connection for us. The base of this class, DbContext, accepts the connection string as a constructor argument, so we'll just add a new constructor and read in the value from the web.config. Add a new file, AnimalDatabaseContext_Partial and mark it partial like we did last time:
AnimalsDatabaseContext
DbContext
I've also added a convenience function here for creating a new instance of the class by reading the connection string out of the web.config.
We're going to quickly add some pages to read/create/update/delete animals. Visual Studio will generate these for you based on your entity framework models that you created earlier.
Right click on the Controllers folder and go to Add Controller. Select "MVC5 Controller with views, using Entity Framework:
Fill in the Add Controller dialog like this (it should be fairly self-explanatory):
Once you click Add, lots of files will be generated and added to your solution:
AnimalController
The only thing remaining for us to do is to add links to our new CRUD pages on the main navigation, which lives in Views/Shared/_Layout.cshtml.
Open that file and look for the section where the navbar is rendered:
Add in links for your Animal pages:
The three arguments for the ActionLink method are:
ActionLink
title
actionName
controllerName
Hit F5 and you'll be presented with some lovely CRUD pages. How easy was that??
You may notice that you can use your new animal pages anonymously. To secure them, we'll apply the SharePointContextFilter to them.
Open the new controller file, AnimalController, and annotate each method with SharePointContextFilter:
Now those pages are secured.
The complication with adding security to the new Animal CRUD pages is that they're going to need the SPHostUrl passed to them during post back (for example, when you add, edit, or delete an Animal). If you try and add a new animal, it'll be inserted successfully but then you'll be presented with this message:
"Unknown User: Unable to determine your identity. Please try again by launching the app installed on your site".
Why is that? The reason is that (as you can see from the URL) the SPHostUrl parameter hasn't been passed along and therefore authentication has failed. This line is responsible (inside the Create method of AnimalController):
Create
What's happening is that the animal is created and inserted, but then we're redirecting back to the index page – and without the vital SPHostUrl parameter. We can fix that very simply by adding the parameter to the redirection:
return RedirectToAction("Index",
new { SPHostUrl = SharePointContext.GetSPHostUrl(HttpContext.Request).AbsoluteUri });
Here we're simply adding the SPHostUrl as a URL parameter to the request for the Index page. You should add this parameter to any RedirectToAction call where the target page performs SharePoint authentication.
RedirectToAction
We're going to configure our pages so that users can only see their own animals.
Let's first do a little bit of tidy up. Remove the UserId column from the views:
Above is the relevant code to remove from the Create.cshtml page. You should locate and remove the relevant UserId code from Delete.cshtml and Edit.cshtml too. We'll leave Details.cshtml and Index.cshtml for now; we're going to display the UserId for administrators in a later step.
Now, we're going to manually set the User Id, on creation, to the current SharePoint user. Firstly we need to retrieve the User Id – the place that makes most sense to do this is in the Filters\SharePointContextFilterAttribute.cs class – it is, after all, already executed for any secured page. And this is the same point at which we know a user is successfully logged in.
Add the following method:
private void GetSPUserDetails(HttpContextBase httpContextBase, dynamic viewBag)
{
var spContext = SharePointContextProvider.Current.GetSharePointContext(httpContextBase);
using (var clientContext = spContext.CreateUserClientContextForSPHost())
{
if (clientContext != null)
{
User spUser = clientContext.Web.CurrentUser;
clientContext.Load(spUser, user => user.Title, user => user.Id);
clientContext.ExecuteQuery();
viewBag.UserName = spUser.Title;
viewBag.UserId = spUser.Id;
}
}
}
This simply requests user details from SharePoint using the CSOM, and applies a couple of properties – UserName and UserId to the supplied viewbag. You can call this method from the OnActionExecution method like this:
UserName
OnActionExecution
We're passing in the current HttpContext, so that we can create a SharePoint context, and also the ViewBag – the ViewBag is used as a convenient location for caching data which is accessible both from the View and the Controller.
HttpContext
ViewBag
View
Controller
Then open AnimalController and update Index() to filter Animals for the current user ID:
Index()
public ActionResult Index()
{
int userId = ViewBag.UserId;
return View(db.Animals.Where(a => a.UserId == userId).ToList());
}
Next, update the Create method to set the user id on creation:
Note that we've also removed UserId from the bound properties; we removed it from the HTML page earlier. You should also remove it from the Edit method.
Edit
Run the solution again – now users can only see their own animals.
SharePoint apps generally match the styling of the site on which they're installed. In this step, we'll add that styling and remove some of our MVC-default styling.
Firstly, we're going to add the SPHostUrl to the ViewBag. The reason will become apparently in a minute. Add it within the GetSPUserDetails method we created earlier:
GetSPUserDetails
Note that we're trimming the final slash off, as we're going to concatenate another URL portion on to it.
Open up Views\Shared\_Layout.cshtml and add this code to the header:
<link href='@ViewBag.SPHostUrl/_layouts/15/defaultcss.ashx' type='text/css' rel='stylesheet' />
<script src='@ViewBag.SPHostUrl/_layouts/15/SP.UI.Controls.js'></script>
This will pull in the CSS and controls script from your SharePoint server. This is cool because now your app will match the styling of your SharePoint site! The goal is to make your app blend in as much as possible.
If you run your app now you can actually already see that the fonts have changed to match your SharePoint site.
Again in the _Layout.cshtml file, we're going to start by importing jQuery:
@Scripts.Render("~/bundles/jquery")
jQuery is actually already imported, but at the bottom of the file. So remove that. Alternatively, you can just ensure that any reference to jQuery is after the import at the bottom.
Then add the script to render the top bar whenever the page loads:
$(function () {
var options = {
appHelpPageUrl: '@Url.Action("About","Home")',
appIconUrl: "AppIcon_Blue.png",
appTitle: "MVC5 app",
settingsLinks: [
{
linkUrl: '@Url.Action("Contact","Home")',
displayName: "Contact"
},
]
};
var nav = new SP.UI.Controls.Navigation("chrome_ctrl_container", options);
nav.setVisible(true);
});
Notice here that we're rendering the Contact page as an option under the Settings menu. This is really just for illustration; contact isn't a setting, I know!! I've also added an Img folder to my project, and a 96x96 pixel image to use as my app icon.
Next, add the div tag that will indicate where to render the navigation bar. This should be the first element under body.
div
<body>
<div id="chrome_ctrl_container"></div>
Now, at this stage we're into the realm of CSS. You might want to get a designer involved! We are essentially combining the MVC5 default CSS with the SharePoint CSS and app navigation bar. To make it look semi-decent, I had to do the following:
padding-top: 50px
body
navbar-header
navbar-inverse
navbar-fixed-top
navbar-collapse
collapse
Your body HTML should begin something like this:
<body>
<div id="chrome_ctrl_container"></div>
<div class="navbar">
<div class="container">
<ul class="nav navbar-nav">
<li>@Html.ActionLink("Create Animal", "Create", "Animal")</li>
<li>@Html.ActionLink("View Animals", "Index", "Animal")</li>
</ul>
</div>
</div>
...
</body>
With your app looking like this:
This article is getting too long already, so I'm going to stop here! However, there are a few things that I'd love to include:
Event receivers in apps are a pretty tough concept; I mean, Microsoft will tell you they're easy, but I haven't had a lot of luck. You need to implement a WCF web service (hence they are called remote event receivers) which SharePoint calls when the event occurs. You register that service as an event receiver during app installation. Click here for details on creating an app event receiver.
This isn't really too difficult and isn't really related to MVC so I've not included it. You can add a custom list that gets created when your app is installed (right-click your app project, Add -> New Item -> List), and interacting with it is a CSOM matter.
A web part in the app world is simply a web page inside an iFrame – which is placed on a SharePoint page. So the best idea may be to write a controller specifically for your web part and then create an associated view.
Click here to download a zip file containing the sample code. It's pretty big (16MB), because of the plethora of dependencies it includes!
There are quite a few little caveats that you need to overcome to make a proper SharePoint 2013 app out of an MVC application. I hope it's helpful - please let me know in the comments! Happy coding!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
public static AnimalDatabase_1Context Create()
{
return new AnimalDatabase_1Context(WebConfigurationManager.AppSettings["SqlAzureConnectionString"]);
}
X-Frame-Options = SAMEORIGIN
Application_Start
AntiForgeryConfig.SuppressXFrameOptionsHeader = true;
using (var clientContext = spContext.CreateUserClientContextForSPHost())
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://codeproject.freetls.fastly.net/Articles/695161/Walkthrough-Creating-a-O-SharePoint-App-wi?msg=4796012#xx4796012xx | CC-MAIN-2021-49 | refinedweb | 3,289 | 55.64 |
Roberto Hernandez
Ranch Hand
32 posts · 6 threads · 0 cows
since Apr 29, 2009
I'm a Java Developer currently working in San Antonio TX.
I was born and raised in Mexico. My family migrated to the U.S. in 1998.
I graduated from the University of Texas - Pan American.
San Antonio
Recent posts by Roberto Hernandez
Generate javadoc for multiple projects using the javadoc ant task
I have tried the suggestion posted above and it didn't work for me. I get the following error:
BUILD FAILED
C:\projects\SRS_WTW\SJSAS_HANDHELD_2_BUILD\build.xml:258: Javadoc failed: java.io.IOException: Cannot run program "C:\Sun\SDK\jdk\bin\javadoc.exe": CreateProcess error=87, The parameter is incorrect
8 years ago
Other Build Tools
How Can I return the generated key with Hibernate?
Hi All,
This is the code that I'm trying to convert to Hibernate. I'm fairly new to Hibernate so please bear with me.
public void insertApplicationDetails(Application app) throws SQLException {
    try {
        String appKey = dao.insertApplication(app);
        Module module = new Module();
        module.setAppId(Integer.valueOf(appKey));
        module.setModuleName(app.getAppName());
        module.setModuleOverview(app.getOverview());
        module.setModuleSwitch(CommonsConstants.YES);
        String moduleKey = dao.insertAppModuleInfo(module);
        ArrayList<Function> functions = app.getFunctions();
        dao.insertModuleFunctionality(Integer.valueOf(moduleKey), Integer.valueOf(appKey), functions);
        dao.insertModuleBusAreas(Integer.valueOf(moduleKey), Integer.valueOf(appKey), app.getBusAreas());
    } catch (NamingException e) {
        e.printStackTrace();
        throw new SQLException(e);
    }
}
dao.insertApplication() returns the generated key as a String (appKey) because it will be a foreign key in the next insert of this transaction (dao.insertAppModuleInfo), and the generated key from that insert (moduleKey) will in turn serve as a foreign key for dao.insertModuleFunctionality() and dao.insertModuleBusAreas(). Am I making myself clear? There's probably a better way to do this in Hibernate, I just don't know how. I'm hoping you all can help.
Thanks,
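For what it's worth, the key-chaining flow described above can be sketched with a stdlib-only stand-in for the DAO. No real database is involved; the method names mirror the snippet, while the in-memory maps and sequence counters are purely hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// In-memory stand-in for the DAO: each insert returns the generated key,
// which the caller then feeds into the next insert as a foreign key.
class KeyChainSketch {
    private final AtomicInteger appSeq = new AtomicInteger();
    private final AtomicInteger moduleSeq = new AtomicInteger();
    private final Map<Integer, String> applications = new LinkedHashMap<>();
    private final Map<Integer, Integer> moduleToApp = new LinkedHashMap<>();

    // Mirrors dao.insertApplication(app): insert a row, hand back its new key.
    public String insertApplication(String appName) {
        int key = appSeq.incrementAndGet();
        applications.put(key, appName);
        return String.valueOf(key);
    }

    // Mirrors dao.insertAppModuleInfo(module): the appKey returned by the
    // previous insert becomes this row's foreign key.
    public String insertAppModuleInfo(int appId) {
        int key = moduleSeq.incrementAndGet();
        moduleToApp.put(key, appId);
        return String.valueOf(key);
    }

    public static void main(String[] args) {
        KeyChainSketch dao = new KeyChainSketch();
        String appKey = dao.insertApplication("MyApp");
        String moduleKey = dao.insertAppModuleInfo(Integer.valueOf(appKey));
        System.out.println("appKey=" + appKey + " moduleKey=" + moduleKey);
    }
}
```

The point of the sketch is only the shape of the transaction: each insert yields the key the next insert depends on.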
8 years ago
Object Relational Mapping
How Can I return the generated key with Hibernate?
I'm inserting records in the database, but I need to know how I can return the generated key using Hibernate. I'm currently using a prepared statement to return the key as a String:
ps = conn.prepareStatement(sql, PreparedStatement.RETURN_GENERATED_KEYS);
After the query is executed, I use this code to get back the generated key for the record I just inserted:
ps.executeUpdate();
rs = ps.getGeneratedKeys();
if (rs.next()) {
    key = rs.getString(1);
} else {
    logger.warn("There are no generated keys.");
}
I need to be able to do the same thing with Hibernate but I don't know how. Can anyone please help???
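For anyone landing on this thread later: in Hibernate, Session.save() already returns the generated identifier (as a java.io.Serializable), so no getGeneratedKeys() dance is needed. Below is a stdlib-only mock of that contract just to show the shape; the real call would be org.hibernate.Session.save(entity), and the MockSession class here is entirely made up:

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal stand-in for org.hibernate.Session: save() persists the entity
// and hands back the generated identifier, mirroring Hibernate's contract.
class MockSession {
    private final AtomicInteger seq = new AtomicInteger();
    private final Map<Serializable, Object> store = new HashMap<>();

    // Real Hibernate's Session.save(Object) likewise returns Serializable.
    public Serializable save(Object entity) {
        Serializable id = Integer.valueOf(seq.incrementAndGet());
        store.put(id, entity);
        return id;
    }

    public Object get(Serializable id) {
        return store.get(id);
    }

    public static void main(String[] args) {
        MockSession session = new MockSession();
        Serializable id = session.save("some entity");
        // With real Hibernate the caller's code would read:
        //   Serializable id = session.save(entity);
        //   String key = String.valueOf(id);
        System.out.println("generated key = " + id);
    }
}
```

So the String key used as a foreign key elsewhere can simply be String.valueOf(session.save(entity)).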
8 years ago
Object Relational Mapping
Unable to control logging levels in Struts2 using log4j.properties
Thanks David, I thought about that too, but I'm not really sure; I haven't messed around with GlassFish settings much. I'll investigate even further and will post again once I find the solution.
8 years ago
Struts
Unable to control logging levels in Struts2 using log4j.properties
Ok, here's my dilemma:
I have a Struts2 application running on a GlassFish server. Everything runs just fine, but the problem I have is configuring the logging levels using log4j. Every time I submit a request, my console gets a ton of messages, and the server.log file is rotated frequently because the logs fill up rapidly. I've tried to decrease the log levels for some of the Struts2 packages, but nothing seems to work.
Here's my log4j.properties
# Set root logger level to WARN and append to stdout
log4j.rootLogger=ERROR, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %5p (%c:%L) - %m%n

# Print only messages of level ERROR or above in the package noModule.
log4j.logger.noModule=ERROR

# Struts2
log4j.logger.freemarker=ERROR
log4j.logger.org.apache.struts2=WARN
log4j.logger.org.apache.struts2.components=ERROR
log4j.logger.org.apache.struts2.dispatcher=ERROR
log4j.logger.org.apache.struts2.convention=ERROR

# OpenSymphony Stuff
log4j.logger.com.opensymphony=ERROR
log4j.logger.com.opensymphony.xwork2.ognl=ERROR

# Spring Stuff
log4j.logger.org.springframework=INFO
Does anyone know what I can do to stop all these messages from appearing in my logs? I just want error messages, not a bunch of debug, info, and warning messages.
I've tried a lot of things, but now I don't know what to do.
Does the fact that I have both commons-logging.jar and log4j.jar in my lib have anything to do with this? As far as I know, Struts2 uses commons-logging, but I would think that they can both work independently?
Please help
Here's a snippet from my console
WARNING: 10/01/21 11:09:23 DEBUG impl.InstantiatingNullHandler: Entering nullPropertyValue [target=[com.opensymphony.xwork2.DefaultTextProvider@578b06], property=struts]
WARNING: 10/01/21 11:09:23 DEBUG impl.InstantiatingNullHandler: Entering nullPropertyValue [target=[com.opensymphony.xwork2.DefaultTextProvider@578b06], property=struts]
WARNING: 10/01/21 11:09:23 DEBUG xwork2.DefaultActionProxy: Creating an DefaultActionProxy for namespace /protected/mac and action name LoadMacHome
WARNING: 10/01/21 11:09:23 DEBUG interceptor.I18nInterceptor: intercept '/protected/mac/LoadHome' {
WARNING: 10/01/21 11:09:23 DEBUG interceptor.I18nInterceptor: applied invocation context locale=en_US
WARNING: 10/01/21 11:09:23 DEBUG interceptor.I18nInterceptor: before Locale=en_US
WARNING: 10/01/21 11:09:23 DEBUG impl.InstantiatingNullHandler: Entering nullPropertyValue [target=[com.heb.apps.mac.actions.DisplayMACHome@b55b71, com.opensymphony.xwork2.DefaultTextProvider@578b06], property=struts]
WARNING: 10/01/21 11:09:24 DEBUG interceptor.FileUploadInterceptor: Bypassing /protected/mac/LoadMacHome
WARNING: 10/01/21 11:09:24 DEBUG interceptor.StaticParametersInterceptor: Setting static parameters {}
WARNING: 10/01/21 11:09:24 DEBUG interceptor.ParametersInterceptor: Setting params NONE
WARNING: 10/01/21 11:09:24 DEBUG interceptor.ParametersInterceptor: Setting params
show more
8 years ago
Struts
Newbie: How to get fields after Struts2 web form is submitted
I'm not very familiar with the struts2 framework yet, so I need your help on this scenario.
I have a form in a jsp with 15 fields. Once I submit the form, the request is passed on to the action and my question is "How do you get a hold of those fields (values).
I have read in many tutorials that you have to use getters and setters for each field and that works, but I think it's kind of cumbersome to do this for each form that I have because the same form fields will be used in multiple forms throughout the application and I don't want to have the same getters and setters declared in multiple places.
Is there a way to avoid this?
Please please help!!!
Thanks,
show more
8 years ago
Struts
How to Make Dojo wait until request finishes parsing data
Hi, I'm using the Struts 2 Dojo Plugin and I'm trying to expand a tree widget with dojo. After the user saves the data, the tree widget is destroyed and re-loaded in it's pane, but I'm trying to expand the node the user was working with after it finishes loading.
I've tried publishing and listening to an event among other things but can't figure it out.
Is it possible to make dojo wait until a widget is fully loaded? I've tried dojo.addOnLoad() but nothing. Please help!!!
show more
8 years ago
HTML Pages with CSS and JavaScript
Rich text box type control in struts
If you're using struts2 you could use the struts2 dojo plugin and then you'll just have to declare it using <sx:textarea>. It's very simple and straight forward.
show more
9 years ago
Struts
Submitting a struts form outside of the form body
You would have to invoke a javascriipt function to submit your form
<script type="text/javascript"> function submitform() { document.myform.submit(); } </script>
then in your button, declare an onclick attribute:
<input type="button" onclick="submitform();" value="Submit" />
make sure you give your form a "Name" and "Id" that way you can get the form element
show more
9 years ago
Struts
JSTL : Retrieve Param Values Using Loop
Try building a collection in your Servlet or Action
String[] valuesArray = request.getParameterValues("tier"); ArrayList<String> tiers = new ArrayList<String>(10); if(valuesArray != null){ for(int i = 0; i < valuesArray.length; i++){ tiers.add(valuesArray[i]); System.out.println("Adding : " + valuesArray[i]); } } request.setAttribute("tiers", tiers);
Now, in your JSP, add the forEach loop
<c:forEach ${tier} </c:forEach>
show more
9 years ago
JSP
JSTL : Retrieve Param Values Using Loop
Try something like this:
<c:forEach
${tier}
</c:forEach>
I don't know if param.tier will work as a collection. If it doesn't, you could use something like ${requestScope.tier}. Give it a try
show more
9 years ago
JSP
Passing JSP variable to Javascript
Use JSTL instead of scriptlets
Try the following:
var str = "${crid}";
make sure JSTL taglib is declared before using it inside <script> tags
show more
9 years ago
JSP
dojo prevent cache error+struts 1.2
How is preventCache the problem here? That message doesn't even mention any error produced by using preventCache.
preventCache is just a parameter passed in the url.
Use Firebug to see what it prints out, that should help you.
show more
9 years ago
Struts
Resize TextBox automatically in struts
As far as I know, there's not an easy way to do that once the page has been rendered. What you could do if you really want to have large textboxes is to get the size of your string and then pass that number as a parameter to the JSP, that way your textbox will always be big enough for your string:
For instance:
<input type="text" name="mystring" size="${requestScope.myCustomSize}" />
Your number could be smaller/bigger than the text length and you can play around with it until you get it right. But I'm pretty sure you can figure that out.
Cheers.
"Sorry, I misunderstood your problem. You're saying you want to resize the textbox as you type the string. That's more complex. The example above only applies when you already have a string coming from the request."
show more
9 years ago
Struts
replacing scriplet in struts2 jsp page
I agree with David that all complex operations should be handled in Java classes. Also, don't use scriptlets in JSP, use JSTL tags for cleaner code. If you really need to catch an exception in a JSP, you can use JSTL c:catch tag.
For example:
<c:catch <fmt:parseDate </c:catch> <c:if <jsp:forward <jsp:param </jsp:forward> </c:if>
show more
9 years ago
Struts | https://coderanch.com/u/206916/Roberto-Hernandez | CC-MAIN-2018-39 | refinedweb | 1,895 | 56.66 |
Java expert Geoff Friesen shows how to perform image scaling by using one of the drawImage methods in the Abstract Windowing Toolkit's Graphics class.
Download a zip containing the source files for this article.
Several drawImage methods can be called to perform scaling prior to drawing an image. To demonstrate how this works, Listing 1 presents source code to an ImageScale applet. This applet draws an original image along with a scaled-down version of the image.
Listing 1 The ImageScale applet source code
7// ImageScale.java import java.awt.*; import java.applet.Applet; import java.awt.image.ImageObserver; public class ImageScale extends Applet { Image im; public void init () { im = getImage (getDocumentBase (), "twain.jpg"); } public void paint (Graphics g) { if (g.drawImage (im, 0, 0, this)) { int width = im.getWidth (this); int height = im.getHeight (this); g.drawImage (im, width, 0, width + width / 2, height / 2, 0, 0, width, height, this); } } }
ImageScale takes advantage of drawImage returning a Boolean true value after the original image is completely loaded. After it's loaded, the width and height of this image are obtained by calling Image's getWidth and getHeight methods. These methods take an ImageObserver argument—an object that implements the ImageObserver interface—and return -1 until the producer has produced the width/height information. Because they're not called until drawImage returns true, getWidth and getHeight are guaranteed to return the image's width and height. A second version of drawImage (with 10 arguments) is called to load and draw the scaled image.
Scaling is achieved by dividing the target image's lower-right corner coordinates by a specified value. ImageScale divides these coordinates by 2. Figure 1 shows the result.
ImageScale shows original and scaled-down images of Mark Twain (an early American author and humorist).
The Image class provides the getScaledInstance method for generating a prescaled version of an image. Instead of calling a drawImage method to scale and then draw, you can call getScaledInstance to prescale and then a drawImage method to only draw. This is useful in situations in which calling drawImage to scale and then draw results in a degraded appearance (because scaling takes time).. | http://www.informit.com/articles/article.aspx?p=21112 | CC-MAIN-2017-26 | refinedweb | 361 | 57.77 |
Writing Clean Code
Jason McCreary
Aug 14 '17
I recently started a new job. With every new job comes a new codebase. This is probably my twentieth job. So I've seen a lot of codebases.
Unfortunately they all suffer from the same fundamental issue - inconsistency. Likely the result of years of code patching, large teams, changing hands, or all of the above.
This creates a problem because we read code far more than we write code. As I read a new codebase these inconsistencies distract me from the true code. My focus shifts to the mundane of indentation and variable tracking instead of the important business logic.
Over the years, I find I boy scout a new codebase in the same way. I apply three simple practices to clean up the code and improve its readability.
To demonstrate, I’ll apply these to the following, real-world code I read just the other day.
function check($scp, $uid){ if (Auth::user()->hasRole('admin')){ return true; } else { switch ($scp) { case 'public': return true; break; case 'private': if (Auth::user()->id === $uid) return true; break; default: return false; } return false; } }
Adopt a code style
I know I’m the 1,647th person to say, “format your code”. But it apparently still needs to be said. Nearly all of the codebases I’ve worked on have failed to adopt a code style. With the availability of powerful IDEs, pre-commit hooks, and CI pipelines it requires virtually no effort to format a codebase consistently.
If the goal is to improve code readability, then adopting a code style is the single, best way to do so. In the end, it doesn’t matter which code style you adopt. Only that you apply it consistently. Once you or your team agrees upon a code style, configure your IDE or find a tool to format the code automatically.
Since our code is PHP, I chosen to adopt the PSR-2 code style. I used PHP Code Beautifier within PHPCodeSniffer to automatically fix the code format.
Here's the same code after adopting a code style. The indentation allows us to see the structure of the code more easily.
function check($scp, $uid) { if (Auth::user()->hasRole('admin')) { return true; } else { switch ($scp) { case 'public': return true; break; case 'private': if (Auth::user()->id === $uid) { return true; } break; default: return false; } return false; } }
Naming things
properly clearly
Yes, something else you’ve heard plenty. I know naming things is hard. One of the reasons it’s hard is there are no clear rules about naming things. It’s all about context. And context changes frequently in code.
Use these contexts to draw out a name. Once you find a clear name, apply it to all contexts to link them together. This will create consistency and make it easier to follow a variable through the codebase.
Don't worry about strictly using traditional naming conventions. I often find codebases mix and match. A clear name is far more important than
snake_case vs
camelCase. Just apply it consistently within the current context.
If you’re stuck, use a temporary name and keep coding. I’ll often name things
$bob or
$whatever to avoid getting on stuck on a hard thing. Once I finish coding the rest, I go back and rename the variable. By then I have more context and have often found a clear name.
Clear names will help future readers understand this code more quickly. They don’t have to be perfect. The goal is to boost the signal for future readers. Maybe they can incrementally improve the naming with their afforded mental capacity.
After analyzing this code, I have more context to choose clearer names. Applying clear names not only improves readability, but boosts the context making the intent of the code easier to see.
function canView($scope, $owner_id) { if (Auth::user()->hasRole('admin')) { return true; } else { switch ($scope) { case 'public': return true; break; case 'private': if (Auth::user()->id === $owner_id) { return true; } break; default: return false; } return false; } }
Avoid Nested Code
There are some hard rules regarding nested code. Many developers believe you should only allow one nesting level. In general, I tend to ignore rules with hard numbers. They feel so arbitrary given code is so fluid.
It's more that nested code is often unnecessary. I have seen the entire body of a function wrapped in an
if. I have seen several layers of nesting. I have literally seen empty
else blocks. Often adding guard clauses, inverting conditional logic, or leveraging
return can remove the need to nest code.
In this case, I'll leverage the existing
return statements and flip the
switch to remove most of the nesting from the code.
function canView($scope, $owner_id) { if ($scope === 'public') { return true; } if (Auth::user()->hasRole('admin')) { return true; } if ($scope === 'private' && Auth::user()->id === $owner_id) { return true; } return false; }
In the end, coding is writing. As an author you have a responsibility to your readers. Maintaining a consistent style, vocabulary, and flow is the easiest way to ensure readability. Remove or change these and maintain readability you will not.
Want to see these practices in action? I'm hosting a free, one-hour workshop where I'll demo each of these practice, and more, through live coding. Sign up to secure your spot.
What do you think of the new Go logo?
I got a hilarious message from my friend about Go's new look & logo announc...
Nice Article,
I think that example can be shorter (avoiding unnecessary declarations and/or sentences) also is mixing
snake casewith
camel caseand is not a good practice.
Thanks for share!
I don't consider a single
returnwith a compound condition readable. In general, simply having less lines doesn't improve code.
I agree with Marco. To me "if (boolean-expression) return true" is a definite code smell. I would format the compound boolean expression so that each part has a line of its own though.
That's interesting you consider it a code smell. I have some examples for future articles where I explore this raw boolean
returnstatements.
In this case, I still wouldn't pack them down into one. At least not without some intermediate steps to improve the readability beyond just line breaks.
I agree with Marco, even we can do more functions:
That way is easy to read. Obviously, that can be refactored.
I'll explore this in a future post.
As a tangent, I never understood developers need to wrap the entire compound condition in parenthesis.
I'd agree with carlos.js' solution, albeit each of the conditions being on separate lines. This supports the functions being just return statements, and giving clearer meaning (context) to the conditions you're checking. Also, it completely eliminates the branch logic in a function designed purely for information, a good nod to readable code.
The wrapping parenthesis must be a typo as there's no corresponding closing outer parenthesis. But that is an example of why you shouldn't wrap the condition in parentheses. There is another issue with Carlos.js' example, $scope is no longer used which means this must be a function in a class which means all those nice function calls are missing
$this->
100% agree that conditional returns of booleans is a smell, generally one of the first things I look for in code review.
Exactly, packing conditionals into a single statement just to reduce the number of lines doesn't necessarily make the code more readable. I avoid compound conditions as much as possible. If I have to absolutely use them, I try to wrap them in a function with a name that better documents its purpose. Thanks for the great article.
I agree with you. I also avoid compound conditions as much as possible. They can easily introduce silent bugs into your code and there's the mental Overhead that comes with them When reading the code Compared to simple if Statements
Also the code with if statements is more maintainable and open to new features/improvements since each decision has its own block. For example you could easily add a Log message for each condition... Can't say the same for a single return statement.
Any programmer. (Even a newbie) can easily grok the if statements in one glance
At least for this example, is a good option use a single
returnstatement, in case that we have more comparisons or complex ones would be better split on several statements.
The code is readable also I checked the code using these pages: phpcodechecker.com/ and piliapp.com/php-syntax-check/, it doesn't have issues.
In general concise code can be unreadable. But this example has the same content as the code in the blog post. There are more lines, brackets and boilerplate, but there is no additional information.
And I'd prefer getting rid of mixed cases, too. "Mix cases but be consistent" is not an option for me ;)
For me, I strongly agree with you, having 15 clean lines is much better than one condensed line. Consider someone new reading the both codebases: one look at the 15 lines will be enough to read and understand it. While they will struggle to evaluate all the booleans in their heads in the second option. Most probably will take longer to grasp.
I think that all these attempts to reduce the character count reduce the readability.
Making a compound like this, split across several lines, means that the function as a whole will take someone longer to scan.
There is no benefit to it.
The initial example really separates the 3 ways you can "view".
The shorter example still does that, but the condensing makes it harder to separate the content IMO. For instance if the 3rd line was a bit longer that the if check needed two lines, it might require some more mental power to keep things separate in your head. First way wouldn't have had this problem as much.
That said, to support at a glance, what does this do, I may opt to pull the conditions into their own functions such that the check becomes:
return IsPublic(...) || IsAdmin(...) || IsOwner(...);
At a glance I can tell what my conditions are for when I canView. It captures essentially the requirements in very plain english. If I care about how those rules are defined I can look into each one explicitly to find what I might be looking for.
Put it this way - I can read and understand the verbose example on my phone. Your condensed version takes longer for me to parse, partly because it's too dense to easily separate out the parenthesised conditions and partly because I have to scroll horizontally back and forth to even read it.
because i don't trust frameworks blindly:
taking advantages of boolean shortcuts : assuming that the owner of content is always allowed to view whatever $scope it is
To my eyes this is totally unreadable. The whole point was to make it easy to read and comprehend, not to just make it short for the sake of shortness.
Ungly code 😡
?
You are totally right!
Coding style is a pretty big thing, but most developers get this right in the first year. Some languages even come with their own tools to enforce style guides. Using Pettier in my JavaScript projects really helped me to stop worrying.
Naming things is generally a bit harder, I think, but many developers really use bad names for things. Strange abbreviations or single character variables just because they think a 10 char name would be ugly... because they want to reduce package size, etc. So while I think naming is sometimes really hard, 80% of all code can really benefit from simply writing out everything.
Avoiding nesting is really important and often not followed by even experienced developers. Probably because, as you get better, you think you can handle things better, haha. I think it should be kept in mind over the whole process, not only for conditionals or functions, but also for objects and general software architecture.
I think this depends heavily on company culture. I know plenty of PHP developers who have been at it for well over a decade who don't take coding style seriously.
Ah yes, I saw some 'nice' PHP codebases in my time too :)
If you use C#, you can add the StyleCop.Analyzers nuget package and it highlights inconsistencies. It also comes with code fixes that allow you to fix all instances of a violation in the entire solution with one click.
Then there's static code analysis. Between the two, all that is really left to the programmer is models, appropriate layering of your code, security, performance and business logic... well... on the backend anyway. article brings up one of my Pet Peeves - inheriting or resurrecting projects that have very poor coding style and confusing or non-existent naming conventions. I quit one job because of it, and have completely reorganized a project in the 2nd job. At some point you just have to come to terms with either making wholesale changes to make it readable or abandoning the project because it can never be maintained. If a the sourcecode doesn't have a good, solid style and use understandable naming conventions, the code is crap, full of bugs, and a maintenance nightmare. Always!
Inline conditions inside if statements discard the opportunity for a name, as a maintainer I care more about the intent of the code at the time of creation than its implementation, please keep telling me intent through names at every opportunity
By breaking everything down I wonder if $is_private is even required?
Hi Jason, great article congratulations.
By the way, i did not see anything commenting about this so there it go, you can clean even more replacing the 3
ifswith a single
switchusing multiple case validation like this;
`
let me know what you think.
The original code (from Part 1) was a
switchstatement. Personally, I don't find
switchstatements very readable. Especially with conditions in the
case.
Cool!
I prefer "switch" because i like to write
return trueonly one time. 😁
I don't think a return before the function ends is good code. A function or method should only have one return.
function canView($scope, $ownerId) {
$canView = $scope === 'public';
if (! $canView) {
$canView = Auth::user()->hasRole('admin');
}
if (! $canView) {
$canView = $scope === 'private' && Auth::user()->id === $ownerId;
}
return $canView;
}
As noted in the article, I don't believe in only one return. I admit there are trade-offs on both sides, in this case you are tracking
$canViewwhich carries it's own complexity.
BTW: can somebody explain which keyword are starting and ending code snippets?
Yes! I Totally agree with you. The code you provide is a perfect example.
I also consider variables should be relevant, short doesn't mean better, even with database tables and columns. On my last work they use "a_1, a_2, b_c" for clientes names!! D:
always consider that at some point on time you will forget about the code and you will need to back and read it again, so keep it clear. Also for new co-workers.
Excellent article, and very good points!
In some languages, the nesting is about more than just style. Branch prediction and instruction cache misses (from jump instructions) are very real things, and they can completely tank performance when structured incorrectly.
When possible, I prefer to use conditionals to catch
off-cases, rather than the most frequent case, simply because that takes the best advantage of the performance gains from the branch prediction. The computer can get used to the condition evaluating
false, which it will most of the time; if that prediction misses, I'm already going to have to deal with additional jump instructions and whatnot to handle the off-case, so performance can reasonably be considered (somewhat) shot at that point anyway. :wink:
On an unrelated subject, I also advocate commenting intent as a part of clean code. We should always be able to tell what the code does, but why is not something the code naturally expresses. (I discuss that more in my article, Your Project Isn't Done Yet).
Nice job, I've been liking these kind of posts you've been doing lately, and your final function is way better.
However, I feel like playing along, too. How about this? :)
I don't know the context, but I'm pretty sure it doesn't even matter if the
$scopeis
'private', if the current user is the owner.
Consistently formatted code is surely ideal. But previous efforts I've been a part of have suffered from some combination of the following:
So I've basically landed here: to all who are driven to format new or legacy code, go nuts! People tend to fall in line if there's already momentum.
However: make any formatting-only changes In. Their. Own. Commit. If someone has to review a commit where the whole file changed to figure out why a bug was introduced, that will be a huge fail...
Agreed about momentum and separate commit.
However, most of your points are moot when automating this process with a formatter/linter as suggested.
If it's a new code base or new file, agreed. But what about modifying a legacy file to introduce/change actual functionality -- the automated formatter/linter may obscure the actual changes with a complete reformat, correct?
To your point, do it in a separate commit. Run the formatter/linter one time manually across the entire codebase. This will provide a basis (momentum) and from then on only changed files need to be formatted/linted (which could be automated).
Seems like you don't agree with any of the post then…
I agree with the idea of clean code in general. And I agree that code example at the end of an article became much better than it was before. And I just added some more things I assume as "clean code" principles.
Fair. Nevertheless, they contradict mine. One of the contributing factors of messy code is that everyone disagrees on "clean code".
For clarity, I don't believe in a having single return statement. And there is nothing wrong with temporarily naming a variable something obviously incomplete in the spirit of maintaining momentum.
Couldn't agree more. Was very close to writing my own article emphasizing the need to be clear when writing code.
To be more specific, the guidelines you iterated, help read code as if it was a story, or a self explanatory recipe. Bad naming often helps find bad code separation patterns, while nesting tends to make code hard to follow. Very good post!
Sorry but your final "avoid nested code" alters the original method:
1) "owner_id" was never compared to anything in the original code, but you compared with "user()->id" field. In the original code it was just checked to be Valid/NonNull.
2) you never checked if "Auth::user()" returns a NOT NULL value before use it, so your code could give a NullPointer Exception if it returns NULL and you directly access to "->id" field.
You're right. I corrected the initial examples.
As far as the
nullcheck, it is indeed missing. But to your point, adding it would alter the existing behavior. I have left it off for simplicity.
I agree that writing a clean code is necessary. But I have something to add:
1) Decide whether you're using a camel case or not and stick to it, don't mix it up (I personally prefer using camel case in my projects).
2) Only one return statement per function - makes it much easier to read and debug, less likely to have hidden bugs.
3) I don't agree that naming a variable $bob or $whatever to rename it later is a good solution, you could easily forget to do this or be to "tired and lazy" after finishing with the feature. I always name variables clearly from the moment I create it, yes, it could be not easy at first, but you get used to it and it becomes automatic. And if later you decide that you don't like the name, you could always change it, but even if you don't - it still would be meaningful.
4) Also, it doesn't feel right when I see OOP mixed up with functional programming.
Here's my variation of this code:
To get too granular for anyone's good- can you say why the break in the private case of the switch is indented but not in the public case? Guessing it has something to do with the one liner vs the logic, but curious to see a professional's reasoning?
you're right, this makes the programmer to code more and even make them fall in love and easy to spot the error and the understanding of the code will be ease for other programmer to testify it ! .
And naming the variable is toughest job in the world .ahaha. seriously though ,many are lazy to naming the lengthy variable. specially in upperCamel letter . I practiced to naming the variable in upperCamel even-though it was long ,its hard to follow first after being practiced with that now its easy to naming the variable ,I don't worry how much the length of the variable name because u wont forgot during the middle of the code ,I dont need to go check the top of the code to refer our variable name.
I disagree that formatting, and formatting consistency, is so important. I've seen enough perfectly formatted garbage code and poorly formatted clean code to doubt the major significance of formatting. I will agree that it makes legible code even easier to read, but I disagree that on its own it brings anything.
Too often I see projects start to go overboard with their style guidelines. They can get in the way and even consume a project.
Defintily good names help. I don't think we should blame people for having bad names though. It's a very contextual thing. While working on a piece of code one name seems totally appropriate, but later, when missing context, it seems a bit lacking. But we can't just add full context to all names, we have to rely on people understanding the surrounding code. Otherwise we'd end up with ridiculously long names for everything -- and they still wouldn't be enough.
Avoiding nesting code is good. But alas, there are a camp of people that are, for some reason, utterly opposed to early returns and break statements. Make sure you show that piece of code during an interview to avoid hiring those people! :)
1 week I read something about clean code in python , and the PEP8 style guide , i learn about it and start practice now i'm using Pylinter for sublime text and found it very cool it help me to write clean and readable code and discover that the code i have wrote before was very dirty and ugly,.....
I think that example can be shorter too!
function canView($scope, $ownerId) {
return ($scope === 'public' ||
Auth::user()->hasRole('admin') ||
($scope === 'private' && Auth::user()->id === $ownerId));
}
As noted in other comments, it's not about shorter, it's about more readable.
Also, adding a space before and after some arguments inside parentheses increases visibility.
This is a beautiful post. Thanks. I'd keep this in mind.
Simple and easy tips to create maintainable code.
I would say that we should think about 'write code as a document'
Great read. This might push me over the edge to actually start doing it. Been thinking about it for a while to have 1 style. | https://dev.to/gonedark/writing-clean-code | CC-MAIN-2018-30 | refinedweb | 3,979 | 64.2 |
spalte1 spalte2
wert11 wert12
wer21 "wer
t22"
[download]
tested with MS Excel 2007
What about accessing Excel directly?. I'm using WIN32::OLE, and the following code gives me a multi-line-entry in an Excel-cell:
$WPsheet->Range("B2")->{'Value'} = "This is\na test.";
[download]
You could upgrade to a CSV file (possibly even using a tab as the column delimiter), which allows you to quote cell contents and thereby including newlines.
You could also try to just use a standalone return ("\r") in the tab delimited file. No garantees, it could work, but it's an extremely dangerous hack — you never know if somebody editing the file in a text editor could ruin it for you.
One world, one people
Line 1"\n"Line 2"\n"Line 3......
[download]
foo_a1 foo_b1
foo_a2 fo_b2
foo_a3 "foo_b3
foo_b3_2"
foo_a4 foo_b4
[download]
All, got this sorted out today, looks like years late but ok.
Correct syntax to make this work is as follows:
Print this out to a csv file, open the csv with excel, auto adjust the columns, then auto adjust the row height and you will see the results.
This works every time for Perl v5.10.1 and Excel 2007
the problem I have found in trying to add Alt+Enter into my worksheets from perl is that when you open the worksheet, Excel does not always auto expand the character. | http://www.perlmonks.org/?node_id=895249 | CC-MAIN-2016-18 | refinedweb | 233 | 66.47 |
Learn how to build a web server with the ESP8266 NodeMCU to display sensor readings in gauges. As an example, we’ll display temperature and humidity from a BME280 sensor in two different gauges: linear and radial. You can easily modify the project to plot any other data. To build the gauges, we’ll use the canvas-gauges JavaScript library.
We have a similar tutorial for the ESP32 board: Web Server – Display Sensor Readings in Gauges
Project Overview
This project will build a web server with the ESP8266 that displays temperature and humidity readings from a BME280 sensor. We’ll create a linear gauge that looks like a thermometer to display the temperature, and a radial gauge to display the humidity.
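Before wiring anything up, it helps to see the shape of the client-side gauge code. The sketch below shows how canvas-gauges is typically configured from a plain options object — note that the canvas IDs, dimensions, and value ranges here are illustrative assumptions, not values taken from this project:

```javascript
// Illustrative canvas-gauges setup — the element IDs, sizes, and value
// ranges below are assumptions for this sketch, not the project's exact
// settings. canvas-gauges builds each gauge from a plain options object.
const temperatureGaugeOptions = {
  renderTo: 'gauge-temperature', // assumed <canvas id="gauge-temperature">
  units: '°C',
  minValue: 0,
  maxValue: 40,
  width: 120,
  height: 400,
};

const humidityGaugeOptions = {
  renderTo: 'gauge-humidity', // assumed <canvas id="gauge-humidity">
  units: '%',
  minValue: 0,
  maxValue: 100,
  width: 300,
  height: 300,
};

// Defensive helper: keep a reading inside the gauge's configured range
// before assigning it (not required by the library, just tidy).
function clampToGauge(value, opts) {
  return Math.min(opts.maxValue, Math.max(opts.minValue, value));
}

// Browser-only: create the gauges once the canvas-gauges script is loaded.
if (typeof LinearGauge !== 'undefined' && typeof RadialGauge !== 'undefined') {
  const tempGauge = new LinearGauge(temperatureGaugeOptions).draw();
  const humGauge = new RadialGauge(humidityGaugeOptions).draw();
  // A live update is a single property assignment, e.g.:
  // tempGauge.value = clampToGauge(23.4, temperatureGaugeOptions);
}
```

In the finished project the canvas-gauges script (gauge.min.js) has to be loaded before this code runs — from LittleFS or a CDN, depending on how you organize the files.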
Server-Sent Events
The readings are updated automatically on the web page using Server-Sent Events (SSE).
To learn more about SSE, you can read:
Files Saved on the Filesystem
To keep our project better organized and easier to understand, we’ll save the HTML, CSS, and JavaScript files to build the web page on the board’s filesystem (LittleFS).
Prerequisites
Make sure you check all the prerequisites in this section before continuing with the project.
1. Install ESP8266 Board in Arduino IDE
2. Filesystem Uploader Plugin
To upload the HTML, CSS, and JavaScript files to the ESP8266 filesystem (LittleFS), we’ll use a plugin for Arduino IDE: LittleFS Filesystem Uploader. Follow the next tutorial to install the filesystem uploader plugin:
If you’re using VS Code with the PlatformIO extension, read the following tutorial to learn how to upload files to the filesystem:
3. Installing Libraries
To build this project, you need to install the following libraries:
- Adafruit_BME280 (Arduino Library Manager)
- Adafruit_Sensor library (Arduino Library Manager)
- Arduino_JSON library by Arduino version 0.1.0 (Arduino Library Manager)
- ESPAsyncWebServer (.zip folder);
- ESPAsyncTCP (.zip folder).
You can install the first three libraries using the Arduino Library Manager. Go to Sketch > Include Library > Manage Libraries and search for the libraries’ names.
The ESPAsyncWebServer and ESPAsyncTCP libraries aren’t available to install through the Arduino Library Manager, so you need to copy the library files to the Arduino installation Libraries folder. Alternatively, download the libraries’ .zip folders, and then, in your Arduino IDE, go to Sketch > Include Library > Add .zip Library and select the libraries you’ve just downloaded.
Installing Libraries (VS Code + PlatformIO)
If you’re programming the ESP8266 using PlatformIO, you should add the following lines to the platformio.ini file to include the libraries, change the Serial Monitor speed to 115200, and use LittleFS for the filesystem:
monitor_speed = 115200
lib_deps =
    ESP Async WebServer
    arduino-libraries/Arduino_JSON @ 0.1.0
    adafruit/Adafruit BME280 Library @ ^2.1.0
    adafruit/Adafruit Unified Sensor @ ^1.1.4
board_build.filesystem = littlefs
Parts Required
To follow this tutorial you need the following parts:
You can use any other sensor, or display any other values that are useful for your project. If you don’t have the sensor, you can also experiment with random values to learn how the project works.
You can use the preceding links or go directly to MakerAdvisor.com/tools to find all the parts for your projects at the best price!
Schematic Diagram
We’ll send temperature and humidity readings from a BME280 sensor. We’re going to use I2C communication with the BME280 sensor module. For that, wire the sensor to the default ESP8266 SCL (GPIO 5) and SDA (GPIO 4) pins, as shown in the following schematic diagram.

HTML File

Copy the following to the index.html file. It defines the web page structure; the JavaScript file loaded at the bottom handles the web server responses and events and creates the gauges.
<!DOCTYPE html> <html> <head> <title>ESP IOT DASHBOARD</title> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="icon" type="image/png" href="favicon.p"> <script src=""></script> </head> <body> <div class="topnav"> <h1>ESP WEB SERVER GAUGES</h1> </div> <div class="content"> <div class="card-grid"> <div class="card"> <p class="card-title">Temperature</p> <canvas id="gauge-temperature"></canvas> </div> <div class="card"> <p class="card-title">Humidity</p> <canvas id="gauge-humidity"></canvas> </div> </div> </div> <script src="script.js"></script> </body> </html>
The HTML file for this project is very simple. It includes the JavaScript canvas-gauges library in the head of the HTML file:
<script src=""></script>
There is a <canvas> tag with the id gauge-temperature where we’ll render the temperature gauge later on.
<canvas id="gauge-temperature"></canvas>
There is also another <canvas> tag with the id gauge-humidity, where we’ll render the humidity gauge later on.
<canvas id="gauge-humidity"></canvas>
CSS File
Copy the following styles to your style.css file. It styles the web page with simple colors and styles.
html {
  font-family: Arial, Helvetica, sans-serif;
  display: inline-block;
  text-align: center;
}
h1 {
  font-size: 1.8rem;
  color: white;
}
p {
  font-size: 1.4rem;
}
.topnav {
  overflow: hidden;
  background-color: #0A1128;
}
body {
  margin: 0;
}
.content {
  padding: 5%;
}
.card-grid {
  max-width: 1200px;
}
JavaScript File (creating the gauges)
Copy the following to the script.js file.
// Get current sensor readings when the page loads window.addEventListener('load', getReadings); //(); // Function to get current readings on the webpage when it loads for the first time('new_readings', function(e) { console.log("new_readings", e.data); var myObj = JSON.parse(e.data); console.log(myObj); gaugeTemp.value = myObj.temperature; gaugeHum.value = myObj.humidity; }, false); }
Here’s a summary of what this code does:
- initializing the event source protocol;
- adding an event listener for the new_readings event;
- creating the gauges;
- getting the latest sensor readings from the new_readings event and display them in the corresponding gauges;
- making an HTTP GET request for the current sensor readings when you access the web page for the first time.
Get Readings
When you access the web page for the first time, we’ll request the server to get the current sensor readings. Otherwise, we would have to wait for new sensor readings to arrive (via Server-Sent Events), which can take some time depending on the interval that you set on the server.
Add an event listener that calls the getReadings function when the web page loads.
// Get current sensor readings when the page loads window.addEventListener('load', getReadings);
The window object represents an open window in a browser. The addEventListener() method sets up a function to be called when a certain event happens. In this case, we’ll call the getReadings function when the page loads (‘load’) to get the current sensor readings.
Now, let’s take a look at the getReadings function. Create a new XMLHttpRequest object. Then, send a GET request to the server on the /readings URL using the open() and send() methods.
function getReadings() {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/readings", true);
  xhr.send();
}
When we send that request, the ESP will send a response with the required information. So, we need to handle what happens when we receive the response. We’ll use the onreadystatechange property that defines a function to be executed when the readyState property changes. The readyState property holds the status of the XMLHttpRequest. The response of the request is ready when the readyState is 4, and the status is 200.
- readyState = 4 means that the request finished and the response is ready;
- status = 200 means “OK”
So, the request should look something like this:
function getStates(){
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function() {
    if (this.readyState == 4 && this.status == 200) {
      … DO WHATEVER YOU WANT WITH THE RESPONSE …
    }
  };
  xhr.open("GET", "/states", true);
  xhr.send();
}
The response sent by the ESP is the following text in JSON format (those are just arbitrary values).
{ "temperature" : "25.02", "humidity" : "64.01", }
We need to convert the JSON string into a JSON object using the parse() method. The result is saved on the myObj variable.
var myObj = JSON.parse(this.responseText);
The myObj variable is a JSON object that contains the temperature and humidity readings. We want to update the gauges values with the corresponding readings.
Updating the value of a gauge is straightforward. For example, our temperature gauge is called gaugeTemp (as we’ll see later on), to update a value, we can simply call: gaugeTemp.value = NEW_VALUE. In our case, the new value is the temperature reading saved on the myObj JSON object.
gaugeTemp.value = myObj.temperature;
It is similar for the humidity (our humidity gauge is called gaugeHum).
gaugeHum.value = myObj.humidity;
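These parsing and assignment steps can be exercised on their own. A minimal sketch, assuming a sample response string like the one above (plain variables stand in for the gauge objects):

```javascript
// Sample response text, shaped like what the ESP8266 returns on /readings.
const responseText = '{"temperature":"25.02","humidity":"64.01"}';

// Convert the JSON string into a JSON object.
const myObj = JSON.parse(responseText);

// The readings arrive as strings; Number() converts them for use as gauge values.
const tempValue = Number(myObj.temperature);
const humValue = Number(myObj.humidity);
```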
Here’s the complete getReadings() function:

function getReadings() {
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function() {
    if (this.readyState == 4 && this.status == 200) {
      var myObj = JSON.parse(this.responseText);
      console.log(myObj);
      gaugeTemp.value = myObj.temperature;
      gaugeHum.value = myObj.humidity;
    }
  };
  xhr.open("GET", "/readings", true);
  xhr.send();
}
Creating the Gauges
The canvas-gauges library allows you to build linear and radial gauges to display your readings. It provides several examples, and it is very simple to use. We recommend taking a look at the documentation and exploring all the gauges' functionalities:
Temperature Gauge
The following lines create the gauge to display the temperature.
//();
To create a new linear gauge, use the new LinearGauge() method and pass as an argument the properties of the gauge.
var gaugeTemp = new LinearGauge({
In the next line, define where you want to put the chart (it must be a <canvas> element). In our example, we want to place it in the <canvas> HTML element with the gauge-temperature id—see the HTML file section.
renderTo: 'gauge-temperature',
Then, we define other properties to customize our gauge. The names are self-explanatory, but we recommend taking a look at all possible configurations and changing the gauge to meet your needs.
In the end, you need to apply the draw() method to actually display the gauge on the canvas.
}).draw();
Pay special attention: if you need to change the gauge range, you must change the minValue and maxValue properties:
minValue: 0,
maxValue: 40,
You also need to adjust the majorTicks values for the values displayed on the axis.
majorTicks: [ "0", "5", "10", "15", "20", "25", "30", "35", "40" ],
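For instance, if you convert the readings to Fahrenheit on the server, the range and the ticks have to be updated together. A sketch with an assumed 30 to 110 °F range (pick whatever range suits your climate):

```javascript
// Hypothetical Fahrenheit configuration: minValue/maxValue define the range,
// and majorTicks lists the axis labels, which should span that same range.
const fahrenheitOptions = {
  renderTo: 'gauge-temperature', // same <canvas> id as in index.html
  minValue: 30,
  maxValue: 110,
  majorTicks: ["30", "40", "50", "60", "70", "80", "90", "100", "110"],
};
```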
Humidity Gauge
Creating the humidity gauge is similar, but we use the new RadialGauge() function instead and it is rendered to the <canvas> with the gauge-humidity id. Notice that we apply the draw() method on the gauge so that it is drawn on the canvas.
//();
Handle events
Update the readings on the gauges when the client receives the readings on the new_readings event.
Create a new EventSource object and specify the URL of the page sending the updates. In our case, it’s /events.
if (!!window.EventSource) {
  var source = new EventSource('/events');
Once you’ve instantiated an event source, you can start listening for messages from the server with addEventListener().
These are the default event listeners (open, error, and message), as shown here:

source.addEventListener('open', function(e) {
  console.log("Events Connected");
}, false);

source.addEventListener('error', function(e) {
  if (e.target.readyState != EventSource.OPEN) {
    console.log("Events Disconnected");
  }
}, false);

source.addEventListener('message', function(e) {
  console.log("message", e.data);
}, false);
Then, add the event listener for new_readings.
source.addEventListener('new_readings', function(e) {
When new readings are available, the ESP8266 sends an event (new_readings) to the client. The following lines handle what happens when the browser receives that event.
source.addEventListener('new_readings', function(e) {
  console.log("new_readings", e.data);
  var myObj = JSON.parse(e.data);
  console.log(myObj);
  gaugeTemp.value = myObj.temperature;
  gaugeHum.value = myObj.humidity;
}, false);
Basically, print the new readings on the browser console, convert the data into a JSON object and display the readings on the corresponding gauges.
Arduino Sketch
Copy the following code to your Arduino IDE or to the main.cpp file if you’re using PlatformIO.
You can also download all the files here.
/*********
  Rui Santos
  Complete instructions at
  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files.
  The above copyright notice and this permission notice shall be included in all
  copies or substantial portions of the Software.
*********/

#include <Arduino.h>
#include <ESP8266WiFi.h>
#include <ESPAsyncTCP.h>
#include <ESPAsyncWebServer.h>
#include "LittleFS.h"
#include <Arduino_JSON.h>
#include <Adafruit_BME280.h>
#include <Adafruit_Sensor.h>

// Replace with your network credentials
const char* ssid = "REPLACE_WITH_YOUR_SSID";
const char* password = "REPLACE_WITH_YOUR_PASSWORD";

// Create AsyncWebServer object on port 80
AsyncWebServer server(80);

// Create an Event Source on /events
AsyncEventSource events("/events");

// Json Variable to Hold Sensor Readings
JSONVar readings;

// Timer variables
unsigned long lastTime = 0;
unsigned long timerDelay = 30000;

// Create a sensor object
Adafruit_BME280 bme; // BME280 connected to ESP8266 I2C (GPIO 4 = SDA, GPIO 5 = SCL)

// Init BME280
void initBME(){
  if (!bme.begin(0x76)) {
    Serial.println("Could not find a valid BME280 sensor, check wiring!");
    while (1);
  }
}

// Get Sensor Readings and return JSON object
String getSensorReadings(){
  readings["temperature"] = String(bme.readTemperature());
  readings["humidity"] = String(bme.readHumidity());
  String jsonString = JSON.stringify(readings);
  return jsonString;
}

// Initialize LittleFS
void initFS() {
  if (!LittleFS.begin()) {
    Serial.println("An error has occurred while mounting LittleFS");
  }
  Serial.println("LittleFS mounted successfully");
}

// Initialize WiFi
void initWiFi() {
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);
  Serial.print("Connecting to WiFi ..");
  while (WiFi.status() != WL_CONNECTED) {
    Serial.print('.');
    delay(1000);
  }
  Serial.println(WiFi.localIP());
}

void setup() {
  Serial.begin(115200);
  initBME();
  initWiFi();
  initFS();

  // Web Server Root URL
  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
    request->send(LittleFS, "/index.html", "text/html");
  });

  server.serveStatic("/", LittleFS, "/");

  // Request for the latest sensor readings
  server.on("/readings", HTTP_GET, [](AsyncWebServerRequest *request){
    String json = getSensorReadings();
    request->send(200, "application/json", json);
    json = String();
  });

  events.onConnect([](AsyncEventSourceClient *client){
    if(client->lastId()){
      Serial.printf("Client reconnected! Last message ID that it got is: %u\n", client->lastId());
    }
    // Send an event with message "hello!", id set to current millis,
    // and reconnect delay of 10 seconds
    client->send("hello!", NULL, millis(), 10000);
  });
  server.addHandler(&events);

  // Start server
  server.begin();
}

void loop() {
  if ((millis() - lastTime) > timerDelay) {
    // Send Events to the client with the Sensor Readings Every 30 seconds
    events.send("ping", NULL, millis());
    events.send(getSensorReadings().c_str(), "new_readings", millis());
    lastTime = millis();
  }
}
How the code works
Let’s take a look at the code and see how it works to send readings to the client using server-sent events.
Including Libraries
The Adafruit_Sensor and Adafruit_BME280 libraries are needed to interface with the BME280 sensor.
#include <Adafruit_BME280.h> #include <Adafruit_Sensor.h>
The ESP8266WiFi, ESPAsyncWebServer, and ESPAsyncTCP libraries are used to create the web server.
#include <ESP8266WiFi.h> #include <ESPAsyncTCP.h> #include <ESPAsyncWebServer.h>
We’ll use LittleFS to save the files to build the web server.
#include "LittleFS.h"
You also need to include the Arduino_JSON library to make it easier to handle JSON strings.
#include <Arduino_JSON.h>
Network Credentials
Insert your network credentials in the following variables, so that the ESP8266 can connect to your local network using Wi-Fi.
const char* ssid = "REPLACE_WITH_YOUR_SSID"; const char* password = "REPLACE_WITH_YOUR_PASSWORD";
AsyncWebServer and AsyncEventSource
Create an AsyncWebServer object on port 80.
AsyncWebServer server(80);
The following line creates a new event source on /events.
AsyncEventSource events("/events");
Declaring Variables
The readings variable is a JSON variable to hold the sensor readings in JSON format.
JSONVar readings;
The lastTime and the timerDelay variables will be used to update sensor readings every X number of seconds. As an example, we’ll get new sensor readings every 30 seconds (30000 milliseconds). You can change that delay time in the timerDelay variable.
unsigned long lastTime = 0; unsigned long timerDelay = 30000;
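The loop() (shown further down) compares millis() against lastTime instead of calling delay(), so the server stays responsive between updates. The same logic can be sketched outside the Arduino environment with a simulated millisecond counter (the 1-second step is just for illustration):

```javascript
// Simulate the non-blocking timer pattern used in loop().
let lastTime = 0;
const timerDelay = 30000; // send readings every 30 seconds

const eventTimes = [];
for (let now = 0; now <= 90000; now += 1000) { // fake millis() ticking up
  if (now - lastTime > timerDelay) {
    eventTimes.push(now); // here the sketch would call events.send(...)
    lastTime = now;
  }
}
// Events fire roughly every 30 s: at 31000 and 62000 in this simulation.
```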
Create an Adafruit_BME280 object called bme on the default ESP I2C pins.
Adafruit_BME280 bme;
Initialize BME280 Sensor
The following function can be called to initialize the BME280 sensor.
// Init BME280
void initBME(){
  if (!bme.begin(0x76)) {
    Serial.println("Could not find a valid BME280 sensor, check wiring!");
    while (1);
  }
}
Get BME280 Readings
To get temperature and humidity from the BME280 sensor, use the following methods on the bme object:
- bme.readTemperature()
- bme.readHumidity()
The getSensorReadings() function gets the sensor readings and saves them on the readings JSON array.
// Get Sensor Readings and return JSON object
String getSensorReadings(){
  readings["temperature"] = String(bme.readTemperature());
  readings["humidity"] = String(bme.readHumidity());
  String jsonString = JSON.stringify(readings);
  return jsonString;
}
The readings array is then converted into a JSON string variable using the stringify() method and saved on the jsonString variable.
The function returns the jsonString variable with the current sensor readings. The JSON string has the following format (the values are just arbitrary numbers for explanation purposes).
{ "temperature" : "25", "humidity" : "50" }
setup()
In the setup(), initialize the Serial Monitor, Wi-Fi, filesystem, and the BME280 sensor.
void setup() {
  // Serial port for debugging purposes
  Serial.begin(115200);
  initBME();
  initWiFi();
  initFS();
Handle Requests
When you access the ESP8266 IP address on the root / URL, send the text that is stored on the index.html file to build the web page.
server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){ request->send(LittleFS, "/index.html", "text/html"); });
Serve the other static files requested by the client (style.css and script.js).
server.serveStatic("/", LittleFS, "/");
Send the JSON string with the current sensor readings when you receive a request on the /readings URL.
// Request for the latest sensor readings
server.on("/readings", HTTP_GET, [](AsyncWebServerRequest *request){
  String json = getSensorReadings();
  request->send(200, "application/json", json);
  json = String();
});
The json variable holds the return from the getSensorReadings() function. To send a JSON string as response, the send() method accepts as first argument the response code (200), the second is the content type (“application/json”) and finally the content (json variable).
Server Event Source
Set up the event source on the server and add it as a handler:

events.onConnect([](AsyncEventSourceClient *client){
  if(client->lastId()){
    Serial.printf("Client reconnected! Last message ID that it got is: %u\n", client->lastId());
  }
  client->send("hello!", NULL, millis(), 10000);
});
server.addHandler(&events);
Finally, start the server.
server.begin();
loop()
In the loop(), send events to the browser with the newest sensor readings to update the web page every 30 seconds.
events.send("ping",NULL,millis()); events.send(getSensorReadings().c_str(),"new_readings" ,millis());
Use the send() method on the events object and pass as arguments the content you want to send and the name of the event. In this case, we want to send the JSON string returned by the getSensorReadings() function. The send() method accepts a variable of type char, so we need to use the c_str() method to convert the variable. The name of the event is new_readings.
Usually, we also send a ping message every X number of seconds. That line is not mandatory. It is used to check on the client side that the server is alive.
events.send("ping",NULL,millis());
Uploading the Code and Files

After inserting your network credentials in the sketch, upload the code to your board. Then, to upload the HTML, CSS, and JavaScript files, go to Tools > ESP8266 LittleFS Data Upload and wait for the files to be uploaded.
When everything is successfully uploaded, open the Serial Monitor at a baud rate of 115200. Press the ESP8266 EN/RST button, and it should print the ESP8266 IP address.
Demonstration
Open your browser and type the ESP8266 IP address. You should get access to the web page that shows the gauges with the latest sensor readings.
You can also check your gauges using your smartphone (the web page is mobile responsive).
Wrapping Up
In this tutorial you’ve learned how to create a web server to display sensor readings in linear and radial gauges. As an example, we displayed temperature and humidity from a BME280 sensor. You can use those gauges to display any other values that may make sense for your project.
You might also like reading:
- ESP32/ESP8266 Plot Sensor Readings in Real Time Charts – Web Server
- ESP8266 Web Server using Server-Sent Events (Update Sensor Readings Automatically)
Learn more about the ESP8266 with our resources:
Thank you for reading.
53 thoughts on “ESP8266 NodeMCU Web Server: Display Sensor Readings in Gauges”
I had just finished converting the ESP32 code to ESP8266… and then your article popped up 🙂
Anyway, great project and an interesting library. I need to check that library out a bit more, as for now it seems to have problems with negative values. Yes, it is possible to add a 'major tick' of say "-10", but then a value of +3 degrees starts counting from '-10' and is shown on the thermometer at the "-7" tick. I am pretty sure it can be done but I need to dive into it a bit more.
found it. need to adapt minValue
Hi.
Yes, that’s right.
It’s explained in the tutorial how to change the range.
I’m glad you found it!
Regards,
Sara
Tnx. Probably went over that too fast
Not all BME280 sensors work for me with this sketch.
With the BME280I2C.h lib they are recognized on the 8266 or esp32 but
with Adafruit_BME280.h not.
All sensors I have have the same i2c address (0x76) but still
the 6 pin BME280 sensor is not recognized ( VIN-GND-SCL-SDA-CSB-SDO).
But they do work on an Arduino uno, for example.
In your sketch only the model with ( VIN-GND-SCL-SDA) connections works.
That is kinda odd. Does the Adafruit library maybe accidentally expect an SPI device, or is your BME SPI-only perhaps?
On a different note, the Adafruit BME280 library is in fact very inefficient. It reads the temperature, humidity and pressure with 3 separate readings, but in order to read humidity and pressure one needs to start a conversion by reading the temperature. So if you read all 3, the library does 5 readings and discards 2. There are some advantages in that coz if you just want the humidity, one always has a fresh reading, but imho it would be better to have a 'start conversion' command and then pick whatever fresh value you want from the results.
Thanks
Thank you for another great tutorial. I live in North America where we do not use the metric system. Are these the lines I need to change for Fahrenheit readings? Are there any other lines that need changing?
Thanks
// Get Sensor Readings and return JSON object
String getSensorReadings(){
readings[“temperature”] = String(bme.readTemperature());
readings[“humidity”] = String(bme.readHumidity());
String jsonString = JSON.stringify(readings);
return jsonString;
}
Hi.
Yes, you need to convert to Fahrenheit as follows:
readings[“temperature”] = String(1.8*bme.readTemperature()+32);
Then, you need to change the script.js file to adjust the temperature range. It is explained in the tutorial how to change the range.
Regards,
Sara
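The conversion shown in the reply above (1.8 * C + 32) is the standard Celsius-to-Fahrenheit formula; a quick standalone check:

```javascript
// Celsius to Fahrenheit, as used in the modified temperature line.
function toFahrenheit(celsius) {
  return 1.8 * celsius + 32;
}
// toFahrenheit(25) is 77; toFahrenheit(0) is 32.
```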
// Get Sensor Readings and return JSON object
String getSensorReadings(){
readings[“temperature”] = String(bme.readTemperature());
readings[“humidity”] = String(bme.readHumidity());
String jsonString = JSON.stringify(readings);
return jsonString;
}
I would like to use AHT10 instead of the expensive BME, but I get lost in the “Get Sensor Readings and return JSON object” section. Could I count on help?
I am not sure what library you use for the AHT10, but it will probably do a reading like:
hum=aht10.readHumidity(AHTXX_USE_READ_DATA).
The BME280 has a similar instruction:
hum=BME280.readHumidity(), so it is just a matter of substitution.
Thank you! For those in need:
String getSensorReadings(){
sensors_event_t humidity, temp;
aht.getEvent(&humidity, &temp);
readings[“temperature”] = String(temp.temperature);
readings[“humidity”] = String(humidity.relative_humidity);
String jsonString = JSON.stringify(readings);
return jsonString;
For me it was impossible to see any data on the web server, while I could see it in the serial monitor.
Then I followed the suggestion of Wijnand (November 22, 2019) and downgraded ESP8266 to the 2.6.0 version.
Now I see data on the web server, on the smartphone too, but I don't see gauges.
Any suggestions?
Thanks
Renzo
Hi.
That’s weird.
Did you upload all the necessary files to the filesystem?
What web browser are you using?
Regards.
Sara
thanks for the amazing project
whether a switch can be connected to this project so that I can control it through the page, say turn on the led.
thank you,
Hi.
Yes.
You can add a switch. I recommend using websocket for that:
Regards,
Sara
Hello, thanks for the tutorial. I hope I get a really fast response… I'm using 4 ultrasonic sensors on an Arduino Uno board and trying to send the mean value to a NodeMCU, which I would love to display in place of the humidity gauge above, but it's been a week without luck. I need help please… The data displays on my NodeMCU serial monitor but the gauge doesn't respond on the web page.
heres what my code looks like;
String getSensorReadings(){
readings[“Distance”] = String(Serial.read());
String jsonString = JSON.stringify(readings);
return jsonString;
Hi.
Don’t forget that you need to change the function getReadings() on the Javascript file to get the distance, as well as the addEventListener.
Regards,
Sara
Thank you very much… That worked.
Now, the new problem is that… I'm getting either "-1" or fluctuating readings like "13 to 58 to 10 to 208" and so on showing on my gauge.
In my Arduino sketch, using SoftwareSerial, I have:
SoftwareSerial espSerial(2, 3);
void setup(){
espSerial.begin(115200);
}
void loop(){
distance = ultrasonic.read();
espSerial.println(distance);
}
While the NodeMCU has:
SoftwareSerial readerSerial(D6, D5);
String getSensorReadings(){
readings[“Distance”] = String(readerSerial.read());
String jsonString = JSON.stringify(readings);
return jsonString;
Hi!
I tried to modify this for the BMP280 sensor.
This is the modified code:
The code uploads successfully on my ESP8266, but in the browser I have this:
In the serial monitor after rebooting the ESP8266 I have this:
How can I fix this?
Hi.
You probably didn’t upload the files to the filesystem.
Follow all the instructions carefully and it will work:
Regards,
Sara
I don’t have this option in Arduino IDE.
Trying this, but I don’t have this in my Arduino IDE.
Hi.
You need to install the previous version of Arduino IDE (1.8.19) via ZIP file and then install the Filesystem image uploader.
Regards,
Sara
I succeeded with Arduino IDE version 1.8.19!
You are the best!
With Arduino IDE version 2.0.0-rc3 I failed.
Is there an ESP32 version?
Hi! I have a ESP01 with 512KB. When I try to upload any file on data folder, clicking on “ESP8266 LittleFS Data Upload”, I have this report “LittleFS Not Defined for Generic ESP8266 Module, 80 MHz, Flash, Disabled (new aborts on oom), Disabled, All SSL ciphers (most compatible), 32KB cache + 32KB IRAM (balanced), Use pgm_read macros for IRAM/PROGMEM, dtr (aka nodemcu), 26 MHz, 40MHz, DOUT (compatible), 512KB (FS:none OTA:~246KB), 0, nonos-sdk 2.2.1+100 (190703), v2 Lower Memory, Disabled, None, Only Sketch, 115200
“
Hi.
In that error, you can see that the board doesn’t allocate any memory for the filesystem: (FS:none
Go to Tools > Flash size and select one of the 512KB options with some memory for FS.
I hope this helps.
Regards,
Sara
Hummm. Thank you very much! I have read something about that, but did not fully understand it. In "Flash size", we select how the space is divided between the filesystem and Over-The-Air updates. I lost 5 days with that… As it is such a simple thing, I hadn't found the answer. Thank you very much Sara!
Forgot to write that it worked. I changed the option and it worked.
Great!
I’m glad it helped.
Regards,
Sara
Hi there,
this is a very good example for environment testing. We use an MQ-2 and an SHT-30 to get 3 values: temperature, humidity and gas. All works fine, startup is OK and LittleFS is working well with all 3 files from your tutorial.
When we start up the system it boots perfectly and starts reading the sensor values. We opened the serial monitor for checking and placed a few Serial.print() calls to check if the sensor readings are good. Up to this point all is fine; then comes an exception error "Panic core_esp8266_main.cpp:137 __yield" with a near-endless stack listing. Every time we reboot the system it starts up correctly. The moment we start a browser (PC, tablet or smartphone) and type in the given IP of the board (or press F5 to refresh), the whole system crashes again.
The only way we found in endless tries is as follow:
– First start up the browser
– then startup the board – plugin power-supply
– startup will be OK – then F5 in the browser – crashing again
A really sporadic situation. Only (often plugging in/out) the power supply will sometimes (maybe 1 in 20-30 tries) produce a startup with a reaction in the opened browser. This is a really big and frustrating situation.
Does anyone have an idea or solution for this problem? How could the browser start produce such an exception on an 8266-12F?
Have some screenprints to show Arduino-IDE an Serial-Monitor outputs but can’t added to this comment.
Summary:
The system works fine in the Arduino IDE and serial monitor. Specific Serial.print() calls show that the SHT-30 and MQ-2 work well and supply values that are shown in the serial output.
The moment I start up a browser (Firefox, Edge, Chrome) and log in to the IP address of the board, the system crashes and starts up again. But from then on there are no longer any values from the sensors. Only reconnecting the power supply produces a clean startup in the serial monitor.
With an opened browser, many tries of plugging the power in and out will in a random case produce a situation where the opened browser shows the sensor values. If there is then a second client connect, e.g. with a smartphone on the same IP, the system crashes again.
Thanks a lot for reading this, and we hope there is some help on the line.
Thanks, best regrads and stay healthy,
Manfred
Hi.
It may be the case that the libraries you use to read from your sensors are messing up with the AsyncWebServer library because of delays in the library.
So, what’s probably messing up your project are the following lines:
server.on(“/readings”, HTTP_GET, [](AsyncWebServerRequest *request){
String json = getSensorReadings();
request->send(200, “application/json”, json);
json = String();
});
The easiest workaround is to remove that part.
Regards,
Sara
Hi Sara,
thanks for this fast reply. I tried to comment out this code part (server.on…) but it doesn't have any effect. I think this is not a timing problem with the sensor libraries. A few days ago I played around with a project on your tutorial site that uses a single SendHTML() method to build up a webpage to show some sensor values (but I can't find it again – please help me find it). In this method there is some Ajax scripting refreshing the elements for temperature and humidity. This example works really well and reliably.
Now I am in trouble with the SSE version of reading the SHT-30 and the MQ-2. If I work "only" with the serial monitor, all parts act correctly. The output shows all sensor values at any time. When I open a browser and type the IP address to start the gauges view, the system crashes with an exception, restarts, and no more values are shown in the serial monitor or the gauges web view.
What the hell breaks the server down, rebooting in case of an exception? In some cases the system acts correctly, but this is not reproducible (after 50 sheets of A4 paper with notes ;-). I hope there is a chance to solve this problem. Any suggestions to test getSensorReadings() or the setup() before calling server.begin()? Maybe the fault is in loop().
The only question left is: what breaks down the server and throws an exception which brings down the whole system and cuts off any further sensor readings? I thought the readings should work in any case.
Thanks for further thoughts and help….
Best regards,
Manfred
Hi again.
I do think it is an issue with timers with the libraries you’re using. But I can’t be sure because I don’t have a way to test it.
This tutorial doesn’t use server-sent events, but it refreshes the web page every 10 seconds to update the readings:
Regards,
Sara
Hello, I have an encoder on interrupt pins of one ESP8266.
It correctly gives me back the angle values from 0 to 360. I store it in an int variable called pos.
I draw a gauge by myself using canvas, I have this code for drawing the hand:
function drawHand(ctx, pos, length, width) {
ctx.beginPath();
ctx.lineWidth = width;
ctx.lineCap = “round”;
ctx.moveTo(0,0);
ctx.rotate(pos);
ctx.lineTo(0, -100);
ctx.stroke();
ctx.rotate(-pos);
}
I can’t pass the pos value from esp8266 code to script.js.
I tried with this tutorial, but you use
gaugeTemp.value = temp;
how can I set drawHand.pos for my hand?
Thank you
Massimo
Hi there,
your workshops are realy great and some of them could be installed with no problems – thanks for that good stuff.
We have one great question to WiFi-System on the ESP8266 / ESP32 as follow:
To use WiFi the SOC uses the 2.4GHz Band to operate. The question is if this 2.4GHZ-Frequency could be changed? We think that this frequency is generated by PLL-Oscillator in the SOC and driven by operating-system values.
Could this be changed to a lower frequency band width for example with an OS-Update?
Thanks for reply to this question.
Best regards,
Manfred
Hi again,
as we ask for the timer-problem few days ago we now found the problem. In setup() there is a function-call getSensorReadings() to update the values for the first time. This call is embedded in the server.on() call within an request.send() method.
This method-call crashes the function call and the whole setup()-call. By the way we think this is not a timing-problem at all, only the function call (for testing we cleared the getsensorreadings-method to deliver a simle “Hello” so that it couldn’t have timing-delays) kill the setup.
Our solution is to initiate the ESP setup() with hardcoded startup-values that updated with the first (and following) loop() call at all. Mess but simple and clear.
Thanks,
Manfred
Hi Manfred,
I’ve got the same problem: The thing crashes as soon as the Website request an update.
It seem that you found the solution. Could you post the piece of code you changed / inserted ?
Would be a great help!
Hi.
Can you better describe your issue?
What happens exactly to the board? Do you get any errors on the Serial Monitor?
Regards,
Sara
Hi Sara, sorry for the delay in answering your post.
I did get the usual messages after an ESP restart and a very long stack list.
Restart gave 2 as Reset cause and 3,6 as boot mode.
Worked fine when the following lines were commented out:
// Request for the latest sensor readings
// server.on(“/readings”, HTTP_GET, [](AsyncWebServerRequest * request) {
// String json = getSensorReadings();
// request->send(200, “application/json”, json);
// json = String();
// });
One had to wait for the first values to be display for max. 30 seconds but that would be okay.
I did try to reproduce the error yesterday. But, funny thing, now it works fine with the above lines uncommented. Maybe one of the many recent updates fixed the problem?
There is still a minor thing. After some time (about an hour or so), the gauges are displayed with an offset (still within there little windows) to the top and the left and are only partially visible. Only the gauge with the offset is updated. After F5 / refresh display everything is fine again for a while. Tested with Firefox (Windows), might try others browsers (on Windows and Android) later. I’d like to add a screenshot, but now idea how 🙁
Hi.
To share an image, upload the image to google drive, Imgur, or dropbox, and then, share a link to the image source.
Regards,
Sara
Hi Sara, I tried again over the weekend and the crash-effect is back. I have not changed a thing but to add a few serial outputs to get infos about the SDK, reset reason etc.
As long as one doesn’t refresh the browser window (F5), everything is fine. But after a refresh there is this output :
User exception (panic/abort/assert)
————— CUT HERE FOR EXCEPTION DECODER —————
Panic core_esp8266_main.cpp:137 __yield
ctx: sys
sp: 3fffec70 end: 3fffffb0 offset: 0000
3fffec70: 00000000 3ffe85d4 3fff0d64 40205c98
3fffec80: 00000001 000000fe 00000000 00000000
3fffec90: 00000000 00000000 00000000 00000000
3fffeca0: 00000000 00000000 4bc6a7f0 00000000
3fffecb0: 000002ed 0000be0c 3ffef20c 40214516
3fffecc0: 0000be0c 00000000 00000100 40214583
3fffecd0: 3f000039 0000be0c 3ffef20c 40213d55
3fffece0: 000002ed 0000be0c 3ffef20c 4021151d
3fffecf0: 3fffed54 3ffef20c 3ffef20c 40211570
3fffed00: 3fffed54 3ffef0f0 3ffef13c 4020662a
… more stack listings…
————— CUT HERE FOR EXCEPTION DECODER —————
ets Jan 8 2013,rst cause:2, boot mode:(3,6)
load 0x4010f000, len 3460, room 16
tail 4
chksum 0xcc
load 0x3fff20b8, len 40, room 4
tail 4
chksum 0xc9
csum 0xc9
v00059660
~ld
Sdk version: 2.2.2-dev(38a443e)
Core Version: 3.0.2
Boot Version: 31
Boot Mode: 1
CPU Frequency: 80 MHz
Reset reason: Software/System restart
Hi again.
What sensor are you using?
Instead of getting the readings inside the on() method, call the getSensorReadings outside the function and save the results in a global variable. Then, use that variable inside the on() method instead of calling the function. I believe it’s that that is crashing your board.
Regards,
Sara
Hi Greece2001,
your description of the boot-message (Reset-code 2 – BootMode 3,6) is still OK an means in what case the ESP restarted. Best way is to use directly ESP.restart() before using any messaging-loops or -calls.
The visual mess with the gauges are also on our project. At this time we have no solution where this “visual error” comes from. This could be a problem from the gauges code itself. With all browsers (firefox, chrome, edge …) we have the same problem. We think this is a timing-problem in drawing-routines at the gauges-lib. Seems that the redraw-refresh-time is longer as the message-respone-time and therefore comes the destroyed drawing.
Hope someone find the problem soon. This time the gauges-drawing-fault is not for production systems.
Best regards,
Manfred
Hello
Is there a way to replace the BNE280 with DS18B20 to measure the temperature of a swimming pool and stay only with the web page that has the thermometer?
I know you have a project with the DS18B20, but this website is nicer and I wanted to keep the image of the thermometer
Hi.
Yes, you can use only the thermometer part.
Make sure you replace all BME280 parts with the DS18B20.
In the JSON string, send only the temperature reading.
Regards,
Sara
Is it possible to have the gauge show the value of the ADC pin as a value on 0-100% on the ESP8266?
Hello
All codes on this site and other sites are designed for on and off switches.
I want a code that can control the output instantly with a button.
This means that when my finger is on the button, the output is active and when I remove my finger from the button, the output is disabled.
Please guide me.
Hi.
There are many different ways to achieve that.
We have this example:
Regards,
Sara | https://randomnerdtutorials.com/esp8266-web-server-gauges/ | CC-MAIN-2022-40 | refinedweb | 6,378 | 65.93 |
Thank you helloworld922 and oyekunmi, I figured it out now.
This is my code:
import javax.swing.JOptionPane;
import java.util.Date;
public class blahblah {
// *************************************************************************main
...
Yeah, that's what I was doing before but it was not working properly for some reason. I figured out what I was doing wrong now though. Thanks, anyways.
Okay, the code below works fine, it prompts the user to enter a sentence, makes sure it is a proper sentence, then checks how many words there are. What I want it to do is prompt the user over and...
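The poster's code is truncated above, but the validate-then-count logic it describes can be sketched. This is not the original code — the class and method names here are illustrative, and the prompting loop itself (e.g. repeating `JOptionPane.showInputDialog` until the user cancels) is left out so only the counting logic is shown:

```java
// Hypothetical sketch of the sentence check and word count described above.
public class WordCounter {

    // A sentence is treated as "proper" here if it is non-empty
    // and ends with '.', '!' or '?' -- an assumption, since the
    // poster's actual validation rule is not shown.
    public static boolean isProperSentence(String sentence) {
        String t = sentence.trim();
        return !t.isEmpty()
                && (t.endsWith(".") || t.endsWith("!") || t.endsWith("?"));
    }

    // Counts whitespace-separated words in the sentence.
    public static int countWords(String sentence) {
        String trimmed = sentence.trim();
        if (trimmed.isEmpty()) {
            return 0;
        }
        // Split on runs of whitespace so double spaces don't inflate the count.
        return trimmed.split("\\s+").length;
    }
}
```

To prompt repeatedly, this logic would sit inside a `while` loop that calls `JOptionPane.showInputDialog` and exits when the dialog returns `null` (user pressed Cancel).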
I just started programming with Java the other day, so I don't really know all too much. I am making a program that asks you for a day of the month, month, and year, and then tells you what day of... | http://www.javaprogrammingforums.com/search.php?s=a69d8fc409ecc94a4ca0cb0a049aa252&searchid=1361442 | CC-MAIN-2015-06 | refinedweb | 147 | 83.86 |
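One common way to get the day of the week from a day/month/year without relying on `java.util.Date` (much of which is deprecated) is Zeller's congruence. The sketch below is a possible approach, not the poster's program; names are illustrative:

```java
// Day-of-week lookup via Zeller's congruence (Gregorian calendar).
public class DayOfWeek {

    // Zeller's congruence yields h = 0 for Saturday, 1 for Sunday, ...
    private static final String[] NAMES = {
        "Saturday", "Sunday", "Monday", "Tuesday",
        "Wednesday", "Thursday", "Friday"
    };

    // day: 1-31, month: 1-12, year: e.g. 2015
    public static String dayOfWeek(int day, int month, int year) {
        if (month < 3) {
            // Zeller treats January and February as months 13 and 14
            // of the previous year.
            month += 12;
            year -= 1;
        }
        int k = year % 100;   // year within the century
        int j = year / 100;   // zero-based century
        int h = (day + (13 * (month + 1)) / 5 + k + k / 4 + j / 4 + 5 * j) % 7;
        return NAMES[h];
    }
}
```

For example, `dayOfWeek(1, 1, 2000)` returns `"Saturday"`. In modern Java, `java.time.LocalDate.of(year, month, day).getDayOfWeek()` does the same job with less arithmetic.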