With many older computers and their peripherals, the firmware code is stored inside a ROM chip of some sort. These were unlike the later EEPROMs and Flash chips which could be electrically erased and reprogrammed. Instead, these chips were often either one-time programmable (OTP) ROM chips which were permanent (meaning that a firmware upgrade required a chip swap), or an EPROM.
EPROMs are erasable; however, doing so requires a relatively intense source of UV light. These EPROMs are typically made in a ceramic DIP (or PLCC) package (sometimes called CERDIP for short) and are, to my knowledge, no longer widely manufactured. These packages are made by epoxying together two halves of ceramic, with the top half containing a quartz or fused silica window that admits UV light onto the chip underneath. In most applications, post-programming, the window is covered with a silvered self-adhesive label to prevent any stray light from getting in and potentially slowly erasing the chip.
These chips are relatively unique, as they allow you to look inside the chip itself. As a result, these are some of the most beautiful chips to see, as you can literally “see” the patterns which form the memory array, and bits and pieces around it which form the row/column matrix decoders etc. Not all chips are of the same capacity, and depending on the vintage and technology, they can have markedly different die sizes and designs. The chips also have a hefty weight. The macro picture above is of the Signetics 256kbit chip which is from 1988 (one year older than I am).
Sources of UV light that will erase such EPROMs in a timely manner are relatively limited, with the most common and most recommended source being one that uses a UV germicidal low-pressure (254nm) lamp. As this wavelength of UV-C is very dangerous for direct exposure, such devices are typically rare and expensive. Other alternatives such as using sunlight often have slow and inconsistent results.
But Wait. There’s a Cheap Version?
To my surprise, when I went looking on eBay, I found a cheap and nasty eraser for about AU$19. In fact, this particular unit seems to have flooded the market for UV EPROM erasers and is resold by umpteen sellers. It’s not very expensive at all. I promptly ordered one, as I would love to be able to erase my existing EPROMs and overwrite them, but the unit that arrived had a bung power switch that was stuck in the OFF position.
I nagged the seller, and all I got back was a lousy AU$5. I suppose that’s better than nothing, so I set off to repair it myself, and while I was doing that, I thought I’d give it a review as well, so here goes.
This unit is often sold without a brand, with the words “durable, fast” attached to the name, but it’s branded AY in my case. The unit is made of a recycled plastic body, of the same teal green weirdness that was the recycled plastic rulers they used to give away to students. It’s light, feels hollow, and features faded printing on the top and a broken mains power switch. It also has a timer, stolen from a box fan.
The erasing chamber is accessible from the front, as a plastic drawer with no handle, but a hole where a handle might be. This hole goes right through to the inside, which isn’t a particularly good idea as UV-C is harmful to skin and eyes. Even though the instantaneous dose for a few seconds of accidental exposure won’t immediately blind you, it is going to cause some damage still. As a result, I patched the hole on both sides with black electrical tape just to be safe.
The drawer itself is not particularly big, but enough to squeeze a few chips side by side if you should have the luxury of having a few chips. The drawer itself is actually “melted” to the front lid to attach it, another sign of quality construction. The non-proven ESD properties of recycled plastic do concern me somewhat.
The unit has plastic feet, with no rubber, and a QC passed sticker which makes me wonder what sort of QC was passed in the first place.
Suspiciously thin flex is used, although not thin enough to pose a major issue, and of course, comes fitted with a two parallel prong plug as it’s a non-compliant piece of “imported” equipment.
Repair Teardown
Taking the unit apart was no big hassle – four Phillips screws later and I’m in.
The insides were, as expected, remarkably simple but also remarkably dusty. I wondered if the plastic and factory conditions may have contributed. Anyway, they decided to save every penny they could when making this device – for example:
- They reused a timer from a box fan as a timer for the eraser.
- They used a power switch in series with the timer switch that is somewhat redundant, as the timer switch has a permanent ON setting as well. The switch was so cheap as to break in shipment.
- They have no proper strain relief for the cable, and instead, it’s just knotted inside the unit.
- The lamp holders are made of plastic, with the pins directly soldered to wires because they didn’t want to pay for the connectors.
- The electronic ballast itself is made on a cheap single-sided PCB by hand in an electronics 101 style fashion, with a design that’s vaguely similar to that of most CFL bulbs, minus any fuse protection.
- There is no effective UV lamp reflector or shielding that stops the other components from being exposed to UV-C.
- The UV-C lamp appears to be the cheapest unbranded 4W tube they could get, with score marks on the glass.
- The PCB is secured by three, rather than the four screws it was designed for.
As UV-C is known to degrade plastics, this design is a ticking time bomb. The plastics will become brittle, flaky and craze due to exposure. Worse still, the PVC insulation on the cables can get brittle, meaning that the insulation may fail and cause a short circuit. I suppose for occasional usage, this isn’t likely to be a major issue, but for actual heavy use, it’s likely that the unit wouldn’t survive too long.
After cleaning the tube of external dust, it was discovered that the glass inside has specks which appear to be the thermionic emission mix on the filaments already “falling” off, suggesting a poor quality tube. I suspect the whole unit will never last long enough for it to be replaced anyway, hence the “disposable” sort of construction.
Removing the PCB also showed that the front drawer was on a slide which did not actuate any interlocks whatsoever, so it is possible to open the drawer during operation and get exposed to UV-C. Not a very safe arrangement.
Even the PCB showed issues – namely a lack of terminal points for connections of wires which had to be soldered directly to the underside of the PCB. Again, I will reiterate that there is no primary side fuse – so those diodes will probably have to become the fuse when something goes wrong. Cheap capacitors are used, as expected, and the underside shows that the PCB is so cheap that even the solder resist isn’t applied consistently. It seems to be a design from 27th October 2008, so it’s been on the market for a while.
The bad switch was dissected (not pictured) and showed that the switching element had fallen out of place to the side thus blocking the actuation of the switch. As I couldn’t dissect it without damaging the switch, I opted to replace it with a slightly better quality enclosed dual pole switch (utilizing only one pole). As it’s a standard size, it fits in the original cut-out, but the AU$5 I got back from the seller definitely doesn’t cover the cost of parts and labour.
Testing
A commonly asked question is how long an EPROM should be erased for. A common answer is to run tests to find out. In order to test the effectiveness of the eraser, I decided to:
- Erase the EPROM for 10 minutes, verify that it is all 0xFF.
- Program all cells with 0x00 (as the unit erases to 0xFF).
- Erase for 30s.
- Check all cells for errors.
- Repeat erase cycle until all cells show 0xFF.
At each 30 second cycle, the cells were checked for errors. A count of number of byte errors (i.e. at least one bit of the byte was corrupted) was recorded, as well as a device clean result (i.e. all device bits are 1’s).
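For reference, the per-cycle bookkeeping described above is straightforward; a hypothetical helper computing both statistics from a read-back dump might look like:

```python
def erase_stats(dump):
    """Return (byte_errors, device_clean) for one read-back of the EPROM.

    A byte counts as an error if at least one of its bits is still 0;
    the device is "clean" when every byte has been erased back to 0xFF.
    """
    byte_errors = sum(1 for b in dump if b != 0xFF)
    return byte_errors, byte_errors == 0

# A 256 kbit (32 KiB) device that still has two partially-programmed bytes:
dump = bytearray([0xFF] * 32768)
dump[100] = 0x7F   # one bit still programmed
dump[200] = 0x00   # whole byte still programmed
print(erase_stats(dump))   # -> (2, False)
```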
Two independent runs showed slightly different results, possibly due to the difference in positioning and warm-up of the lamp. In both cases, device clean was achieved after 330s of exposure, with the first corruptions happening at the 150-180s mark.
Ultimately, it seems that 330s is just enough to reliably erase this particular chip provided it is placed roughly in the middle and the window is very clean. A safety margin should be added in regular usage, with some people recommending anywhere from 2x to 5x the amount of time that it takes for the device to register all 1’s. This means that in-practice erase times should range from 11 to 27.5 minutes. It is possible to damage EPROMs by over-erasing, so exposing them to more than a half-hour of radiation under the eraser is probably unwise unless there is good reason for it.
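The margin arithmetic works out as follows (330 s measured clean time, 2x to 5x recommended margin):

```python
measured_s = 330            # time for the device to read back all 1's
margins = (2, 5)            # commonly recommended safety multipliers

low, high = (measured_s * m / 60 for m in margins)
print(f"{low} to {high} minutes")   # -> 11.0 to 27.5 minutes
```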
How does the erasure actually happen across the cells? I wanted to find out, so on the second run, I dumped the EPROM data as well. The data was then put through a small C program that took each bit and created a whole byte from it – if the bit was 1, it would output 0xFF, otherwise 0x00. This allowed Photoshop to import it as a RAW image in 8-bit greyscale mode, 512×512 pixels, each pixel representing a bit in the 256kbit device. From that, the following artful animated GIF was produced.
There are recurring patterns which likely represent different localized regions of the device, which may be closer to the edge or under a thicker piece of quartz. The erasure is not uniform, as cells are not always uniform in manufacture. I found this relatively intriguing, so I will probably repeat this with some other chips when they arrive.
Conclusion
The unit is cheap, and it does work, but its design is pretty basic and it’s very much a “timebomb” and safety hazard in many ways. For occasional light-duty use, it’s better than nothing, but I wouldn’t rely on one of these in any high or medium-volume circumstance, as it’s literally killing itself while in operation.
Appendix: Shoddy C-program to Convert BIN dumps into RAW for Photoshop
#include <stdio.h>

int main(void)
{
    int tempchar = getchar();
    int mask = 0x80;

    while (tempchar != EOF) {
        while (mask) {
            if (tempchar & mask) {
                printf("%c", 0xFF);
            } else {
                printf("%c", 0x00);
            }
            mask = mask >> 1;
        }
        tempchar = getchar();
        mask = 0x80;
    }
    return 0;
}
That gunk inside the tube is probably mercury or a mercury amalgam used to generate the UV light.
Ah yes, quite true and indeed possible, although I was more expecting to see it in one blob rather than dispersed at a cold state. At least the whole eraser (AU$18) is not much more expensive than the unbranded replacement tube (AU$12), so ultimately, it seems to be a bare minimum cost design.
– Gough
I’d be interested to see the same sort of experiment (with longer time frames) for a chip sat in full sun to see what happens there! You might get a nicer animation.
I’m yet to get an eraser for this type of chip; it’s good to know which kind to avoid. Although the question remains whether any other erasers are any better.
Sparcie
Sadly, the only real “quality” units seem to be vintage ones, which you really don’t want to try since replacement tubes can be a pain to obtain (or not available at all). Instead, I suspect it’s still best to buy one of these “cheap and nasty” units and use them with caution with an expectation you might need to replace the unit after a while.
As for erasure under the sun – it seems some guys had experimented with that and concluded that it takes weeks to months to achieve an inconsistent erasure. The reason is that the sunlight doesn’t contain enough high energy UV that would work reliably to erase the chip. I could probably run this experiment with the sun, but a peek outside at the weather today tells me I’m not going to get anywhere :). Besides, it’s also heading towards winter as well, which would only lengthen the time required. I will definitely consider it, should I have the time, as I will be getting a few batches of “new old stock” arriving with multiple chips of the same type, which I can afford to “sacrifice” (e.g. should it get blown away, stolen by the neighbour’s cat, etc.)
Thanks for the comments as usual :).
– Gough
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to encrypt a field before recording it?
I would like to record a field (which is a password) encrypted in my database. I have to encrypt it directly inside my module, not in my PostgreSQL database. I did the following:
import md5
import os
from openerp.osv import osv, fields

###### Encryption of passwords ######

CRYPT64 = './0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'

def make_salt():
    sbytes = os.urandom(8)
    return ''.join([CRYPT64[ord(b) & 0x3f] for b in sbytes])

def md5crypt(password, salt, magic='$1$'):
    if salt is None:
        salt = make_salt()
    m = md5.new()
    m.update(password + magic + salt)
    mixin = md5.new(password + salt + password).digest()
    for i in range(0, len(password)):
        m.update(mixin[i % 16])
    # Then something really weird...
    # Also really broken, as far as I can tell. -m
    i = len(password)
    while i:
        if i & 1:
            m.update('\x00')
        else:
            m.update(password[0])
        i >>= 1
    final = m.digest()
    return final

md5crypt(pass_pid)

###### Creation of password table ######

class password(osv.osv):
    _inherit = 'res.partner'
    _columns = {
        'passwords': fields.one2many('password.table', 'name_pid', 'user_pid')
    }
password()

class password_table(osv.osv):
    _name = "password.table"
    _columns = {
        'name_pid': fields.many2one('res.partner', 'name', required=True),
        'user_pid': fields.char("UserName", size=128, required=True),
        'host_pid': fields.char("IP/HostName", size=128, required=True),
        'pass_pid': fields.function(md5crypt, type="text", string="password", size=128, required=True),
        'com_pid': fields.text("Comment")
    }
password_table()
The error is about my parameters but I don't know at all how to modify that!
This is a python programming question, not an openerp issue/ error/ question. I will try to point you in the right direction, but you are going to have to get someone that knows how to program python to do this programming for you, this forum is for openERP questions.
First you should move the encryption function into
class password_table(osv.osv):
Second, you want to use this function to encode and store the password, so you have to create a function that will encode when saving the field and another for viewing it.
read how to use functions here:
'pass_pid':fields.function( _view_function, fnct_inv=_store_function, type="text", string="password", size=128, required=True),
Third you cannot just cut and paste code from other sources, you have to massage it into openERP so you will have to rewrite the function accordingly. Your functions will have to start something like this:
def _store_function(self, cr, uid, id, name, value, args=None, context=None):
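For the hashing itself (independent of the ORM wiring), note that the md5 module used in the question is deprecated; the standard hashlib module covers this. A minimal salted-hash sketch (the helper names are illustrative, not OpenERP API, and SHA-256 is used in place of the MD5-crypt scheme):

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Return 'salt$digest' for a password, generating a salt if needed."""
    if salt is None:
        salt = os.urandom(8).hex()
    digest = hashlib.sha256((salt + password).encode('utf-8')).hexdigest()
    return '%s$%s' % (salt, digest)

def check_password(stored, password):
    """Verify a password against a previously stored 'salt$digest' value."""
    salt, _ = stored.split('$', 1)
    return stored == hash_password(password, salt)

stored = hash_password('secret')
print(check_password(stored, 'secret'))   # -> True
print(check_password(stored, 'wrong'))    # -> False
```

Helpers like these are what the store function would call before writing the value, with the view counterpart returning the stored string (a hash cannot be decrypted back to the plain password).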
If you need help beyond this you should contact an openERP partner to write the module for you. Custom coding is a service they are happy to provide.

Can you post the error you are getting? That would help give you a better answer.
I'm getting this error: "File "/home/integration/oe7_1/addons/password_module/password.py", line 59, in <module> md5crypt(pass_pid) NameError: name 'pass_pid' is not defined" | https://www.odoo.com/forum/help-1/question/how-to-encrypt-a-field-before-to-record-it-24653 | CC-MAIN-2017-04 | refinedweb | 493 | 60.72 |
21 September 2011 07:37 [Source: ICIS news]
GUANGZHOU (ICIS)--
Sinopec started building the yuan (CNY) 10.1bn ($1.6bn) facility in September 2010 and is the company’s first LNG terminal in
The producer is building the terminal to enter the LNG market, the source said.
The company is building three storage tanks, each with a capacity of 160,000 cubic metres (cbm), and a dock to support future operations at the terminal, the source said.
In addition, Sinopec is planning to expand the capacity at the terminal in the second phase of the construction.
However, the company has not decided its capacity or when it will start building the second phase, according to the source.
($1 = CNY6.39)
I have a number of IDL files: common types are defined in header.idl and included from message1.idl and message2.idl.
However, rtiddsgen only appears to generate code and a makefile for the last idl file passed to it, e.g. if I give it header.idl and message1.idl, I will not get code generated for the header.
I could run rtiddsgen on header and message1 separately, but would then need to hand craft a makefile for compiling and linking the two.
The alternative is to replicate the idl of the header inside each of message1 and message2, which would make maintenance difficult.
Am I missing something, or is this really the case? If so, are there better ways of achieving what I want.
Hello,
What Language are you generating code for? This answer assumes C/C++ if it was Java maybe some of the problems I mention would not manifest themselves.
Yes, you need to run rtiddsgen separately on each IDL file. As you noted, rtiddsgen does not automatically generate code for the included IDLs. The reason is that if we did this it would be easy to get in a situation where code for the same IDL would be generated multiple times or in multiple files. This would happen for example if the same IDL file was included from multiple other IDLs.
It would not be a good idea to replicate the types in header.idl into both message1.idl and message2.idl. Aside from being messy and hard to maintain, you can easily run into "duplicate symbol" problems.
The model is that each IDL has its own associated generated code that should be managed along with that IDL. Typically the result of this is also a library that contains the associated type-specific generated files (but not the example files). Other IDL files that include that IDL would also have to link the library that was generated from the included IDL.
In your example, where header.idl is included from both message1.idl and message2.idl, the approach would be:

- Run rtiddsgen on header.idl and build the resulting type-support code into header.lib.
- Run rtiddsgen on message1.idl and build message1.lib, linking it against header.lib.
- Run rtiddsgen on message2.idl and build message2.lib, also linking it against header.lib.
Depending on the types used by your final application you would link different subsets of these libraries. For example:
Note that if we had generated the code for the types in header.idl as a result of running rtiddsgen on message1.idl, then this "header-types code" would also appear inside message2.lib, and when you tried to link both message1.lib and message2.lib you would get duplicated symbols.
Unfortunately this means you need to manually edit any auto-generated makefiles/projects. The autogenerated makefiles/projects are intended for simple examples it would hard to make them generic without making them complicated which would confict with their main purpose as examples. So the more "real world" scenarios you describe require manual modification of the makefiles/projects.
Hope this helps,
Gerardo
Many thanks Gerado,
Yes it is C++ we are using and you give a very comprehensive answer that is the same conclusion I was coming to.
Is this possible with Managed C++/CLI and can someone point me to an example? I'm having trouble attempting to use the example code generated with rtiddsgen.
Visual Studio 2013
Windows 10 64 bit
RTI Connext DDS 5.2.0 (Professional if that matters)
Here is what I have:
C:\test\foo\foo.idl

#ifndef FOO_IDL
#define FOO_IDL
struct Foo { long data; };
#endif
C:\test\bar\bar.idl

#ifndef BAR_IDL
#define BAR_IDL
#include "foo.idl"
struct Bar { Foo foo; };
#endif
I first run rtiddsgen on foo.idl and then compile the generated foo-type project (and the publisher/subscriber, coincidentally) using the following:
C:\test\foo>rtiddsgen -language C# -example x64Win64VS2013 foo.idl
INFO com.rti.ndds.nddsgen.Main Running rtiddsgen version 2.3.0, please wait ...
rtiddsgen20_7341046294903150386.cc
INFO com.rti.ndds.nddsgen.Main Done
C:\test\foo>devenv foo-64-csharp.sln /build release
<skipping most of the visual studio output for brevity>
1> foo_type-64-dotnet4.5.1.vcxproj -> C:\test\foo\bin\x64\Release-VS2013\foo_type.dll
2> foo_publisher-64-csharp -> C:\test\foo\bin\x64\Release-VS2013\foo_publisher.exe
3> foo_subscriber-64-csharp -> C:\test\foo\bin\x64\Release-VS2013\foo_subscriber.exe
========== Build: 3 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
Running foo_publisher.exe and foo_subscriber.exe works as expected.
I repeat for bar.idl, while passing in the path to foo.idl as an additional argument:
C:\test\bar>rtiddsgen -language C# -example x64Win64VS2013 -I ..\foo bar.idl
INFO com.rti.ndds.nddsgen.Main Running rtiddsgen version 2.3.0, please wait ...
rtiddsgen20_4442684579607075862.cc
INFO com.rti.ndds.nddsgen.Main Done
From here, I open the generated bar-64-csharp.sln and try the following:
1) Compile bar_type project. Compile error is it cannot find "foo.h" in the project.
2) I add the path to foo.h via: bar_type > Properties > C/C++ > General > Additional Include Directories > Add C:\test\foo
3) Compile again and it works this time. Now, I get linking errors, specifically error LNK2001: unresolved external symbol "struct DDS_TypeCode * __cdecl Foo_get_typecode(void)" (?Foo_get_typecode@@$$FYAPEAUDDS_TypeCode@@XZ)
4) I add a reference to foo_type.dll via: bar_type > Properties > Common Properties > Add New Reference > point to foo_type.dll
5) Compile once more and I get various errors:
Error 1 error C2011: 'Foo' : 'class' type redefinition C:\test\foo\foo.h 19 1 bar_type
Error 2 error C2011: 'FooSeq' : 'class' type redefinition C:\test\foo\foo.h 48 1 bar_type
Error 12 error C2664: 'bool BarPlugin::serialize(DDS::TypePluginDefaultEndpointData ^,Bar ^,DDS::CdrStream %,bool,unsigned short,bool,System::Object ^)' : cannot convert argument 2 from 'Foo ^' to 'Bar ^' C:\test\bar\barPlugin.cpp 87 1 bar_type
Error 14 error C2664: 'bool BarPlugin::deserialize_sample(DDS::TypePluginDefaultEndpointData ^,Bar ^,DDS::CdrStream %,bool,bool,System::Object ^)' : cannot convert argument 2 from 'Foo ^' to 'Bar ^' C:\test\bar\barPlugin.cpp 132 1 bar_type
Error 19 error C2664: 'unsigned int BarPlugin::get_serialized_sample_size(DDS::TypePluginDefaultEndpointData ^,bool,unsigned short,unsigned int,Bar ^)' : cannot convert argument 5 from 'Foo ^' to 'Bar ^' C:\test\bar\barPlugin.cpp 287 1 bar_type
Error 24 error C2027: use of undefined type 'Foo' C:\test\bar\bar.cpp 28 1 bar_type
and more
Am I missing something obvious? I have no trouble doing this with C++.
Thanks for any help!
Hi,
Your issue seems to be that Visual Studio is trying to compile the same files that are in the Foo library. You can avoid this situation by using a macro suffix in all your base types (in this case foo). Please, try with the following steps:
1) Generate the Foo library with a macro suffix:
C:\test\foo>rtiddsgen -language c# -example x64Win64VS2013 -dllExportMacroSuffix FOO foo.idl
C:\test\foo>devenv foo-64-csharp.sln /build release
2) Generate the Bar type files:
C:\test\bar>rtiddsgen -language c# -example x64Win64VS2013 -I ..\foo bar.idl
3) Edit the bar_type project properties:
3.1) In "Configuration Properties -> C/C++ -> General -> Additional Include Directories" add the path to the Foo type files:
3.3) In "Configuration Properties -> Linker -> General -> Additional Library Directories" add the Foo library path: C:\test\foo\bin\x64\Release-VS2013
3.4) In "Configuration Properties -> Linker -> Input -> Additional Dependencies" add the Foo library: foo_type.lib
3.5) In "Common Properties -> References -> Add New Reference..." add the Foo DLL.
4) Add the Foo library as a reference in the C# Publisher and Subscriber project too.
Regards,
Benito | https://community.rti.com/forum-topic/nested-idl | CC-MAIN-2020-29 | refinedweb | 1,200 | 51.44 |
So I wrote a function that reads in a .csv file and creates a dictionary of abbreviations and their meaning. Now I'm looking to create a main function that calls the create dictionary function and then prompts the user for keys, but I'm not sure how to access the created dictionary.
import csv
def CreateDictionary(fileName):
with open(fileName, 'r') as f:
reader = csv.reader(f)
newDict = {}
for x, y in reader:
newDict.setdefault(x, []).append(y)
return newDict
def main():
CreateDictionary('textToEnglish.csv')
key = input("Please enter a text abbreviation")
for key, value in newDict:
You need to assign the return value of the CreateDictionary function to some variable, like this:

newDict = CreateDictionary('textToEnglish.csv')
this way you can access the items in the dictionary like this:
newDict["etc"] | https://codedump.io/share/mnliww5pxV9E/1/passing-a-dictionary-or-its-keys-as-values-to-a-function | CC-MAIN-2018-09 | refinedweb | 132 | 56.76 |
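Putting the answer together, here is a complete, self-contained sketch (the sample CSV rows are made up for illustration, and the lookup uses dict.get rather than looping over the dictionary):

```python
import csv

def CreateDictionary(fileName):
    with open(fileName, 'r') as f:
        reader = csv.reader(f)
        newDict = {}
        for x, y in reader:
            newDict.setdefault(x, []).append(y)
        return newDict

def main():
    newDict = CreateDictionary('textToEnglish.csv')
    key = input("Please enter a text abbreviation: ")
    # Look the key up directly instead of iterating over the whole dict.
    print(newDict.get(key, ['(unknown abbreviation)']))

# Self-contained demo: write a tiny sample file, then look a key up.
with open('textToEnglish.csv', 'w', newline='') as f:
    f.write('lol,laughing out loud\nbrb,be right back\n')

newDict = CreateDictionary('textToEnglish.csv')
print(newDict['lol'])   # -> ['laughing out loud']

# main()  # uncomment to prompt interactively
```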
NAME
keyboard_init, keyboard_init_return_fd - initialize the keyboard to raw mode
SYNOPSIS
#include <vgakeyboard.h>

int keyboard_init(void);
int keyboard_init_return_fd(void);
DESCRIPTION
These routines initialize the keyboard to raw mode. No ASCII codes are produced, instead svgalib keeps track of each single keypress or depress. The return_fd version returns the file descriptor of the console device to allow you to do further tricks with it (though it is unclear which). The other version just returns 0 if successful. Both return -1 on error. keyboard_close(3) returns to normal operation. When in raw keyboard mode you can no longer use vga_getch(3) or vga_getkey(3) but you must use keyboard_getstate(3) or keyboard_keypressed(3). Depending on the setting of keyboard_translatekeys(3) even in raw mode a <Ctrl>-C will cause a SIGINT. In any case svgalib will check for <Alt>-F1 - <Alt>-F10 and perform console switches accordingly.
SEE ALSO
svgalib(7), vgagl(7), libvga.config(5), keytest(6), eventtest(6), keyboard_seteventhandler(3), keyboard_close(3), keyboard_update(3), keyboard_waitforupdate(3).
#include "core/or/or.h"
#include "app/config/config.h"
#include "core/or/reasons.h"
#include "feature/nodelist/node_select.h"
#include "lib/tls/tortls.h"
Definition in file reasons.c.
Convert a numeric reason for destroying a circuit into a string for a CIRCUIT event.
Definition at line 324 of file reasons.c.
References END_CIRC_AT_ORIGIN, and END_CIRC_REASON_FLAG_REMOTE.
Given a RELAY_END reason value, convert it to an HTTP response to be sent over an HTTP tunnel connection.
Definition at line 457 of file reasons.c.
References END_STREAM_REASON_MASK.
Given an errno from a failed ORConn connection, return a reason code appropriate for use in the controller orconn events.
Definition at line 287 of file reasons.c.
Referenced by connection_handle_read_impl().
Given an errno from a failed exit connection, return a reason code appropriate for use in a RELAY END cell.
Definition at line 177 of file reasons.c.
Referenced by connection_edge_end_errno().
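The translation these errno_to_* helpers perform is essentially a table lookup. An illustrative Python sketch of the pattern (the reason names echo Tor's RELAY END reasons, but this mapping is simplified and not Tor's exact table):

```python
import errno

# Simplified stand-in for an errno -> RELAY END reason translation table.
_STREAM_END_REASONS = {
    errno.ECONNREFUSED: 'CONNECTREFUSED',
    errno.ETIMEDOUT:    'TIMEOUT',
    errno.ENETUNREACH:  'NOROUTE',
    errno.EHOSTUNREACH: 'NOROUTE',
}

def errno_to_stream_end_reason(e):
    """Map an errno from a failed exit connection to an END reason name."""
    return _STREAM_END_REASONS.get(e, 'MISC')

print(errno_to_stream_end_reason(errno.ECONNREFUSED))   # -> CONNECTREFUSED
print(errno_to_stream_end_reason(errno.EPERM))          # -> MISC
```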
Convert the reason for ending a stream reason into the format used in STREAM events. Return NULL if the reason is unrecognized.
Definition at line 28 of file reasons.c.
References END_STREAM_REASON_MASK.
Translate reason (as from a relay 'end' cell) into an appropriate SOCKS5 reply code.
A reason of 0 means that we're not actually expecting to send this code back to the socks client; we just call it 'succeeded' to keep things simple.
Definition at line 100 of file reasons.c.
References END_STREAM_REASON_MASK. | https://people.torproject.org/~nickm/tor-auto/doxygen/reasons_8c.html | CC-MAIN-2019-39 | refinedweb | 241 | 53.37 |
So basically this exercise is not actually part of an fCC exercise but from the Python Crash Course book… I'm working on a game that works like Space Invaders. My issue arises when I try to create the bullets; because the ship and the bullets are connected, we're supposed to use the pygame.sprite module and import Sprite (I think). I have done everything by the book but I am still getting "ImportError: cannot import name 'sprite' from 'pygame.sprite'"… if anyone could help me out with this error I would be greatly appreciative… Here is the module for the bullets:
import pygame
from pygame.sprite import Sprite

class Bullet(Sprite):
    """A class to manage bullets fired from the ship"""

    def __init__(self, ai_game):
        """Create a bullet object at the ships current location"""
        super().__init__()
        self.screen = ai_game.screen
        self.settings = ai_game.settings
        self.color = self.settings.bullet_color

        # Create a bullet rect at (0, 0) and then set correct position.
        self.rect = pygame.Rect(0, 0, self.settings.bullet_width,
                                self.settings.bullet_height)
        self.rect.midtop = ai_game.ship.rect.midtop

        # Store the bullets value as a decimal value.
        self.y = float(self.rect.y)

    def update(self):
        """Move bullet up the screen"""
        # Update decimal value of the bullet.
        self.y -= self.settings.bullet_speed
        # Update rect position.
        self.rect.y = self.y

    def draw_bullet(self):
        """Draw the bullet to the screen"""
        pygame.draw.rect(self.screen, self.color, self.rect)
(please let me know if i need to add more of my code in order to be assisted…) | https://forum.freecodecamp.org/t/please-could-someone-be-of-assistance-probably-not-a-difficult-one-but-i-sure-am-baffled/481180 | CC-MAIN-2022-40 | refinedweb | 257 | 51.78 |
How do I use wildcards to route when a non-existent route is given in the new Router, like .otherwise in the 1.x router or /** in the previous beta router? Any example / plnkr code is highly appreciated.
import { provideRouter, RouterConfig } from '@angular/router'; const routes: RouterConfig = [ { path: '**', component: PageNotFoundComponent } ]; export const appRouterProviders = [ provideRouter(routes) ];
The ** denotes a wildcard path for our route. The router will match this route if the URL requested doesn't match any paths for routes defined in our configuration. This is useful for displaying a 404 page or redirecting to another route.
Take a look at the router docs | https://codedump.io/share/iwHaGoHKld0Y/1/otherwise-or--in-the-new-angular2-router-to-route-non-route-through-wild-cards | CC-MAIN-2017-17 | refinedweb | 105 | 67.15 |
NAME
mp4h - Macro Processor for HTML Documents
VERSION
This documentation describes mp4h version 1.3.1.
INTRODUCTION
The mp4h software is a macro-processor specifically designed to deal with HTML documents. It allows powerful programming constructs, with a syntax familiar to HTML authors. This software is based on Meta-HTML "<URL:>", written by Brian J. Fox. Even if both syntaxes look similar, the source code is completely different. Indeed, a subset of Meta-HTML was used as part of a more complex program, WML (Website Meta Language "<URL:>") written by Ralf S. Engelschall, which I have maintained since January 1999. For licensing reasons, it was hard to hack Meta-HTML, so I decided to write my own macro-processor. Instead of rewriting it from scratch, I preferred using another macro-processor engine. I chose GNU m4 "<URL:>", written by Rene Seindal, because of its numerous advantages: this software is stable, robust and very well documented. This version of mp4h is derived from GNU m4 version 1.4n, which is a development version. The mp4h software is not an HTML editor; its unique goal is to provide an easy way to define your own macros inside HTML documents. There is no plan to add functionality to automagically produce valid HTML documents; if you want to clean up your code or validate it, simply use a post-processor like tidy "<URL:>".
COMMAND LINE OPTIONS
Optional arguments are enclosed within square brackets. All option synonyms share the same syntax, so when a long option accepts an argument, the short option does too. The calling syntax is

    mp4h [options] [filename [filename] ...]

Options are described below. If no filename is specified, or if its name is "-", characters are read from standard input.

Operation modes
    --help
        display a help message and exit
    --version
        output mp4h version information and exit
    -E, --fatal-warnings
        stop execution after the first warning
    -Q, --quiet, --silent
        suppress some warnings for builtins
    -S, --safety-level="NUMBER"
        disable risky functions; 0 means no filtering, 1 disables "execute", and 2 additionally disables all filesystem related functions: "file-exists", "real-path", "get-file-properties", "directory-contents" and "include"

Preprocessor features
    -I, --include="DIRECTORY"
        search this directory for includes and packages
    -D, --define="NAME[=VALUE]"
        set variable NAME to VALUE, or to the empty string
    -U, --undefine="COMMAND"
        delete builtin COMMAND
    -s, --synclines
        generate `#line NO "FILE"' lines

Parser features
    -c, --caseless="NUMBER"
        set case sensitivity according to the bits of "NUMBER". A null bit means the symbol is case sensitive; bits are defined as follows: 0 for tags, 1 for variables and 2 for entities. The default value is 3, i.e. only entities are case sensitive.
    -e, --encoding="NAME"
        specify document encoding. Valid options are `8bit' (default) or `utf8'. 
    -X, --expansion="NUMBER"
        set parser behaviour according to the bits of "NUMBER". NUMBER is a combination of
            1    do not parse unknown tags
            2    unknown tags are assumed to be simple
            4    a trailing star in a tag name does not make this tag simple
            8    an unmatched end tag closes all previous unmatched begin tags
            16   interpret backslashes as printf does
            32   remove the trailing slash in tag attributes
            64   do not remove a trailing star in a tag name
            128  do not remove a leading star in a tag name
            256  do not add a space before the trailing slash in tag attributes
            1024 suppress warnings about badly nested tags
            2048 suppress warnings about a missing trailing slash
        In version 1.3.1, the default value is 3114=2+8+32+1024+2048.

Limits control
    -H, --hashsize="PRIME"
        set the symbol lookup hash table size (default 509)
    -L, --nesting-limit="NUMBER"
        change the artificial nesting limit (default 250)

Debugging
    -d, --debug="FLAGS"
        set the debug level (no FLAGS implies `aeq')
    -t, --trace="NAME"
        trace NAME when it will be defined
    -l, --arglength="NUMBER"
        restrict macro tracing size
    -o, --error-output="FILE"
        redirect debug and trace output

    Flags are any of:
        t    trace all macro calls, not only those enabled with debugging-on
        a    show actual arguments
        e    show expansion
        c    show before collect, after collect and after call
        x    add a unique macro call id, useful with the c flag
        f    say current input file name
        l    say current input line number
        p    show results of path searches
        m    show results of module operations
        i    show changes in input files
        V    shorthand for all of the above flags
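The flag bits of --expansion combine by simple addition of distinct powers of two. The short Python sketch below (an illustration only, not part of mp4h) decodes such a bitmask and confirms the stated 1.3.1 default of 3114:

```python
# Illustrative only: decode mp4h's --expansion bitmask into its flag bits.
# Descriptions are paraphrased from the option list above.
EXPANSION_FLAGS = {
    1: "do not parse unknown tags",
    2: "unknown tags are assumed to be simple",
    4: "trailing star does not make a tag simple",
    8: "unmatched end tag closes previous unmatched begin tags",
    16: "interpret backslashes as printf does",
    32: "remove trailing slash in tag attributes",
    64: "do not remove trailing star in tag name",
    128: "do not remove leading star in tag name",
    256: "do not add space before trailing slash",
    1024: "suppress warnings about badly nested tags",
    2048: "suppress warnings about missing trailing slash",
}

def decode(mask):
    """Return the list of flag bits set in the given bitmask."""
    return [bit for bit in sorted(EXPANSION_FLAGS) if mask & bit]

# The documented default for version 1.3.1:
assert decode(3114) == [2, 8, 32, 1024, 2048]
assert sum(decode(3114)) == 3114
```

The same decomposition applies to the --caseless bitmask, whose bits 1, 2 and 4 select case sensitivity for tags, variables and entities respectively.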
DESCRIPTION
The mp4h software is a macro processor, which means that keywords are replaced by other text. This chapter describes all primitives. As mp4h has been specially designed for HTML documents, its syntax is very similar to HTML, with tags and attributes. One important feature has no equivalent in HTML: comments until end of line. All text following three semicolons is discarded until end of line, as in

    ;;; This is a comment

Function Macros

Note: All examples in this documentation are processed through mp4h with expansion flags set to zero (see a description of possible expansion flags at the end of this document), which is why simple tags contain a trailing slash. But mp4h can output plain HTML files with other expansion flags.

The definition of new tags is the most common task provided by mp4h. As with HTML, macro names are case insensitive, unless the "-c" option is used to change this default behaviour. In this documentation, only lowercase letters are used. There are two kinds of tags: simple and complex. A simple tag has the following form: <name [attributes] /> whereas a complex tag looks like: <name [attributes]> body </name> Since version 0.9.1, mp4h knows XHTML syntax too, so your input file may conform to HTML or XHTML syntax. In this manual, we adopt the latter, which is why simple tags have a trailing slash in attributes. If you want to produce HTML files with this input file, you may either choose an adequate "--expansion" flag or use a post-processor like tidy "<URL:>". When a simple tag is defined by mp4h, it can be parsed even if the trailing slash is omitted, because mp4h knows that this tag is simple. But it is good practice to always append a trailing slash to simple tags. In the macro descriptions below, a slash indicates a simple tag, and a V letter indicates that attributes are read verbatim (without expansion) (see the chapter on macro expansion for further details). 
o define-tag "name" "[attributes=verbatim]" "[endtag=required]" "[whitespace=delete]" This function lets you define your own tags. First argument is the command name. Replacement text is the function body. Source: <define-tag foo>bar</define-tag> <foo/> Output: bar Even if spaces have usually few incidence on HTML syntax, it is important to note that <define-tag foo>bar</define-tag> and <define-tag foo> bar </define-tag> are not equivalent, the latter form contains two newlines that were not present in the former. "whitespace=delete" Some spaces are suppressed in replacement text, in particular any leading or trailing spaces, and newlines not enclosed within angle brackets. "endtag=required" Define a complex tag Source: <define-tag foo>bar</define-tag> <foo/> Output: bar Source: <define-tag bar endtag=required>;;; body is: %body</define-tag> <bar>Here it is</bar> Output: body is: Here it is "attributes=verbatim" By default attributes are expanded before text is replaced. If this attribute is used, attributes are inserted into replacement text without expansion. Source: <define-tag foo>quux</define-tag> <define-tag bar attributes=verbatim endtag=required> Body: %Ubody Attributes: %Uattributes </define-tag> <bar txt="<foo/>">Here we go</bar> Output: Body: Here we go Attributes: txt=<foo/> o provide-tag "name" "[attributes=verbatim]" "[endtag=required]" "[whitespace=delete]" This command is similar to the previous one, except that no operation is performed if this command was already defined. o let "S" "new=old" Copy a function. This command is useful to save a macro definition before redefining it. Source: <define-tag foo>one</define-tag> <let bar=foo /> <define-tag foo>two</define-tag> <foo/><bar/> Output: twoone o undef "S" "name" Delete a command definition. Source: <define-tag foo>one</define-tag> <undef foo /> <foo/> Output: <foo /> o set-hook "name" "[position=before|after]" "[action=insert|append|replace]" Add text to a predefined macro. 
This mechanism allows modifications of existing macros without having to worry about its type, whether it is complex or not. Source: <let foo=add /> <set-hook foo position=before> Before</set-hook> <set-hook foo position=after> After</set-hook> <foo 1 2 3 4 /> Output: Before10 After o get-hook "S" "name" "[position=before|after]" Print current hooks of a macro. Source: Text inserted with position=before:<get-hook foo position=before />! Text inserted with position=after:<get-hook foo position=after />! Output: Text inserted with position=before: Before! Text inserted with position=after: After! o attributes-quote "S" "%attributes" Like %attributes, except that "attr=value" pairs are printed with double quotes surrounding attribute values, and a leading space is added if some text is printed. Source: <define-tag foo>;;; %attributes <img<attributes-quote %attributes />/> </define-tag> <foo id="logo" src="logo.gif" name="Logo" alt="Our logo" /> <foo/> Output: id=logo src=logo.gif name=Logo alt=Our logo <img id="logo" src="logo.gif" name="Logo" alt="Our logo"/> <img/> o attributes-extract "S" "name1" "[,name2[,name3...]]" "%attributes" Extract from %attributes the "attr=value" pairs for names matching any of name1, name2.... Source: <define-tag img whitespace=delete> <img* <attributes-extract name,src,alt %attributes /> /> </define-tag> <img id="logo" src="logo.gif" name="Logo" alt="Our logo" /> Output: <img src=logo.gif name=Logo alt=Our logo /> o attributes-remove "S" "name1" "[,name2[,name3...]]" "%attributes" Remove from %attributes the "attr=value" pairs for names matching any of name1, name2.... Source: <define-tag img whitespace=delete> <img* <attributes-quote <attributes-remove name,src,alt %attributes />/> /> </define-tag> <img id="logo" src="logo.gif" name="Logo" alt="Our logo" /> Output: <img id="logo" /> Note: The two previous functions are special, because unlike all other macros, their expansion do not form a group. 
This is necessary to parse the resulting list of attributes. In those two functions, names of attributes may be regular expressions. Main goal of these primitives is to help writing macros accepting any kind of attributes without having to declare them. A canonical example is Source: <define-tag href whitespace=delete> <preserve url name /> <set-var <attributes-extract url,name %attributes />/> <a <attributes-quote <attributes-remove url,name %attributes />/><get-var name /></a> <restore url name /> </define-tag> <href class=web Output: <a class="web" href="">Welcome</a> But we want now to add an image attribute. So we may write Source: <define-tag href whitespace=delete> <preserve url name image /> <set-var <attributes-extract url,name,image %attributes />/> <a <attributes-quote <attributes-remove url,name,image %attributes />/> <if <get-var image /> <img <attributes-quote <attributes-remove url,name,image %attributes />/> Output: <a class="web" href=""><img class="web" src="foo.png" alt="Welcome" border=0 /></a> We need a mechanism to tell mp4h that some attributes refer to specific HTML tags. A solution is to prepend attribute with tag name, e.g. a:<img img:</a> This example shows that regular expressions may be used within attributes names, but it is still incomplete, because we want to remove prefix from attributes. One solution is with "subst-in-string", but there is a more elegant one:<img id="logo" border="1" src="foo.png" alt="Welcome" /></a> When there are subexpressions within regular expressions, they are printed instead of the whole expression. Note also that I put a colon before the prefix in order not to mix them with XML namespaces. Entities Entities are macros in the same way as tags, but they do not take any arguments. Whereas tags are normally used to mark up text, entities contain already marked up text. Also note that unlike tags, entities are by default case sensitive. 
An entity has the following form: &entity;

o define-entity "name" This function lets you define your own entities. First argument is the entity name. Replacement text is the function body. Source: <define-entity foo>bar</define-entity> &foo; Output: bar

Variables

Variables are a special case of simple tags, because they do not accept attributes. In fact their use is different, because variables contain text whereas macros act like operators. A nice feature concerning variables is their manipulation as arrays. Indeed, variables can be considered as newline-separated lists, which allows powerful manipulation functions, as we will see below.

o set-var "S" "name[=value]" "[name[=value]] ..." This command sets variables.

o set-var-verbatim "S""V" "name[=value]" "[name[=value]] ..." As above but attributes are read verbatim.

o set-var-x "name=variable-name" This command assigns to a variable the value of the body of the command. This is particularly useful when variable values contain newlines and/or quotes. Note that the variable cannot be indexed with this command. Note also that this command behaves as set-var-verbatim: the body is not expanded until the variable is shown with get-var.

o get-var "S" "name" "[name] ..." Show variable contents. If a numeric value within square brackets is appended to a variable name, it represents the index of an array. The first index of arrays is 0 by convention. Source: <set-var This is version <get-var version /> <set-var-x name=osversion>Operating system is "<include command="uname" /><include command="uname -r" />"</set-var-x> <get-var osversion /> Output: This is version 0.10.1 Operating system is "Linux 2.6.24-27-server " Source: <set-var <get-var foo[2] foo[0] foo /> Output: 200 1 2 3

o get-var-once "S""V" "name" "[name] ..." As above but attributes are not expanded. 
Source: <define-tag foo>0.10.1</define-tag> <set-var;;; Here is version <get-var version /> <set-var-verbatim;;; Here is version <get-var version /> <set-var-verbatim;;; Here is version <get-var-once version /> Output: Here is version 0.10.1 Here is version 0.10.1 Here is version <foo/> o preserve "S" "name" "[name] ..." All variables are global, there is no variable or macro scope. For this reason a stack is used to preserve variables. When this command is invoked, arguments are names of variables, whose values are put at the top of the stack and variables are reset to an empty string. o restore "S" "name" "[name] ..." This is the opposite: arguments are names of variables, which are set to the value found at the top of the stack, and stack is popped down. Note: The "preserve" tag pushes its last argument first, whereas "restore" first pops its first argument. Source: <define-tag foo whitespace=delete> <preserve src name text /> <set-var %attributes /> Inside: src=<get-var src /> name=<get-var name /> text=<get-var text /> <restore src name text /> </define-tag> <set-var src=foo.png Before: src=<get-var src /> name=<get-var name /> text=<get-var text /> <foo src=bar name=quux /> After: src=<get-var src /> name=<get-var name /> text=<get-var text /> Output: Before: src=foo.png name= text=Hello, World! Inside: src=bar name=quux text= After: src=foo.png name= text=Hello, World! o unset-var "S" "name" "[name] ..." Undefine variables. o var-exists "S" "name" Returns "true" when this variable exists. o increment "S" "name" "[by=value]" Increment the variable whose name is the first argument. Default increment is one. "by=value" Change increment amount. Source: <set-var i=10 /> <get-var i /> <increment i /><get-var i /> <increment i<get-var i /> Output: 10 11 8 o decrement "S" "name" "[by=value]" Decrement the variable whose name is the first argument. Default decrement is one. "by=value" Change decrement amount. 
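Because all mp4h variables are global, preserve/restore is the only scoping mechanism. The Python sketch below is a rough model of the stack discipline described above (the function names mirror the mp4h tags but the code is illustrative, not mp4h internals): preserve pushes its last argument first and clears the variables, restore pops its first argument first.

```python
# Rough model of mp4h's preserve/restore: all variables live in one
# global table, and a single stack provides pseudo-local scoping.
variables = {}
stack = []

def preserve(*names):
    """Push current values onto the stack, then reset the variables."""
    for name in reversed(names):      # <preserve a b /> pushes its last argument first
        stack.append(variables.get(name, ""))
    for name in names:
        variables[name] = ""

def restore(*names):
    """Pop saved values back into the variables."""
    for name in names:                # <restore a b /> pops its first argument first
        variables[name] = stack.pop()

# Mirrors the src/name example above:
variables["src"] = "foo.png"
preserve("src", "name")
variables["src"], variables["name"] = "bar", "quux"
restore("src", "name")
assert variables["src"] == "foo.png" and variables["name"] == ""
```

Because preserve and restore use the same push/pop order, listing the same names in the same order in both tags round-trips correctly.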
Source: <set-var i=10 /> <get-var i /> <decrement i /><get-var i /> <decrement i<get-var i /> Output: 10 9 6 o copy-var "S" "src" "dest" Copy a variable into another. Source: <set-var i=10 /> <copy-var i j /> <get-var j /> Output: 10 o defvar "S" "name" "value" If this variable is not defined or is defined to an empty string, then it is set to the second argument. Source: <unset-var title /> <defvar title "Title" /><get-var title /> <defvar title "New title" /><get-var title /> Output: Title Title o symbol-info "S" "name" Show information on symbols. If it is a variable name, the "STRING" word is printed as well as the number of lines contained within this variable. If it is a macro name, one of the following messages is printed: "PRIM COMPLEX", "PRIM TAG", "USER COMPLEX" or "USER TAG" Source: <set-var <define-tag foo>bar</define-tag> <define-tag bar endtag=required>quux</define-tag> <symbol-info x /> <symbol-info symbol-info /> <symbol-info define-tag /> <symbol-info foo /> <symbol-info bar /> Output: STRING 5 PRIM TAG PRIM COMPLEX USER TAG USER COMPLEX String Functions o string-length "S" "string" Prints the length of the string. Source: <set-var;;; <string-length <get-var foo /> /> <set-var;;; <set-var l=<string-length <get-var foo /> /> />;;; <get-var l /> Output: 7 7 o downcase "S" "string" Convert to lowercase letters. Source: <downcase "Does it work?" /> Output: does it work? o upcase "S" "string" Convert to uppercase letters. Source: <upcase "Does it work?" /> Output: DOES IT WORK? o capitalize "S" "string" Convert to a title, with a capital letter at the beginning of every word. Source: <capitalize "Does it work?" /> Output: Does It Work? o substring "S" "string" "[start [end]]" Extracts a substring from a string. First argument is original string, second and third are respectively start and end indexes. By convention first character has a null index. 
Source: <set-var <substring <get-var foo /> 4 /> <substring <get-var foo /> 4 6 /> Output: efghijk ef o string-eq "S" "string1" "string2" "[caseless=true]" Returns "true" if first two arguments are equal. Source: 1:<string-eq "aAbBcC" "aabbcc" /> 2:<string-eq "aAbBcC" "aAbBcC" /> Output: 1: 2:true "caseless=true" Comparison is case insensitive. Source: 1:<string-eq "aAbBcC" "aabbcc" caseless=true /> 2:<string-eq "aAbBcC" "aAbBcC" caseless=true /> Output: 1:true 2:true o string-neq "S" "string1" "string2" "[caseless=true]" Returns "true" if the first two arguments are not equal. Source: 1:<string-neq "aAbBcC" "aabbcc" /> 2:<string-neq "aAbBcC" "aAbBcC" /> Output: 1:true 2: "caseless=true" Comparison is case insensitive. Source: 1:<string-neq "aAbBcC" "aabbcc" caseless=true /> 2:<string-neq "aAbBcC" "aAbBcC" caseless=true /> Output: 1: 2: o string-compare "S" "string1" "string2" "[caseless=true]" Compares two strings and returns one of the values less, greater or equal depending on this comparison. Source: 1:<string-compare "aAbBcC" "aabbcc" /> 2:<string-compare "aAbBcC" "aAbBcC" /> Output: 1:less 2:equal "caseless=true" Comparison is case insensitive. Source: 1:<string-compare "aAbBcC" "aabbcc" caseless=true /> Output: 1:equal o char-offsets "S" "string" "character" "[caseless=true]" Prints an array containing indexes where the character appear in the string. "caseless=true" Comparison is case insensitive. Source: 1:<char-offsets "abcdAbCdaBcD" a /> 2:<char-offsets "abcdAbCdaBcD" a caseless=true /> Output: 1:0 8 2:0 4 8 o printf "S" "format" "string" "[string ...]" Prints according to a given format. Currently only the %s flag character is recognized, and "$" extension is supported to change order of arguments. 
Source: 1:<printf "foo %s bar %s" baz 10 /> 2:<printf "foo %2$s bar %1$s" baz 10 /> Output: 1:foo baz bar 10 2:foo 10 bar baz

Regular Expressions

Regular expression support is provided by the PCRE (Perl Compatible Regular Expressions) library package, which is open source software, copyright by the University of Cambridge. This is a very nice piece of software; latest versions are available at "<URL:>". Before version 1.0.6, POSIX regular expressions were implemented. For this reason, the following macros recognize two attributes, "caseless=true" and "singleline=true|false". But Perl allows much better control over regular expressions with so-called modifiers, which are passed to the new "reflags" attribute. It may contain one or more modifiers:

    i    Matching is case insensitive.
    m    Treat string as multiple lines. When set, a "^" matches any beginning of line, and "$" any end of line. By default, they match the beginning and end of the string.
    s    Treat string as a single line. A dot (".") may also match a newline, whereas it does not by default.
    x    Allow formatted regular expressions; whitespace, newlines and comments are removed from the regular expression before processing.

Note: Attribute "singleline=true" is a synonym for the "s" modifier, whereas "singleline=false" is a synonym for the "m" modifier. This behaviour was different up to mp4h 1.0.6.

o subst-in-string "S" "string" "regexp" "[replacement]" "[caseless=true]" "[singleline=true|false]" "[reflags=[imsx]]" Replaces a regular expression in a string with a replacement text. 
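For readers who know Python, the i, m, s and x modifiers map one-to-one onto flags of the re module, and subst-in-string behaves much like re.sub. The sketch below only illustrates the semantics; mp4h itself uses the PCRE library in C, and the function name here is mine:

```python
import re

# Map mp4h's reflags letters to Python's equivalent re flags.
REFLAGS = {"i": re.IGNORECASE, "m": re.MULTILINE, "s": re.DOTALL, "x": re.VERBOSE}

def subst_in_string(string, regexp, replacement="", reflags=""):
    """Rough Python analogue of mp4h's subst-in-string tag."""
    flags = 0
    for letter in reflags:
        flags |= REFLAGS[letter]
    return re.sub(regexp, replacement, string, flags=flags)

# Deleting every match, as in the first subst-in-string example below:
assert subst_in_string("abcdefghijk", "[c-e]") == "abfghijk"
# Case-insensitive matching via the "i" modifier:
assert subst_in_string("ABC", "b", "x", reflags="i") == "AxC"
```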
Source: <set-var <subst-in-string <get-var foo /> "[c-e]" /> <subst-in-string <get-var foo /> "([c-e])" "\\1 " /> Output: abfghijk abc d e fghijk Source: <set-var <subst-in-string <get-var foo /> ".$" "" /> <subst-in-string <get-var foo /> ".$" "" singleline=false /> <subst-in-string <get-var foo /> " ([a-c]) | [0-9] " ":\\1:" reflags=x /> Output: abcdefghijk abcdefghijk abcdefghij abcdefghij abcdefghij abcdefghij :a::b::c:defghijk :a::b::c:defghijk :a::b::c:defghijk o subst-in-var "S" "name" "regexp" "[replacement]" "[caseless=true]" "[singleline=true|false]" "[reflags=[imsx]]" Performs substitutions inside variable content. o match "S" "string" "regexp" "[caseless=true]" "[singleline=true|false]" "[reflags=[imsx]]" "[action=report|extract|delete|startpos|endpos|length]" "action=report" Prints "true" if string contains regexp. "action=extract" Prints the expression matching regexp in string. "action=delete" Prints the string without the expression matching regexp in string. "action=startpos" Prints the first char of the expression matching regexp in string. If there is no match, returns "-1". "action=endpos" Prints the last char of the expression matching regexp in string. If there is no match, returns "-1". "action=length" Prints the length of the expression matching regexp in string. Source: 1:<match "abcdefghijk" "[c-e]+" /> 2:<match "abcdefghijk" "[c-e]+" action=extract /> 3:<match "abcdefghijk" "[c-e]+" action=delete /> 4:<match "abcdefghijk" "[c-e]+" action=startpos /> 5:<match "abcdefghijk" "[c-e]+" action=endpos /> 6:<match "abcdefghijk" "[c-e]+" action=length /> Output: 1:true 2:cde 3:abfghijk 4:2 5:5 6:3 Arrays With mp4h one can easily deal with string arrays. Variables can be treated as a single value or as a newline separated list of strings. 
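The newline-separated representation just described can be modeled in a few lines of Python. This is a rough illustration of the semantics only; the function names mirror the mp4h tags but the code is not part of mp4h:

```python
# Model mp4h variables as strings; "arrays" are newline-separated lines.
variables = {}

def set_var(name, value):
    variables[name] = value

def get_var(name, index=None):
    """Whole value, or a single line of it when an index is given."""
    value = variables.get(name, "")
    if index is None:
        return value
    return value.split("\n")[index]

def array_push(name, value):
    """Append one or more lines at the end of the array."""
    variables[name] = variables[name] + "\n" + value if variables.get(name) else value

def array_size(name):
    return len(variables[name].split("\n")) if variables.get(name) else 0

# Mirrors the digits example: indexes start at 0.
set_var("digits", "0\n1\n2\n3")
assert get_var("digits", 2) == "2"
assert array_size("digits") == 4
array_push("digits", "10\n11\n12")
assert array_size("digits") == 7
```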
Thus after defining <set-var one can view its content or one of these values: Source: <get-var digits /> <get-var digits[2] /> Output: 0 1 2 3 2 o array-size "S" "name" Returns an array size which is the number of lines present in the variable. Source: <array-size digits /> Output: 4 o array-push "S" "name" "value" Add a value (or more if this value contains newlines) at the end of an array. Source: <array-push digits "10\n11\n12" /> <get-var digits /> Output: 0 1 2 3 10 11 12 o array-pop "S" "name" Remove the toplevel value of an array and returns this string. o array-topvalue "S" "name" Prints the last entry of an array. Source: <array-topvalue digits /> Output: 12 o array-add-unique "S" "name" "value" "[caseless=true]" Add a value at the end of an array if this value is not already present in this variable. Source: <array-add-unique digits 2 /> <get-var digits /> Output: 0 1 2 3 10 11 12 "caseless=true" Comparison is case insensitive. o array-concat "S" "name1" "[name2] ..." Concatenates all arrays into the first one. Source: <set-var <set-var <array-concat foo bar /><get-var foo /> Output: foo bar o array-member "S" "name" "value" "[caseless=true]" If value is contained in array, returns its index otherwise returns -1. Source: <array-member digits 11 /> Output: 5 "caseless=true" Comparison is case insensitive. o array-shift "S" "name" "offset" "[start=start]" Shifts an array. If offset is negative, indexes below 0 are lost. If offset is positive, first indexes are filled with empty strings. Source: <array-shift digits 2 /> Now: <get-var digits /> <array-shift digits -4 /> And: <get-var digits /> Output: Now: 0 1 2 3 10 11 12 And: 2 3 10 11 12 "start=start" Change origin of shifts (default is 0). Source: <array-shift digits -2 start=2 /><get-var digits /> Output: 2 3 12 o sort "S" "name" "[caseless=true]" "[numeric=true]" "[sortorder=reverse]" Sort lines of an array in place. Default is to sort lines alphabetically. 
Source: <sort digits /><get-var digits /> Output: 12 2 3 "caseless=true" Comparison is case insensitive. "numeric=true" Sort lines numerically. Source: <sort digits numeric=true /><get-var digits /> Output: 2 3 12 "sortorder=reverse" Reverse sort order. Source: <sort digits numeric=true sortorder=reverse />;;; <get-var digits /> Output: 12 3 2

Numerical operators

These operators perform basic arithmetic operations. When all operands are integers the result is an integer too, otherwise it is a float. These operators are self-explanatory.

o add "S" "number1" "number2" "[number3] ..."
o substract "S" "number1" "number2" "[number3] ..."
o multiply "S" "number1" "number2" "[number3] ..."
o divide "S" "number1" "number2" "[number3] ..."
o min "S" "number1" "number2" "[number3] ..."
o max "S" "number1" "number2" "[number3] ..."

Source: <add 1 2 3 4 5 6 /> <add 1 2 3 4 5 6. /> Output: 21 21.000000 Source: <define-tag factorial whitespace=delete> <ifeq %0 1 1 <multiply %0 "<factorial <substract %0 1 /> />" /> /> </define-tag> <factorial 6 /> Output: 720

o modulo "S" "number1" "number2" Unlike the functions listed above, the modulo function cannot handle more than 2 arguments, and these arguments must be integers. Source: <modulo 345 7 /> Output: 2

These functions compare two numbers and return "true" when the comparison is true. If one argument is not a number, the comparison is false.

o gt "S" "number1" "number2" Returns "true" if the first argument is greater than the second.
o lt "S" "number1" "number2" Returns "true" if the first argument is lower than the second.
o eq "S" "number1" "number2" Returns "true" if the arguments are equal.
o neq "S" "number1" "number2" Returns "true" if the arguments are not equal.

Relational operators

o not "S" "string" Returns "true" if the string is empty, otherwise returns an empty string.
o and "S" "string" "[string] ..." Returns the last argument if all arguments are non empty.
o or "S" "string" "[string] ..." Returns the first non empty argument. 
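The integer/float promotion rule stated above (an integer result only when all operands are integers) can be mimicked directly in Python, whose arithmetic behaves the same way. This sketch is only an illustration of the rule; the function names imitate the mp4h tags:

```python
from functools import reduce

def parse_number(token):
    """Parse an operand like mp4h does: int if possible, else float."""
    try:
        return int(token)
    except ValueError:
        return float(token)

def add(*tokens):
    """Rough analogue of <add>: result stays an int only if all operands are ints."""
    return sum(parse_number(t) for t in tokens)

def multiply(*tokens):
    """Rough analogue of <multiply>, with the same promotion behaviour."""
    return reduce(lambda a, b: a * b, (parse_number(t) for t in tokens))

# Mirrors the <add 1 2 3 4 5 6 /> examples above:
assert add("1", "2", "3", "4", "5", "6") == 21 and isinstance(add("1", "2"), int)
assert add("1", "2", "3", "4", "5", "6.") == 21.0 and isinstance(add("1", "6."), float)
```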
Flow functions o group "S""V" "expression" "[expression] ..." "[separator=string]" This function groups multiple statements into a single one. Some examples will be seen below with conditional operations. A less intuitive but very helpful use of this macro is to preserve newlines when "whitespace=delete" is specified. Source: <define-tag text1> Text on 3 lines without whitespace=delete </define-tag> <define-tag text2 whitespace=delete> Text on 3 lines with whitespace=delete </define-tag> <define-tag text3 whitespace=delete> <group "Text on 3 lines with whitespace=delete" /> </define-tag> <text1/> <text2/> <text3/> Output: Text on 3 lines without whitespace=delete Text on3 lines withwhitespace=delete Text on 3 lines with whitespace=delete Note that newlines are suppressed in "text2" and result is certainly unwanted. o compound "expression" "[expression] ..." "[separator=string]" Like "group", but this tag is complex. "separator=string" By default arguments are put aside. This attribute define a separator inserted between arguments. o disjoin "S" "expression" Does the opposite job to "group", its argument is no more treated as a single object when processed by another command. o noexpand "S""V" "command" "[command] ..." Prints its arguments without expansion. They will never be expanded unless the "expand" tag is used to cancel this "noexpand" tag. o expand "S" "command" "[command] ..." Cancels the "noexpand" tag. Source: <subst-in-string "=LT=define-tag foo>bar=LT=/define-tag>" "= <foo/> <subst-in-string "=LT=define-tag foo>quux=LT=/define-tag>" "=" /> <foo/> Output: bar <define-tag foo>quux</define-tag> bar o if "S""V" "string" "then-clause" "[else-clause]" If string is non empty, second argument is evaluated otherwise third argument is evaluated. 
Source: <define-tag test whitespace=delete> <if %0 "yes" "no" /> </define-tag> <test "string" /> <test "" /> Output: yes no o ifeq "S""V" "string1" "string2" "then-clause" "[else-clause]" If first two arguments are identical strings, third argument is evaluated otherwise fourth argument is evaluated. o ifneq "S""V" "string1" "string2" "then-clause" "[else-clause]" If first two arguments are not identical strings, third argument is evaluated otherwise fourth argument is evaluated. o when "string" When argument is not empty, its body is evaluated. o while "V" "cond" While condition is true, body function is evaluated. Source: <set-var i=10 /> <while <gt <get-var i /> 0 />>;;; <get-var i /> <decrement i />;;; </while> Output: 10 9 8 7 6 5 4 3 2 1 o foreach "variable" "array" "[start=start]" "[end=end]" "[step=pas]" This macro is similar to the "foreach" Perl's macro: a variable loops over array values and function body is evaluated for each value. first argument is a generic variable name, and second is the name of an array. Source: <set-var <foreach i x><get-var i /> </foreach> Output: 1 2 3 4 5 6 "start=start" Skips first indexes. Source: <set-var <foreach i x start=3><get-var i /> </foreach> Output: 4 5 6 "end=end" Stops after index has reached that value. Source: <set-var <foreach i x end=3><get-var i /> </foreach> Output: 1 2 3 "step=step" Change index increment (default is 1). If step is negative, array is treated in reverse order. Source: <set-var <foreach i x step=2><get-var i /> </foreach> <foreach i x step=-2><get-var i /> </foreach> Output: 1 3 5 6 4 2 o var-case "S""V" "var1=value1 action1" "[var2=value2 action2 ..." This command performs multiple conditions with a single instruction. 
Source: <set-var i=0 /> <define-tag test> <var-case x=1 <group <increment i /> x<get-var i /> /> x=2 <group <decrement i /> x<get-var i /> /> y=1 <group <increment i /> y<get-var i /> /> y=2 <group <decrement i /> y<get-var i /> /> /> </define-tag> <set-var x=1 y=2 /><test/> <set-var x=0 y=2 /><test/> Output: x1y0 y-1

o break "S" Breaks the innermost "while" loop. Source: <set-var i=10 /> <while <gt <get-var i /> 0 />>;;; <get-var i /> <decrement i />;;; <ifeq <get-var i /> 5 <break/> />;;; </while> Output: 10 9 8 7 6

o return "S" "[up=number]" "string" This command immediately exits from the innermost macro. A message may also be inserted. But this macro changes token parsing, so its use may become very hazardous in some situations. "up=number" This attribute determines how many levels are exited. By default only one level is skipped. With a null value, all current macros are exited from. A negative value does the same, and stops processing the current file.

o warning "S" "string" Prints a warning on standard error.

o exit "S" "[status=rc]" "[message=string]" Immediately exits the program. "message=string" Prints a message to standard error. "status=rc" Selects the code returned by the program (-1 by default).

o at-end-of-file This is a special command: its content is stored and will be expanded after the end of input.

File functions

o directory-contents "S" "dirname" "[matching=regexp]" Returns a newline-separated list of files contained in a given directory. Source: <directory-contents . Output: mp4h.mp4h

o real-path "S" "pathname=pathname" Resolves all symbolic links, extra ``/'' characters and references to /./ and /../ in pathname, and expands into the resulting absolute pathname. All but the last component of pathname must exist when real-path is called. This tag is particularly useful when checking whether file or directory names are identical. 
Source: <real-path pathname=<__file__/> /> Output: /build/buildd/mp4h-1.3.1/doc/mp4h.mp4h

o file-exists "S" "filename" Returns "true" if the file exists.

o get-file-properties "S" "filename" Returns an array of information on this file. This information is: size, type, ctime, mtime, atime, owner and group. Source: <get-file-properties <__file__/> /> Output: 68628 FILE 1271080359 1271080359 1271080359 buildd buildd

o include "S" "file=filename | command=command-line" "[alt=action]" "[verbatim=true]" Inserts the contents of a file (if the "file" attribute is given), or the output from executing a system command (if the "command" attribute is given), into the input stream. For backwards compatibility, if neither the "file" nor the "command" attribute is given, the first argument is taken as a file to include. "file=filename" The given file is read and inserted into the input stream. This attribute cannot be combined with the command attribute. "command=command-line" The given command line is executed on the operating system, and its output is inserted into the input stream. This attribute cannot be combined with the file attribute. The given command line is executed using the popen(3) standard C library routine. The command is executed using the standard system shell, which on POSIX compliant systems is sh(1). "alt=action" If the file is not found, this alternate action is performed. If this attribute is not set and the file is not found, then an error is raised. This attribute has no effect when the command attribute is specified. "verbatim=true" File content is included without expansion. This is similar to using the m4 undivert macro with a filename as argument. Source: <include command="uname -a" /> Output: Linux vernadsky 2.6.24-27-server #1 SMP Fri Mar 12 01:45:06 UTC 2010 i686 GNU/Linux

o use "S" "name=package" Load definitions from a package file.

o comment This tag does nothing; its body is simply discarded. 
o set-eol-comment "S" "[string]"
  Change comment characters.
o set-quotes "S" "[string string]" "[display=visible]"
  By default, all characters between "<@[" and "]@>" pairs are read without parsing. When called without argument, this macro inhibits this feature. When called with two arguments, it redefines the begin and end delimiters. The begin delimiter must begin with a left-angle bracket, and the end delimiter must end with a right-angle bracket.
  "display=visible" Delimiters are also written into output.
Diversion functions
  Diversions are a way of temporarily saving output. mp4h tries to keep diversions in memory; however, there is a limit to the overall memory usable by all diversions taken altogether.
o divert "S" "[ divnum=diversion-number ]"
  Output is diverted using this tag, where diversion-number is the diversion to be used. If the divnum attribute is left out, diversion-number is assumed to be zero. If output is diverted to a non-existent diversion, it is simply discarded. This can be used to suppress unwanted output; see the example below. When all mp4h input has been processed, all existing diversions are automatically undiverted, in numerical order. Several calls of divert with the same argument do not overwrite the previously diverted text, but append to it.
  Source: <divert divnum="-1"/> This is sent nowhere... <divert/> This is output.
  Output: This is output.
o undivert "S" "[ divnum=diversion-number ]"
  This tag explicitly undiverts the diverted text saved in the diversion with the specified number. If the divnum attribute is not given, all diversions are undiverted, in numerical order. When diverted text is undiverted, it is not reread by mp4h, but rather copied directly to the current output. It is therefore not an error to undivert into a diversion. Unlike m4, the mp4h undivert tag does not allow a file name as argument. The same can be accomplished with the include tag with the verbatim attribute.
  Source: <divert divnum="1"/> This text is diverted. <divert/> This text is not diverted. <undivert divnum="1"/>
  Output: This text is not diverted. This text is diverted.
o divnum "S"
  This tag expands to the number of the current diversion.
  Source: Initial <divnum/> <divert divnum="1"/> Diversion one: <divnum/> <divert divnum="2"/> Diversion two: <divnum/> <divert/>
  Output: Initial 0 Diversion one: 1 Diversion two: 2
Debugging functions
  When constructs become complex, they can be hard to debug. The functions listed below are very useful when you cannot figure out what is wrong. These functions are not perfect yet and will be improved in future releases.
o function-def "S" "name"
  Prints the replacement text of a user-defined macro. For instance, the macro used to generate all examples of this documentation is
  Source: <function-def example />
  Output: <set-var-verbatim verb-body=%ubody /><subst-in-var verb-body "<" "<" /> <subst-in-var verb-body ">" ">" /><subst-in-var verb-body "^\n*" "" /><subst-in-var verb-body "^" " " reflags=m /><set-var body=%body /><subst-in-var body "<three-colon/>[^;\n]*\n[ \t]*" "" /><subst-in-var body "<three-colon/>$" "" reflags=m /><subst-in-var body "^\n*" "" /><subst-in-var body "^" " " reflags=m /><group "Source: <get-var-once verb-body /> Output: <get-var-once body /> " />
o debugmode "S" "string"
  This command acts like the "-d" flag but can be dynamically changed.
o debugfile "S" "filename"
  Selects a file where debugging messages are diverted. If this filename is empty, debugging messages are sent back to standard error, and if it is set to "-" these messages are discarded. Note: there is no way to print these debugging messages into the document being processed.
o debugging-on "S" "name" "[name] ..."
  Declares these macros traced, i.e. information about these macros will be printed if the "-d" flag or the "debugmode" macro is used.
o debugging-off "S" "name" "[name] ..."
  These macros are no longer traced.
Miscellaneous
o __file__ "S" "[name]"
  Without argument this macro prints the current input filename. With an argument, this macro sets the string returned by future invocations of this macro.
o __line__ "S" "[number]"
  Without argument this macro prints the current line number in the input file. With an argument, this macro sets the number returned by future invocations of this macro.
  Source: This is <__file__/>, line <__line__/>.
  Output: This is ./mp4h.mp4h, line 2201.
  If you look closely at the source code you will see that this number is wrong. Indeed, the line number is that of the end of the entire block containing this instruction.
o __version__ "S"
  Prints the version of mp4h.
o dnl "S"
  Discards all characters until a newline is reached. This macro ensures that the following string is a comment and does not depend on the value of the comment characters.
  Source: <dnl/>This is a comment foo <dnl/>This is a comment bar
  Output: foo bar
o date "S" "[epoch]"
  Prints the local time according to the epoch passed as argument. If there is no argument, the current local time is printed.
  "time" An epoch time specification.
  "format" A format specification as used with the strftime(3) C library routine.
  Source: <date/> <set-var info=<get-file-properties <__file__/> /> /> <date <get-var info[2] /> /> <date time="<get-var info[2] />" format="%Y-%m-%d %H:%M:%S" />
  Output: Mon Apr 12 13:53:26 2010 Mon Apr 12 13:52:39 2010 2010-04-12 13:52:39
o timer "S"
  Prints the time spent since the last call to this macro. The printed value is a number of clock ticks, and so depends on your CPU.
  Source: The number of clock ticks since the beginning of the generation of this documentation by mp4h is: <timer/>
  Output: The number of clock ticks since the beginning of generation of this documentation by mp4h is: user 9 sys 0
o mp4h-l10n "S" "name=value"
  Sets locale-specific variables. By default, the portable "C" locale is selected.
As locales have different names on different platforms, you must refer to your system documentation to find which values are adapted to your system. o mp4h-output-radix "S" "number" Change the output format of floats by setting the number of digits after the decimal point. Default is to print numbers in the "%6.f" format. Source: <add 1.2 3.4 /> <mp4h-output-radix 2 /> <add 1.2 3.4 /> Output: 4.600000 4.60
EXTERNAL PACKAGES
It is possible to include external files with the "include" command. Another way to include packages is with the "use" command. There are two differences between "use" and "include": first, the package name has no suffix; and, more importantly, a package cannot be loaded more than once.
MACRO EXPANSION
This part describes internal mechanism of macro expansion. It must be as precise and exhaustive as possible so contact me "<URL:mailto:barbier@linuxfr.org>" if you have any suggestion. Basics Let us begin with some examples: Source: <define-tag foo> This is a simple tag </define-tag> <define-tag bar endtag=required> This is a complex tag </define-tag> <foo/> <bar>Body function</bar> Output: This is a simple tag This is a complex tag User defined macros may have attributes like HTML tags. To handle these attributes in replacement text, following conventions have been adopted (mostly derived from Meta-HTML): o Sequence %name is replaced by the command name. o Attributes are numbered from 0. In replacement text, %0 is replaced by first argument, %1 by the 2nd, etc. As there is no limitation on the number of arguments, %20 is the 21st argument and not the third followed by the 0 letter. Source: <define-tag href> <a href="%0">%1</a> </define-tag> <href "The Gimp" /> Output: <a href="">The Gimp</a> o Sequence "%#" prints number of attributes. o Sequence "%%" is replaced by "%", which is useful in nested definitions. Source: <define-tag outer>;;; outer, # attributes: %# <define-tag inner1>;;; inner1, # attributes: %#;;; </define-tag>;;; <define-tag inner2>;;; inner2, # attributes: %%#;;; </define-tag>;;; <inner1 %attributes and some others /> <inner2 %attributes and some others /> </define-tag> <outer list attributes /> Output: outer, # attributes: 2 inner1, # attributes: 2 inner2, # attributes: 5 o Sequence %attributes is replaced by the space separated list of attributes. Source: <define-tag mail1> <set-var %attributes /> <get-var name /> <get-var mail /> </define-tag> <set-var <mail1 name="Dr. Foo" mail="hello@foo.com" /> Output: Dr. Foo hello@foo.com o Sequence %body is replaced by the body of a complex macro. 
Source: <define-tag mail2 endtag=required whitespace=delete> <set-var %attributes /> <a href="mailto:<get-var mail />">%body</a> </define-tag> <mail2 mail="hello@foo.com"> <img src="photo.png" alt="Dr. Foo" border=0 /> </mail2> Output: <a href="mailto:hello@foo.com"> <img src="photo.png" alt="Dr. Foo" border=0 /> </a> o The two forms above accept modifiers. When %Aattributes or %Abody is used, a newline separated list of attributes is printed. Source: <define-tag show-attributes whitespace=delete> <set-var <increment i /> </foreach> </define-tag> <show-attributes Output: %0: name=Dr. Foo%1: mail=hello@foo.com o Another alternate form is obtained by replacing "A" by "U", in which case text is replaced but will not be expanded. This does make sense only when macro has been defined with "attributes=verbatim", otherwise attributes are expanded before replacement. Source: <define-tag show1> Before expansion: %Uattributes After expansion: %attributes </define-tag> <define-tag show2 attributes=verbatim> Before expansion: %Uattributes After expansion: %attributes </define-tag> <define-tag bar>and here %attributes</define-tag> <show1 <bar we go /> /> <show2 <bar we go /> /> Output: Before expansion: and here we go After expansion: and here we go Before expansion: <bar we go /> After expansion: and here we go o Modifiers "A" and "U" can be combined. Note: Input expansion is completely different in Meta-HTML and in mp4h. With Meta-HTML it is sometimes necessary to use other constructs like %xbody and %qbody. In order to improve compatibity with Meta-HTML, these constructs are recognized and are interpreted like %body. Another feature provided for compatibility reason is the fact that for simple tags %body and %attributes are equivalent. These features are in the current mp4h version but may disappear in future releases. Attributes Attributes are separated by spaces, tabulations or newlines, and each attribute must be a valid mp4h entity. 
For instance with the definitions above, "<bar>" can not be an attribute since it must be finished by "</bar>". But this is valid: <foo <foo/> /> or even <foo <foo name=src url=ici /> /> In these examples, the "foo" tag has only one argument. Under certain circumstances it is necessary to group multiple statements into a single one. This can be done with double quotes or with the "group" primitive, e.g. <foo "This is the 1st attribute" <group and the second /> /> Note: Unlike HTML single quotes can not replace doube quotes for this purpose. If double quotes appear in an argument, they must be escaped by a backslash "\". Source: <set-var <get-var text /> Output: Text with double quotes " inside Macro evaluation Macros are characterized by o name o container status (simple or complex) o if attributes are expanded or not o function type (primitive or user defined macro) o for primitives, address of corresponding code in memory and for user defined macros the replacement text Characters are read on input until a left angle bracket is found. Then macro name is read. After that attributes are read, verbatim or not depending on how this macro as been defined. And if this macro is complex, its body is read verbatim. When this is finished, some special sequences in replacement text are replaced (like %body, %attributes, %0, %1, etc.) and resulting text is put on input stack in order to be rescanned. Note: By default attributes are evaluated before any replacement. Consider the following example, to change text in typewriter font: <define-tag text-tt endtag=required whitespace=delete> <tt>%body</tt> </define-tag> This definition has a major drawback: Source: <text-tt>This is an <text-tt>example</text-tt></text-tt> Output: <tt>This is an <tt>example</tt></tt> We would like the inner tags be removed. 
First idea is to use an auxiliary variable to know whether we still are inside such an environment: <set-var _text:tt=0 /> <define-tag text-tt endtag=required whitespace=delete> <increment _text:tt /> <ifeq <get-var _text:tt /> 1 "<tt*>" /> %body <ifeq <get-var _text:tt /> 1 "</tt*>" /> <decrement _text:tt /> </define-tag> (the presence of asterisks in HTML tags is explained in next section). Source: <text-tt>This is an <text-tt>example</text-tt></text-tt> Output: <tt>This is an example</tt> But if we use simple tags, as in the example below, our definition does not seem to work. It is because attributes are expanded before they are put into replacement text. Source: <define-tag opt><text-tt>%attributes</text-tt></define-tag> <opt "This is an <opt example />" /> Output: <tt>This is an <tt>example</tt></tt> If we want to prevent this problem we have to forbid attributes expansion with Source: <define-tag opt attributes=verbatim>;;; <text-tt>%attributes</text-tt>;;; </define-tag> <opt "This is an <opt example />" /> Output: <tt>This is an example</tt> Expansion flags When you want to embed some server-side scripting language in your pages, you face up some weird problems, like in <a href=<%= $url %>>Hello</a> The question is how do mp4h know that this input has some extra delimiters? The answer is that mp4h should not try to handle some special delimiters, because it cannot handle all of them (there are ASP, ePerl, PHP,... and some of them are customizable). Now, remember that mp4h is a macro-processor, not an XML parser. So we must focus on macros,and format our input file so that it can be parsed without any problem. Previous example may be written <a href="<%= $url %>">Hello</a> because quotes prevent inner right-angle bracket from closing the "a" tag. Another common problem is when we need to print only a begin or an end tag alone. For instance it is very desirable to define its own headers and footers with <define-tag header> <html*> <head> ... 
put here some information .... </head> <body* </define-tag> <define-tag footer> </body*> </html*> </define-tag> Asterisks mark these tags as pseudo-simple tags, which means that they are complex HTML tags, but used as simple tags within mp4h because tags would not be well nested otherwise. This asterisk is called ``trailing star'', it appears at the end of the tag name. Sometimes HTML tags are not parsable, as in this javascript code: ... document.write('<*img src="foo.gif"'); if (text) document.write(' alt="'+text+'"'); document.write('>'); ... The ``leading star'' is an asterisk between left-angle bracket and tag name, which prevents this tag from being parsed. That said we can now understand what the "--expansion" flag is for. It controls how expansion is performed by mp4h. It is followed by an integer, which is a bit sum of the following values 1 do not parse unknown tags. When set, HTML tags are not parsed. When unset, HTML tags are parsed, i.e. that attributes and/or body is collected. 2 unknown tags are assumed being simple. When set, HTML tags are simple by default. When unset, HTML tags are complex by default, unless their attribute contain a trailing slash or a trailing star appear just after tag name (see below). 4 trailing star in tag name do not make this tag simple. When set, trailing star in tag name has no special effect. When unset, it causes an HTML tag to be simple. 8 an unmatched end tag closes all previous unmatched begin tags. When set, all missing end closing tags are automatically inserted. When unset, an unmatched end tag is discarded and interpreted as normal text, so processing goes on until matching and tag is found. 16 interpret backslashes as printf. When set, backslashes before non special characters are removed. When unset, they are preserved. 32 remove trailing slash in tag attributes. When set, remove trailing slash in tag attributes on output. When unset, they are preserved. 64 do not remove trailing star in tag name. 
When set, trailing stars after tag names are preserved on output. When unset, they are removed.
128 do not remove leading star in tag name. When set, leading stars before tag names are preserved on output. When unset, they are removed.
256 do not add a space before trailing slash in tag attributes. By default, a space is inserted before the trailing slash in tag attributes. When set, this space is not prepended.
1024 suppress warnings about badly nested tags. When set, warnings about badly nested tags are not displayed. When unset, they are printed on standard error.
2048 suppress warnings about missing trailing slash. When set, warnings about a missing trailing slash are not displayed. When unset, they are printed on standard error.
Run mp4h -h to find the default value. The current value matches HTML syntax, and it will tend to zero as XHTML syntax becomes more familiar.
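The "--expansion" value above is a bit sum, so individual behaviors are turned on by OR-ing their flag values and tested with a bitwise AND. A quick sketch of that mechanism (the flag values 1, 2, 64 and 1024 come from the list above; the Python constant names are my own):

```python
# Names invented here for some of the documented --expansion bit values.
NO_PARSE_UNKNOWN   = 1     # do not parse unknown tags
UNKNOWN_ARE_SIMPLE = 2     # unknown tags are assumed to be simple
KEEP_TRAILING_STAR = 64    # do not remove trailing star in tag name
NO_NESTING_WARN    = 1024  # suppress warnings about badly nested tags

def has_flag(value, flag):
    """True if the given bit is set in the --expansion value."""
    return value & flag != 0

# A combined value is just the bitwise OR (here: sum) of the chosen flags.
value = NO_PARSE_UNKNOWN | NO_NESTING_WARN   # 1025
print(has_flag(value, NO_PARSE_UNKNOWN))     # True
print(has_flag(value, UNKNOWN_ARE_SIMPLE))   # False
```

This is exactly how a value such as 1025 decomposes back into the documented behaviors.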
AUTHOR
Denis Barbier "<URL:mailto:barbier@linuxfr.org>" Mp4h has its own homepage "<URL:>".
THANKS
Sincere thanks to Brian J. Fox for writing Meta-HTML and Rene Seindal for maintaining this wonderful macro parser called GNU m4. | http://manpages.ubuntu.com/manpages/oneiric/man1/mp4h.1.html | CC-MAIN-2014-10 | refinedweb | 8,461 | 55.54 |
On Tuesday 03 June 2003 22:29, Joerg Heinicke wrote:
JH> 1: A solution for the HTMLSerializer was discussed
JH> (startPrefixMapping(), endPrefixMapping()). Maybe TidySerializer
JH> provides a better solution, but I guess this can be adapted too.
A little more would be necessary. You would have to map all xhtml namespaces
to the default prefix and remove the declaration. All other namespaces would have
to generate an error. Once I know someone will commit it, I can make this
patch for HTMLSerializer.
JH> 2: Human readability is as you say for debugging reasons. This needs not
JH> to be done on live systems. We use IMO a better way: on the last
JH> transformer step we add a label="format". We access a page in debugging
JH> mode via test.html?cocoon-view=format. The view "format" is simply a
JH> further transformer step using format.xsl and the XMLSerializer. The
JH> different between live and debugging mode is the URL, not the sitemap.
JH> And there is no need for second component.
You can use tidyserializer for the same via views. For this purpose, a
stylesheet would also be OK. But it wouldn't be simply indenting. You have to
check for xml:strip-space and friends. That's why I haven't done it so far.
JH> 3: Also only for debugging, isn't it?
Yes and no. I want it as part of quality assurance.
JH> Validating every request on live systems is too much resource consuming.
Not really, once it is (hopefully) integrated into Xalan some time. For XSLT
2.0 type handling there has to be (IMHO) validation inside anyway; it only has
to be forceable for the output.
JH> And what do you want to do on
JH> live systems when a validation error occurs? A message "We can't deliver
JH> the page, because it's not valid HTML"?
Redirecting to an internal error page, like with other errors too.
JH> But you have other
JH> possibilities, e.g. using or a mix of Jakarta
JH> commons httpclient with Tidy as we did. If you integrate this in a test
JH> system you can validate your pages automatically.
That's what I actually do, more or less. I validate all generated files via
xmllint. I am looking for a cleaner solution which enforces this already in
Cocoon. If this were a parameter or a view, even better.
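As a sketch of the kind of automated check meant here (well-formedness only, not DTD validation, and not Cocoon-specific; the helper name is made up), the xmllint pass can be approximated with the standard library:

```python
import sys
from xml.etree import ElementTree

def is_well_formed(path):
    """Return True if the file parses as well-formed XML/XHTML."""
    try:
        ElementTree.parse(path)
        return True
    except ElementTree.ParseError as err:
        # Report the offending file and position, like xmllint does.
        print(f"{path}: {err}", file=sys.stderr)
        return False

# Usage idea, e.g. over a build directory:
#   ok = all(is_well_formed(p) for p in glob.glob("build/*.html"))
```

In a test system this can run over every generated page and fail the build on the first malformed document.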
Regards
Torsten
--
Domain in provider transition, hoping for smoothness. Planned date is 1.7.2003.
For some time I’ve been reading blogs about exploiting, and it seems that at last my time has come to change roles. In this blog I will publish posts as a workshop on exploiting “from scratch”, and additionally I will often try to bring examples of vulnerabilities (hereafter “vulns”, if you allow me) from the real world. This post is primarily an intro to acquire the basic ideas.
Anyway, let’s start by defining exploiting. Exploiting is the set of techniques that seek to take advantage of errors (bugs) made by the programmer in order to manipulate the behavior of a program; these errors are mainly a poor anticipation of the possible data that a user can provide. Once I read a quote (from a certain P. Williams) which reads:
From the point of view of a programmer, the user is no more than a peripheral that hammers the keyboard when sent a read request.
This idea is possibly what leads to these errors appearing. For me, the user is a potential serial killer with whom we must be polite, and whose requests we satisfy only up to a point; so it is always necessary to properly verify the data the user provides before handling it.
Exploitation can happen at many levels (you can exploit, for example, the web server binary itself, or the webapp running on that server; techniques such as SQLi and XSS belong to the latter world, but as for us, we will enjoy only the binary world, the more beautiful one, at least for me), and of course there are cases of extremely complex exploitation and others with simpler schemes. The more complexity, the more fun.
To understand the full meaning of every sentence you read, dear reader, it is healthy to know a little assembler (for now IA32; later we will get into AMD64 and maybe ARM/ARM64, perhaps even AVR). I know it can be scary, but well tamed it is quite docile; besides, no excessive knowledge is required, since for any instruction one doesn’t know, one can always consult a manual (for instance, Intel’s). Similarly, you need to know C or C++, and assembler (“asm” from now on) in more depth.
These exploitation techniques have been in use since before many of us were born, although I do not think it is possible to determine in which year research in this field began. In any case, surely the intelligence agencies already had the advantage by the time the public started with it (as has always happened, the best example being cryptography, especially the discovery of asymmetric cryptography). The most important milestone was possibly November 2, 1988, when Robert Tappan Morris, seeing the disaster that was coming, executed his famous Morris worm, responsible for infecting 6,000 of the 60,000 computers connected to the Internet at the time (still ARPANET), i.e. 10%, causing damage worth thousands of dollars.
Multiple viruses have been used since then to spread through the network by exploiting vulnerabilities, the most notorious recent case being WannaCry, which exploited EternalBlue (which we will get to).
Another really important event was the publication of the article Smashing the Stack for Fun and Profit by Aleph1 in 1996 in Phrack (a large electronic magazine about hacking and phreaking), which popularized the exploitation of the buffer overflow.
Anyway, let’s start at once, right?
A buffer overflow occurs when a program allows writing beyond the end of the memory space that was reserved to store the data it is receiving. It is generally understood that a buffer overflow happens on the stack, while those occurring on the heap are called heap overflows (a very broad topic that is still some way off for us). Synonyms for stack buffer overflow are stack smashing, buffer overrun, stack overflow and similar combinations.
Let’s see how an out-of-bounds write can lead to arbitrary code execution. You have to understand that a
call asd is equivalent to performing a
push eip; mov eip, asd, and that a
ret instruction is practically a
pop eip. Therefore, if we take advantage of a buffer overflow to overwrite the saved EIP, the
ret executed at the end of the function will load into EIP the value we have placed there, and we gain control of the program flow.
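That equivalence can be shown with a toy model (pure Python, nothing machine-specific): `call` pushes the return address and jumps, `ret` pops whatever is on top of the stack into the instruction pointer, so whoever controls that stack slot controls where execution resumes. The addresses below are arbitrary illustrative values:

```python
# Toy stack machine: just enough to show call/ret as push/pop of EIP.
stack = []
eip = 0

def call(target, return_addr):
    """call target  ==  push eip; mov eip, target"""
    global eip
    stack.append(return_addr)
    eip = target

def ret():
    """ret  ==  pop eip"""
    global eip
    eip = stack.pop()

call(0x8048492, return_addr=0x80484D5)  # enter the function
# ... the function body runs; an overflow now overwrites the saved slot:
stack[-1] = 0x41414141
ret()
print(hex(eip))  # 0x41414141 - the attacker chose where execution resumes
```

This is the whole trick in miniature: nothing in `ret` checks that the popped value is the address `call` originally saved.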
Normally, when a C function is compiled to assembler it follows a fixed pattern consisting of a prologue (which saves the stack frame of the calling function and creates a new frame suited to the local variables of the current function), the code itself, and an epilogue (which restores the stack frame of the caller), although the compiler also adds certain instructions of its own for reasons we will not analyze here; we will take all of this into account when we consider what we can place in our payload. Let’s see as an example the following code:
#include <stdio.h>

int print(char* arg)
{
    printf(arg);
    return 123;
}

int main()
{
    print("Hi, I'm a shi... a test program");
    return 0;
}
After compiling it (gcc test.c -o test), the following asm code is obtained (extracted with objdump -d test -Mintel):
08048492 <print>:
 8048492:  55                    push   ebp
 8048493:  89 e5                 mov    ebp,esp
 8048495:  53                    push   ebx
 8048496:  83 ec 04              sub    esp,0x4
 8048499:  e8 59 00 00 00        call   80484f7 <__x86.get_pc_thunk.ax>
 804849e:  05 62 1b 00 00        add    eax,0x1b62
 80484a3:  83 ec 0c              sub    esp,0xc
 80484a6:  ff 75 08              push   DWORD PTR [ebp+0x8]
 80484a9:  89 c3                 mov    ebx,eax
 80484ab:  e8 a0 fe ff ff        call   8048350 <printf@plt>
 80484b0:  83 c4 10              add    esp,0x10
 80484b3:  b8 7b 00 00 00        mov    eax,0x7b
 80484b8:  8b 5d fc              mov    ebx,DWORD PTR [ebp-0x4]
 80484bb:  c9                    leave
 80484bc:  c3                    ret
We proceed to analyze
push ebp
mov ebp,esp
Prologue: the old ebp is saved, and the value of esp is placed in ebp.
push ebx
For some reason the program modifies ebx later, so it saves its value here and restores it before leaving the function.
sub esp,0x4
The prologue finishes by creating the stack frame: space is left between ebp and esp by decreasing esp according to the space needed for local variables or whatever else. The frame is now 4 bytes.
call 80484f7 <__x86.get_pc_thunk.ax>
This gets in eax the address of the next instruction to be executed, it would be like a
mov eax, eip, but it turns out that instruction is illegal.
add eax,0x1b62
Related to the above statement, we’ll see what they do and how they relate.
sub esp,0xc
For some reason the frame is enlarged by 12 bytes.
push DWORD PTR [ebp+0x8]
Accesses the function's argument, which we find on the stack at ebp+8 because the calling convention is cdecl (while x86_64 employs fastcall).
mov ebx,eax
Finally, compilers often perform nonsense.
call 8048350 <printf@plt>
Finally printf() is called, passing as argument the string that was passed to us (the argument is already pushed, as we saw two instructions ago).
add esp,0x10
Get rid of the frame.
mov eax,0x7b
Equivalent to
return 123;, as functions return their value in eax.
mov ebx,DWORD PTR [ebp-0x4]
The previously saved value of ebx is restored.
leave
Equivalent to
mov esp, ebp ; pop ebp
It's the function epilogue; the caller's stack frame is restored.
ret
We return to the caller function.
We could have analyzed the main() function as well, but that one is a bit special. Now consider the following vulnerable program:
#include <stdio.h>
#include <string.h>

void print(char* arg)
{
    char buf[128];
    strcpy(buf, arg);
    printf("%s\n", buf);
}

int main(int argc, char** argv)
{
    if(argc < 2) return 1;
    print(argv[1]);
    return 0;
}
The problem is that strcpy copies without limit until it finds a '\x00' (NUL) byte.
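That behavior can be modeled in a few lines (a sketch in Python, with memory as a flat bytearray and the saved EIP placed right after the buffer; on the real stack the saved ebx/ebp sit in between, so the exact offset differs):

```python
# A 256-byte slab of "stack": buf occupies [0, 128), the saved EIP sits at 128.
stack_mem = bytearray(256)
stack_mem[128:132] = (0x080484B0).to_bytes(4, "little")  # legitimate return address

def strcpy(mem, dst, src):
    """No bounds check: copy bytes until (and including) the first NUL."""
    i = 0
    while src[i] != 0:
        mem[dst + i] = src[i]
        i += 1
    mem[dst + i] = 0

payload = b"A" * 200 + b"\x00"   # longer than the 128-byte buffer
strcpy(stack_mem, 0, payload)

saved_eip = int.from_bytes(stack_mem[128:132], "little")
print(hex(saved_eip))  # 0x41414141 - the A's reached the return address
```

Nothing stops the copy at byte 128, so the bytes that land on the saved-EIP slot are whatever the "user" supplied.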
We can see with a debugger like gdb how we gain control of the program flow by exceeding the size of buf: we introduce 500 A's, knowing that buf has size 128.
$ gcc b.c -o vuln -fno-stack-protector -D_FORTIFY_SOURCE=0 -m32
$ gdb -q ./vuln
Reading symbols from ./vuln...(no debugging symbols found)...done.
(gdb) run `perl -e 'print "A"x500'`
Starting program: /home/arget/vuln `perl -e 'print "A"x500'`A
Program received signal SIGSEGV, Segmentation fault.
0x41414141 in ?? ()
(gdb)
As you can see, at some point the program attempted to execute code at 0x41414141; 0x41 is the hexadecimal value of the character 'A', showing that we overwrote the saved return address that is collected by the
ret instruction. In the next post we will see how to take advantage of this control.
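To anticipate that next step slightly: once we control the saved EIP, a payload is usually built as padding up to the return-address slot followed by the chosen address in little-endian. A sketch (the 140-byte offset is hypothetical; the real offset must be found with the debugger for each binary):

```python
import struct

def build_payload(offset, new_eip):
    """Padding up to the saved return address, then the address we want
    ret to jump to, packed as a little-endian 32-bit value."""
    return b"A" * offset + struct.pack("<I", new_eip)

payload = build_payload(140, 0xDEADBEEF)
print(len(payload))        # 144
print(payload[-4:].hex())  # 'efbeadde' - little-endian 0xdeadbeef
```

Fed as argv[1] to the vulnerable program, the last four bytes of this payload are what ret would pop into EIP.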
Pop up Very Very Urgent - JSP-Servlet
Pop up Very Very Urgent Respected Sir/Madam,
I am R.Ragavendran.. I got your reply.. Thank you very much for the response. Now I am sending... with the selected Employee ID and Name. I went through your coding but I dont know to apply
Very Very Urgent -Image - JSP-Servlet
Very Very Urgent -Image Respected Sir/Madam,
I am... with some coding, its better..
PLEASE SEND ME THE CODING ASAP BECAUSE ITS VERY VERY URGENT..
Thanks/Regards,
R.Ragavendran... Hi friend,
Code
very urgent
very urgent **
how to integrate struts1.3
,ejb3,mysql5.0 in jboss server with
myeclipse IDE
Help Very Very Urgent - JSP-Servlet
Help Very Very Urgent Respected Sir/Madam,
I am sorry..Actually... requirements..
Please please Its Very very very very very urgent...
Thanks/Regards,
R.Ragavendran..
Hi friend.. Hi friend,
Read for more... Employee ID and Name. I went through your coding but I dont know to apply it in my
Radio Buttons in DB Very Urgent - JSP-Servlet
Radio Buttons in DB Very Urgent Respected Sir/Madam,
I am R.Ragavendran.. I got your reply.. Thank you very much for the response. Now I am sending... Very Urgent..
Please send
James Server and JBoss - JavaMail
James Server and JBoss Hi Sir/Madam,
How to set class path for James server and Jboss servers to run programs with clear examples. Please send the reply very urgent...=GetXmlHttpObject();
if (xmlhttp==null)
{
alert ("Your browser does...("Microsoft.XMLHTTP");
}
return null;
}
Your results will be displayed here
I am getting Undefined in Text Box Very Urgent - JSP-Servlet
I am getting Undefined in Text Box Very Urgent Respected Sir/Madam,
I am R.Ragavendran.. Thanks for your superb reply. I got the coding. But I find.../Regards,
R.Ragavendran.. Hi friend,
Here is the
Ple help me its very urgent
Ple help me its very urgent Hi..
I have one string
1)'2,3,4'
i want do like this
'2','3','4'
ple help me very urgent... know exactly the cause of the problem. Anyway i am sending you the link for your
Creating a web service that connects to the database - WebSevices
had time to look into my problem.
Than you once again for your help.
Kind...Creating a web service that connects to the database Hello,
Good... Chishimba Hi,
class RelativeIndentPasteCommand(sublimeplugin.TextCommand):
    def run(self, view, args):
        view.runCommand('insertInlineSnippet', ['$PARAM1', sublime.getClipboard()])
You could almost get away with just view.runCommand('insertInlineSnippet', [sublime.getClipboard()]), but then the contents of the clipboard will be interpreted as a snippet, which is probably a bad idea.
It's also worth noting that there are a couple of extra things that copy/paste do in Sublime that aren't accessible via getClipboard/setClipboard:
- An extra flag is stored in the clipboard indicating if it should be pasted as a stand alone line. This is used to make Ctrl+C, Ctrl+V function as 'duplicate line' when the selection is empty.
- The syntax of the text copied to the clipboard is also stored there for meta info, so when pasting into a new buffer, the syntax will be applied.
String Manipulation
ASCII Characters
Null Character
The null character (represented by '\0', 0x00, or a constant defined as NULL) is used to terminate strings and arrays (essentially the same thing anyway). Most string-based commands will automatically insert the null character at the end of a string, while memory commands won't. Special care has to be taken to make sure the memory set aside for a string can accommodate the null character (when talking in bytes, the number of ASCII characters + 1).
Carriage Return And New Line
These two characters are used to begin a new line of text. The carriage return moves the cursor back to the start of the line, and the new line shifts the cursor down one line. These characters are reminiscent of the typewriter days. The carriage return is inserted into code using \r, and a new line using \n. The standard order is to write the carriage return first, and then the new line: \r\n.
char stringBuff[100];

// Nicely formatted text
snprintf(stringBuff, sizeof(stringBuff), "This is a line of text.\r\n");
snprintf(stringBuff, sizeof(stringBuff), "This is a second line of text.\r\n");

// Muddled text
snprintf(stringBuff, sizeof(stringBuff), "This is a line of text.");
snprintf(stringBuff, sizeof(stringBuff), "This text will be muddled with the first because there is no carriage return or new line.\r\n");
Case Switching
The case of ASCII characters can be switched in code by inverting the 5th bit. It can also be done by exclusive ORing with the space character.
printf("%c", 'A' ^ ' '); // stdout: a
printf("%c", 'a' ^ ' '); // stdout: A
Strings
Strings in C are arrays where each element of the array holds an ASCII character. When they are defined using double quotes, they are called a string literal. Normally these are 8-bit elements representing the standard ASCII format. The arrays can be defined first and then ASCII characters assigned to them, or the ASCII values can be assigned when the array is defined, as follows:
This will define a 7-byte string: 6 bytes to hold “abc123” and then a 7th to terminate the string with \0 (0x00). Here are two ways of writing this:
char* myString = "abc123";   // pointer to a string literal
char myString[] = "abc123";  // array initialised from a string literal
Special Characters
Special characters can be added to strings using the escape character \ followed by a single identifier.

Typically, both the carriage return and new line characters are used for making a new line (and in that order). This is normally appended at the end of strings to be printed as \r\n.
Finding The Length
Use the strlen() function provided by string.h:

#include <stdio.h>
#include <string.h>

int main() {
    char my_string[] = "Hello";
    printf("%zu\n", strlen(my_string));
    return 0;
}
// Prints "5"
Copying
strcpy() is a standard library function for copying the contents of one C string to another.
Concatenating
Unlike many higher level languages, you cannot just concatenate C “strings” together like so: my_string_1 + my_string_2 (remember, they are just arrays of characters!). Instead you have to use the strcat() function.
C Number To String Functions
printf() (And Its Variants)

printf() can be a very taxing function on the processor, and may disrupt the real-time deadlines of code (especially relevant to embedded programming). It is a good idea to keep printf() function calls away from high-priority interrupts and control loops, and instead use them in background tasks that can be pre-empted (either by interrupts or by higher-priority threads when running an RTOS).
printf()
printf() is the most commonly used string output function. It is a variadic function (it takes a variable number of arguments; note that this is not the same as function overloading, which is something that C does not support).
On Linux, this will print the string to the calling terminal window. Most embedded systems do not support printf() as there is no “standard output” (although this can be re-wired to, say, a UART). Instead, in embedded applications, printf variants like sprintf() are more common.
If you want to print an already-formulated string using printf (with no additional arguments to be inserted), do not use the syntax printf(msg). Instead, use the format printf("%s", msg).
char* msg = "Example message";

// This is the incorrect and unsafe way of printing
// an already-formulated string, and the compiler
// should give you a warning.
printf(msg);

// The correct way of printing an already-formulated
// string
printf("%s", msg);
The printf() function takes format specifiers which tell the function how you want the numbers displayed.
Most C compiler installations include standard C libraries for manipulating strings. Common ones are stdio.h (the printf() family) and string.h (string copy, concatenate, and friends), included into a C file using the syntax #include <stdio.h> or #include <string.h>. Most of these functions rely on null-terminated strings to function properly. Some of the most widely used ones are shown below.
itoa()
itoa() is a widely implemented but non-standard extension to the C programming language. Although widely implemented, it is not ubiquitous: GCC on Linux does not support it (and GCC has a huge share of the C compiler space). Even though it is not specified in the C programming standard, it is confusingly included via stdlib.h, as it complements the existing functions in that header. It is typically defined as:
char * itoa(int value, char * str, int base);
Usage:
#include <stdio.h>
#include <stdlib.h>

int main() {
    int number = 436;
    char buffer[10];
    itoa(number, buffer, 10);
    printf("%s", buffer);
}
itoa() can cause undefined behaviour if the buffer is not large enough to hold the string representation of the passed-in integer. If you have a restricted range of integers that are provided to itoa(), you can quite easily determine how big the buffer should be. If it could be any integer, you need a buffer that can handle INT_MIN (and a trailing null character). A safer alternative (that is also portable) to itoa() is to use snprintf(buffer, sizeof(buffer), "%d", value).

Another good reason to abandon itoa() is that it is not supported in C++.
C String To Number Functions
atof()
atof() is a historic way of converting a string to a double-precision float (yes, even though the function has f in its name, it actually returns a double).
The biggest let-down with atof() is that you cannot distinguish between the text input "0.0" and the case where there is no valid number to convert. This is because atof() returns 0.0 if it can't find a valid float number in the input string. For example:

// Result is the same (0.0) in both of these cases.
double result;
result = atof("0.0");
result = atof("blah blah");
There is a better alternative, strtod(), which allows you to test for this condition, if your system supports it.
strtod()
This stands for “string to double”. It is a safer way of converting strings to doubles than atof(). The code example below shows how to use strtod() to convert a string to a double and also how to check that the input string contained a valid number. Newer versions of C/C++ also provide strtof(), which performs the same function but returns a float rather than a double.
#include <stdio.h>
#include <stdlib.h>

double ConvertStringToDouble(const char* input)
{
    char* numEnd;

    // Do the conversion
    double output = strtod(input, &numEnd);

    // Bounds checking
    if (numEnd == input) {
        printf("Error: Input to strtod was not a valid number.\r\n");
    }

    return output;
}
strtol()
strtol() behaves very similarly to strtod(), except it parses the string into a long int rather than a double.
Memory manipulation functions are also useful for string manipulation. Some of the useful functions are shown below.
Decoding/Encoding Strings
strtok() is a standard function which is useful for decoding strings. It splits a string up into a subset of strings, where the string is split at specific delimiters which are passed into the function. It is useful when decoding ASCII-based (aka human-readable) communication protocols, such as a command-line interface or the NMEA protocol. Read more about it on the C++ Reference site.
getopt() is a standard function for parsing the command-line arguments passed into main() as an array of strings. It is included in the GCC glibc library. The files are also downloadable locally here (taken from GCC glibc v2.17).
Conversion Specifiers
Conversion specifiers determine how printf() interprets the input variables and displays their values as a string. Conversion specifiers are added in the input string after the % character (optional format specifiers may be added between the % symbol and the conversion specifier).

Although the basic behaviour is defined in the ANSI standard, the exact implementation of printf() is likely to vary slightly between C libraries.

If you actually want to print the % character rather than use it to specify a conversion, use two of them (printf("%%"); // prints "%").
Format Specifiers
There are plenty of format specifiers that you can use with printf() which change the way the text is formatted. Format specifiers go between the % symbol and the conversion specifier mentioned above. They are optional, but if used, have to be added in the correct order.

I have come across embedded implementations of printf() which do not support string padding (e.g. %5s or %-6s). This includes the version used with the PSoC 5.
Portable size_t Printing
For portability, you can use the z format specifier when you want to print a value of type size_t (e.g. the number returned by sizeof()).

// Portable way of printing a size_t number
// (which is what sizeof() returns), using the 'z' modifier.
printf("Size of int = %zu.\r\n", sizeof(int));
This was introduced in ISO C99. Z (upper-case z) was a GNU extension predating this standard addition and should not be used in new code.
snprintf()
snprintf() (and sprintf()) have plenty of format specifiers you can add to format the output number exactly how you want it.
The length parameter specifies the length of the variable (for example, you can have 8-bit, 16-bit, 32-bit, etc. integers). In the Keil C51 compiler, the b or B length specifier is used to tell sprintf() that the number is 8-bit. If you don't use this, 8-bit numbers stored in uint8_t or char data types will not print properly. You do not have to use the b/B specifier when using GCC.
// Print a float to 2 decimal places
snprintf(txBuffer, sizeof(txBuffer), "Float to 2dp: %.2f", float1);
// Print an 8-bit number in the Keil C51 compiler, using the length specifier 'b' ('B' can be used also)
uint8 eightBitNum = 23;
sprintf(txBuffer, "This is an 8-bit number: %bu", eightBitNum);
Related Content:
- pybind11
- Python SWIG Bindings From C/C++
- Passing A C++ Member Function To A C Callback
- STM32CubeIDE
- A Tutorial On geopandas
The GPIO: General Purpose Input/Output lets you interface your Raspberry Pi with the outside world, making it a powerful interactive device for just $40-$50.
This Instructable will show you how to install the GPIO package on your Raspberry Pi and how to wire up a simple push button circuit with an LED.
I use the command-line and Python for this, no web browser or GUI.
Before you do this Instructable, make sure you have your Raspberry Pi ready for action. My Ultimate Raspberry Pi Configuration Guide covers how to do this in detail.
You will want to have an internet connection to download the packages and probably use ssh.
Step 1: Gather Your Components
Components
* Raspberry Pi
* Cobbler breakout board with cable — you can order this from Adafruit for $8.
* Breadboard
* Wires — breadboard or otherwise
* standard LED
* 270 Ohm resistor
* 1K resistor
* 10K resistor
* Push button
* USB power for RPI (not pictured)
* Monitor + keyboard (not pictured)
Tools
* wire-stripper
* small diagonal snips
* multimeter for checking continuity
Step 2: Assemble Your Circuit
We will have a simple push button that will turn an LED on when the button is pressed and off when it is released. I know, it's not super-exciting, but think of this as a building block for digital input and output.
Here is the schematic and breadboard diagram.
The LED is straightforward: a 270 Ohm resistor is needed to light up the LED from a 3.3V input.
The concept is simple: the LED is an output of pin 4 and the button is an input of pin 22. The button circuit has a pull-down resistor. Pin 22 will be pulled down to zero through the 10K resistor when the button is inactive. When it gets pressed, the 3.3V power from the Raspberry Pi goes into the pin 22 input, bypassing the 10K resistor.
Without the 10K resistor, you'll have a floating input and will get erratic behavior in your code. The 1K resistor protects the Raspberry Pi from too much current.
Step 3: Putting It on the Breadboard
After doing the wiring diagram in Fritzing, I laid out the components on a real breadboard. Strip the wires and use the diagonal snips for a clean-looking breadboard.
This is what it looks like with the Cobbler breakout board before I attach the ribbon cable to the Pi.
Even with a simple circuit like this, double-check your components to make sure you're not shorting your circuit or anything else like that.
Step 4: Connect to the Pi
Use the breakout cable and connect the cobbler to your Raspberry Pi.
You will want to be connected to the internet to download the latest packages via Wifi or an Ethernet cable.
Power up your Raspberry Pi.
Step 5: Install the GPIO Package
Just in case, I suggest installing them again, which involves first installing the Python Development toolkit that RPi.GPIO uses and then the GPIO package itself.
Type in:

sudo apt-get install python-dev

Then, for the GPIO libraries:

sudo apt-get install python-rpi.gpio

If asked for confirmation on either of these, press Y.
You'll get the usual Linux garble like this. In my case, the Python development tools were not installed but GPIO was.
Step 6: Blink an LED in Python
sudo nano gpio_blink.py

And enter in this script. The advantage with using ssh is that you can just copy-and-paste the script. Alternatively, I have this on a GitHub repository.
# gpio_blink.py
# by Scott Kildall ()
# LED is on pin 4, use a 270 Ohm resistor to ground
import RPi.GPIO as GPIO
import time
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.OUT)
state = True
# endless loop, on/off for 1 second
while True:
    GPIO.output(4, True)
    time.sleep(1)
    GPIO.output(4, False)
    time.sleep(1)
Ctrl-X, Y to save the file.
Now run the script:
sudo python gpio_blink.py

Note: you have to invoke sudo, as root access is required for the GPIO library.
You should have a blinking LED on your circuit. We're not using the switch at all at this point.
Ctrl-C to exit the script
how it works
- Pin 4 is an output pin. We alternate between high (True) and low (False) for 1 second at a time.
- I turn warnings off because I was getting errors in my script because the GPIO wasn't properly closed (this shouldn't matter and I found it an annoyance).
- Setting the mode to BCM means that the pin numbers etched on the Raspberry Cobbler match the ones that you are using in your code.
Step 7: Python Script for Switch-activated LED
Type in:
sudo nano gpio_switch.py

You can also refer to the GitHub repository, if need be.
# gpio_switch.py
# by Scott Kildall ()
# LED is on pin 4, use a 270 Ohm resistor to ground
# Switch is on pin 22, use a pull-down resistor (10K) to ground
import RPi.GPIO as GPIO
import time
GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.OUT)
GPIO.setup(22,GPIO.IN)
# input of the switch will change the state of the LED
while True:
    GPIO.output(4, GPIO.input(22))
    time.sleep(0.05)
Ctrl-X, Y to save the file
(note the indentation of the lines inside the while loop)
Now run the script:
sudo python gpio_switch.py

If you press the switch, the LED should turn on and when you let it go, it should turn off. Congratulations, you have an input and output into your Raspberry Pi.
how it works
This is like the previous script, except that we are designating Pin 22 as an input pin. We set the output of Pin 4 to match the input of Pin 22. When Pin 22 goes high, so does Pin 4. The time.sleep(0.05) is there to account for any contact bounce in the button.
Step 8: Troubleshooting...or Done!
Here is the GitHub repository for some of the GPIO scripts that I reference, and am adding to.
Adafruit, as always has some excellent material on the GPIO and Raspberry Pi.
Troubleshooting
There's a lot that can go awry with electronics and interfacing with the outside world. Here are some suggestions for things to look for:
- If you're getting no flashing LED, or the switch doesn't seem to work, check your breadboard connections. Make sure you have the right components in the right places. Test for continuity with a multimeter.
- Make sure your LED is facing the right direction. Flat side goes to ground. Longer lead is oriented to the positive electron flow.
- You can test your LED and the GPIO connection by connecting the LED to the 3.3V output of the GPIO and to the 270 Ohm resistor and to the ground of the GPIO.
I hope this was helpful
For more Raspberry Pi projects and other programmatic artworks, you can find me here: @kildall or
Scott Kildall
11 Discussions
3 years ago
I used a soldering iron. Way cheaper. And not as hard as it seems.
Now I have my led strip connected, and it works great.
3 years ago
I need this PDF to use this feature for my project, but I can't afford it. Can anyone mail me this PDF?
4 years ago on Step 6
4 years ago on Introduction
good catch! my circuit diagram has the LED flipped. I'll have to correct this at some point soon.
4 years ago on Introduction
Hey! Thanks a lot for the tutorial! It's an indescribable feeling when you make it work! I just have a quick comment:
I am no expert on electronics, however, from other tutorials I have taken, I have noticed that in the breadboard diagrams, the anode (+) leg is usually depicted as the longer "curved" leg. however, in your diagram this leg is connected to ground (also the LED triangle in schematic points towards the output). Thus, the first time I tried it it didn't work but once I changed the direction of the legs it worked like magic!.
5 years ago on Introduction
Good point. I'm usually stopping my Python scripts with a ctrl-C rather than an event like a keydown, and so the GPIO.cleanup() doesn't get called. My workaround is setting the warnings to false. But, as you point out, the GPIOs will stay at the state they were last in, leaving the LEDs on — better coding practice would be to properly close the ports.
Reply 4 years ago on Introduction
The simplest way to ensure some cleanup code gets run is to wrap your while loop in a try/finally block.
If you specifically want to catch Ctrl+C but nothing else, you can use try/except to catch KeyboardInterrupt.
Finally, if you want a more robust way to ensure some cleanup gets done, take a look at lines 312 through 322 in this revision of my Procrastinator's Timeclock utility....
First, it assigns the cleanup function to sys.exitfunc so it'll get called on clean exit, then it hooks all the common POSIX signals the kernel might send to ensure that they trigger a clean exit.
* SIGINT is sent by Ctrl+C, which INTerrupts the program.
* SIGTERM is sent by things like task managers politely asking your program to TERMinate itself.
* SIGHUP is sent by the kernel when your program loses the terminal it's attached to.
* SIGQUIT is sent by Ctrl+\, which asks the program to quit and dump core if core dumping is enabled.
(My code technically doesn't handle SIGQUIT properly, since it doesn't dump core, but I've never seen people using it properly.)
Finally, it also uses try/except to catch Ctrl+C, just to play it safe. (And it's still not perfect because PyGTK doesn't let the program attach a handler for "lost connection to X server")
If you want a version that's a bit more complicated, but also works on Windows, look at this revision:...
(It checks that each signal exists before hooking it)
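The try/finally plus signal-hooking pattern described in this thread can be sketched as follows. GPIO.cleanup() is replaced here by a stand-in function so the sketch runs on any machine (not just a Pi), and the interrupted main loop is simulated by raising KeyboardInterrupt:

```python
import signal
import sys

cleaned_up = False

def cleanup():
    """Stand-in for GPIO.cleanup() - records that cleanup ran."""
    global cleaned_up
    cleaned_up = True

def main_loop(iterations):
    """Stand-in for the blink/switch polling loop."""
    for _ in range(iterations):
        pass
    raise KeyboardInterrupt  # simulate the user pressing Ctrl+C

def run(iterations=3):
    # A polite kill (SIGTERM) exits cleanly, which also triggers 'finally'.
    signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))
    try:
        main_loop(iterations)
    except KeyboardInterrupt:
        pass  # Ctrl+C is expected; fall through to cleanup
    finally:
        cleanup()

run()
print(cleaned_up)  # prints True
```

On a real Pi you would put the GPIO polling loop inside run() and call GPIO.cleanup() from cleanup().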
Reply 4 years ago on Introduction
Thanks for sharing!
This is beyond what I've tried to do so far. But, I'll check out the SIG___ interrupts that you've pointed out here.
And hopefully others can make use of your code on GitHub as well.
Reply 4 years ago on Introduction
Oh, I meant to mention where the name "SIGHUP" comes from.
Back in the days when computers cost a fortune and users had to share them, it meant that the modem connecting the terminal to the computer had Hung UP.
5 years ago
sometimes if you don't properly close the gpios they stay at the state they were last set. when I was making a robot with my pi, and I quickly cancelled the script so it wouldn't run into something, it continued on its merry way. simply put GPIO.cleanup() at the end of your program, and you should stop getting that warning. great little instructable though :)
Reply 5 years ago
great little instructable* | https://www.instructables.com/id/Raspberry-Pi-Python-scripting-the-GPIO/ | CC-MAIN-2019-43 | refinedweb | 1,897 | 73.07 |
Matplotlib is a great Python package for the visualization of your data. In this tutorial, you will learn how to create subplots in Matplotlib through a step-by-step guide.
Steps by Steps to Create Subplots in Matplotlib
Step 1: Learn the Syntax to create matplotlib subplot.
The method for creating matplotlib subplots is pyplot.subplots(). There are some arguments you have to pass to create subplots.
matplotlib.pyplot.subplots(nrows=1, ncols=1, sharex=False, sharey=False)
Parameters Description
nrows : It denotes the number of rows of the subplot. The default value is 1.
ncols: Default value is 1. It denotes the number of columns for the plot.
sharex : It allows you to share the x-axis for all the subplots. False is the default value.
sharey : It allows you to share the y-axis value for the subplots. The default value is False.
You will understand it more in further examples.
Step 2: Import all the necessary libraries.
The first and most important step is to import all the necessary python libraries. Here in this tutorial, I will use only NumPy with matplotlib.
import numpy as np import matplotlib.pyplot as plt
Step 3: Create data to create subplots in matplotlib.
Now let’s create the different data points that I will use to create subplots in Matplotlib. Four data sets are used in total; each example below defines the data it needs.
Step 4: Create Subplots in Matplotlib
The data points are already created above use them here.
Single Subplot
# data 1
x1 = [10,20,30,40,50]
y1 = [1,2,3,4,5]

fig, ax = plt.subplots()
ax.plot(x1, y1)
ax.set_title('Single plot')
plt.show()
You can see that nothing is passed as an argument to the subplots() method. That means one row and one column, i.e. a single plot in the figure. The method returns fig and ax. When you run the above program you will get a single plot.
Create two Subplots in Matplotlib
To create two matplotlib subplots you have to pass 1 and 2 (or 2 and 1) as the number of rows and the number of columns. If you pass 1 and 2, the plots are placed side by side (one row, two columns); if you pass 2 and 1, they are stacked (two rows, one column).
# data1
x1 = [10,20,30,40,50]
y1 = [1,2,3,4,5]

# data2
x2 = np.array([1,2,3,4,5])
y2 = np.cos(x2**2)

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot(x1, y1)
ax2.plot(x2, y2)
ax1.set_title('First plot')
ax2.set_title('Second plot')
plt.show()
Here ax1 and ax2 receive the two axes from the array returned by the subplots() method.
The above were examples of one and two subplots. Let's create multiple subplots for the four data sets. The method is the same as above; only the arguments differ.
Create Four Subplots in Matplotlib
Just as you created two plots by passing 1 and 2 as arguments to the subplots() method, pass 2 and 2 to plot the four matplotlib subplots in a 2x2 grid.
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)
ax1.plot(x1, y1)
ax2.plot(x2, y2)
ax3.plot(x3, y3)
ax4.plot(x4, y4)
ax1.set_title('First plot')
ax2.set_title('Second plot')
ax3.set_title('Third plot')
ax4.set_title('Fourth plot')
plt.show()
You can see in the above code that each of the variables ax1, ax2, ax3, and ax4 receives one axes object from the returned array. That lets you draw each individual figure on its own axes.
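The sharex and sharey parameters from Step 1 can be sketched like this (using the non-interactive Agg backend so it runs without a display; limits set on one axes propagate to the axes sharing it):

```python
import matplotlib
matplotlib.use("Agg")          # headless backend, no window needed
import matplotlib.pyplot as plt

# 2x2 grid where all subplots share both axes
fig, axs = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True)

axs[0, 0].plot([1, 2, 3], [1, 4, 9])
axs[0, 0].set_xlim(0, 5)       # propagates to the other three subplots

print(axs.shape)               # (2, 2)
print(axs[1, 1].get_xlim())    # (0.0, 5.0) thanks to sharex=True
```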
You can read more about it in the official matplotlib subplots documentation.
Previous page: Fixture Arguments Next page: Target objects Parent page: Important concepts
Flow Mode

If DoFixture or SequenceFixture is loaded by the first table on a test page, it takes over the entire test page processing. This allows you to split the test table into multiple tables, to make the test more readable.
!|info.fitnesse.fixturegallery.DoFixtureTest|
|fill|10|times with|x|

|check|char at|4|x|

|set list|A,B,C,D|

|show|char at|2|
When the table is split like this, then each sub-table is first matched to an enclosing flow fixture method. If no such method exists, a fixture class is loaded as it would normally be, without the flow mode. This allows you to write tests that use DoFixture methods to implement story-like workflow, but still use the benefits of a more structured approach when that makes sense. DoFixture even helps you with that: if the flow mode method returns an instance of a fit.Fixture class or one of its subclasses, the rest of the active table is processed as if it were written for that fixture. So you can use a DoFixture method to initialise other fixtures, prepare the context or clean up. If the flow method returns an array or list of objects, the table is analysed as if it was written for an ArrayFixture. Here is an example that uses an embedded fixture and array conversion:
!|info.fitnesse.fixturegallery.DoFixtureFlowTest|

!3 The following table is executed by an embedded !-SetUpFixture-!

|prepare players|
|player|post code|balance|
|John Smith|SW4 66Z|10.00|
|Michael Jordan|NE1 8AT|12.00|

!3 The following table is executed by an !-ArrayFixture-!

|list players|
|name|post code|balance|
|John Smith|SW4 66Z|10.00|
|Michael Jordan|NE1 8AT|12.00|
Java Source Code
package info.fitnesse.fixturegallery;

import info.fitnesse.fixturegallery.domain.Player;
import java.util.List;

import fit.Fixture;
import fitlibrary.DoFixture;

public class DoFixtureFlowTest extends DoFixture{
    public Fixture preparePlayers(){
        return new SetUpFixtureTest();
    }
    public List<Player> listPlayers(){
        return Player.players;
    }
}
.NET Source Code
using System;
using System.Collections.Generic;
using System.Text;
using fit;

namespace info.fitnesse.fixturegallery
{
    public class DoFixtureFlowTest : fitlibrary.DoFixture
    {
        public Fixture PreparePlayers()
        {
            return new SetUpFixtureTest();
        }
        public List<Player> ListPlayers()
        {
            return Player.players;
        }
    }
}

Parent page: Important concepts
From: Jeff Garland (jeff_at_[hidden])
Date: 2007-04-06 12:42:00
Beman Dawes wrote:
>.
>
I'd do it, but I just don't have time right now. Here are the basic details.
BTW, this is detailed in Chapter 10 of Efficient C++ (Bulka/Mayhew) circa 2000
-- and I'm guessing they didn't invent it -- so, this isn't exactly a new
idea. In their case the rational was performance tuning.
Here's the basic idea:
1) in the header, conditionally include the implementation file (eg: .ipp)
2) in the implementation file, conditionally inline methods
3) in the .cpp file include .ipp and turn off the conditional inlining
For system it appears that it's essentially error_code.hpp/cpp we need to
convert. So here's what needs to be done. In error_code.hpp we add something
like this at the near the end of the file:
//make system all inline
#ifdef BOOST_USE_OS_NATIVE_HEADERS_INLINE
#include <error_code.ipp>
#endif
Now, in the IPP file you'll need a macro to optionally define inline:
//code to define inline
#ifdef BOOST_USE_OS_NATIVE_HEADERS_INLINE
#define BOOST_INLINE inline
#else
#define BOOST_INLINE
#endif
and the methods need to be declared like this:
BOOST_INLINE
int
errno_ed( const error_code & ec )
To be safe at the end of the .ipp you'll probably want this:
#undef BOOST_INLINE
Note that in system.cpp it looks like there are some namespace scope const
arrays and things that probably need to be turned into enums or in general
cleaned up for header-only use.
So the result is that when someone does
#define BOOST_USE_OS_NATIVE_HEADERS_INLINE 1
#include <boost/system.hpp>
the effect is all inlined code -- they don't need to link the library.
Now the error_code.cpp effectively just includes the .ipp with the macro
undefined. So it builds the functions into the library because BOOST_INLINE
resolves to an empty string.
So, for those that just use the usual:
#include <boost/system.hpp>
they will need to link the library.
That's the essence of the technique...I'm sure someone with an hour or 2 to
goof around with it could implement it on system.
Jeff
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2007/04/119589.php | CC-MAIN-2019-26 | refinedweb | 374 | 68.16 |
from numpy import *
import pylab

# data to fit
x = random.rand(6)
y = random.rand(6)

# fit the data with a 4th degree polynomial
z4 = polyfit(x, y, 4)
p4 = poly1d(z4) # construct the polynomial

z5 = polyfit(x, y, 5)
p5 = poly1d(z5)

xx = linspace(0, 1, 100)
pylab.plot(x, y, 'o', xx, p4(xx),'-g', xx, p5(xx),'-b')
pylab.legend(['data to fit', '4th degree poly', '5th degree poly'])
pylab.axis([0,1,0,1])
pylab.show()

Let's see the two polynomials:
Thursday, July 14, 2011
Polynomial curve fitting
We have already seen how to fit a given set of points by minimizing an error function; now we will see how to find a fitting polynomial for the data using the function polyfit provided by numpy:
Hi nick, maybe I don't get exactly what you need. By the way, if you need more information about polynomial interpolation you can find out more by looking up 'Lagrange interpolation' in any algebra book or at . If you need a tool to work with symbolic calculation, I suggest maxima and () the symbolic matlab toolbox.
how do you add polynomials?
Hi ! I have read your article with much interest.
In your previous comment, you speak about "Lagrange interpolation" and I remember using this method on a series to get "intermediate" values. I used scipy.interpolate.lagrange for this but this function needs to be given an extract of the series. Indeed, the length of its parameters gives the degree of the polynomial (minus 1 I guess).
How does polyfit compare to interpolate.lagrange? Does it select the best points to create what I call the "sub series"? Thanks for your answer :)
Pierre.
Hello Pierre, thanks for your comment.
I suggested that nick begin with Lagrange interpolation because I thought he was looking for something that doesn't involve an optimization process.
As you noticed, the Lagrange interpolation is exact while the polyfit is not. Indeed, polyfit finds the coefficients of a polynomial that fits the data in a least squares sense.
There's no point selection in polyfit. | http://glowingpython.blogspot.it/2011/07/polynomial-curve-fitting.html | CC-MAIN-2017-30 | refinedweb | 351 | 57.37 |
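As a quick illustration of the least-squares point: with n+1 points, a degree-n polyfit reproduces the data exactly, while a lower degree only minimizes the squared error:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 2.0, 5.0])

p3 = np.poly1d(np.polyfit(x, y, 3))  # 4 points, degree 3: exact interpolation
p1 = np.poly1d(np.polyfit(x, y, 1))  # degree 1: least-squares straight line

print(np.allclose(p3(x), y))         # True
print(np.allclose(p1(x), y))         # False: residual error remains
```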
Hi Will, Mark,
On 2016/3/30 0:31, Mark Rutland wrote:
> On Tue, Mar 29, 2016 at 05:18:38PM +0100, Will Deacon wrote:
>> > On Thu, Mar 24, 2016 at 10:44:31PM +0800, Shannon Zhao wrote:
>>> > >;
>>> > > + }
>> >
>> > Hmm, but xen_initial_domain() is false when xen isn't being used at all,
>> > so it feels to me like this is a bit too far-reaching and is basically
>> > claiming the "/hypervisor" namespace for Xen. Couldn't it be renamed to
>> > "xen,hypervisor" or something?
>> >
>> > Mark, got any thoughts on this?
> The node has a compatible string, "xen,xen" per [1], which would tell us
> absolutely that xen is present. I'd be happy checking for that
> explicitly.
>
I think xen_initial_domain() actually reflects the result of
fdt_find_hyper_node. If the compatible string "xen,xen" doesn't exist,
xen_initial_domain() will return false, and whatever the current node
is, the above check will return 1 since the device tree is not empty.
> In patch 11 fdt_find_hyper_node checks the compatible string. We could
> factor that out into a helper like is_xen_node(node) and use it here
> too.
>
I don't think so because we already check the compatible string before
and we could get the result simply via xen_initial_domain().
Thanks,
--
Shannon
I observed this bug when trying to modify libio/tst-fopenloc.c (which uses fputws) to use test-skeleton.c (which un-buffers stdout).
When writing to an unbuffered stream, `fputws' seems to error out as soon as it encounters a UTF-8 character that takes up more than one byte.
Reproducer: The below test case should print "Platform 9¾" to stdout and finish successfully. It does not. fputws prints "Platform 9" then returns -1:
#include <locale.h>
#include <stdio.h>
#include <wchar.h>
#include <string.h>
#include <errno.h>
int
main (void)
{
wchar_t buf[100] = L"Platform 9";
FILE *fp;
int r;
setlocale (LC_ALL, "en_US.UTF-8");
buf[10] = L'\xbe'; /* unicode code point for "3/4" */
buf[11] = L'\n';
buf[12] = L'\0';
setvbuf (stdout, NULL, _IONBF, 0);
if (fputws (buf, stdout) < 0)
return 1;
return 0;
}
The problem is that an unbuffered stream has only room for a single byte buffer for code conversion.
(In reply to Andreas Schwab from comment #1)
> The problem is that an unbuffered stream has only room for a single byte
> buffer for code conversion.
(In reply to Arjun Shankar from comment #3)
> .
It's a QoI issue.
>]
That's right, it is an implementation detail.
Because UTF-8 is a variable-length encoding, you would need to print a character as soon as you complete it; the trigger is not the buffer size but the fact that the stream is unbuffered.
I don't know how much more work it would be to enhance the file stream support to do this when unbuffered.
For example, printing ASCII, should just print right away, it's unbuffered, and that's valid UTF-8. It should not be a naive implementation where you might have 4 ASCII characters waiting in a buffer before being printed.
Does that answer your question?
(In reply to Carlos O'Donell from comment #4)
> It's a QoI issue.
Yes.
> For example, printing ASCII, should just print right away, it's unbuffered,
> and that's valid UTF-8. It should not be a naive implementation where you
> might have 4 ASCII characters waiting in a buffer before being printed.
Agreed. I incorrectly used the word 'buffer' in my comment. What I was trying to say is that ideally, the encoder should have enough "internal memory" to convert the internal representation of a wide character into the one used by the current encoding scheme.
Which brings us back to the question of:
* we do this?:
> Thus tst-skeleton.c needs to be enhanced to allow the test to define the
> size of the stdout buffer it needs and then that can be allocated and passed
> to setvbuf
* or this?:
> enhance the file stream support to do this when unbuffered.
(In reply to Arjun Shankar from comment #5)
> * we do this?:
> > Thus tst-skeleton.c needs to be enhanced to allow the test to define the
> > size of the stdout buffer it needs and then that can be allocated and passed
> > to setvbuf
This.
The quality of the implementation should not stop you from attaining your goal of adding tst-skeleton support to tests. That will enhance testing across the board. However, now you are armed with enough information to justify *why* you want this buffer size hack in tst-skeleton.
This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "GNU C Library master sources".
The branch, master has been updated
via 04b76b5aa8b2d1d19066e42dd1a56a38f34e274c (commit)
from 4c6da7da9fb1f0f94e668e6d2966a4f50a7f0d85 (commit)
Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.
- Log -----------------------------------------------------------------;h=04b76b5aa8b2d1d19066e42dd1a56a38f34e274c
commit 04b76b5aa8b2d1d19066e42dd1a56a38f34e274c
Author: Andreas Schwab <schwab@suse.de>
Date: Thu Oct 30 12:18:48 2014 +0100
Don't error out writing a multibyte character to an unbuffered stream (bug 17522)
-----------------------------------------------------------------------
Summary of changes:
ChangeLog | 8 ++++++++
NEWS | 2 +-
libio/Makefile | 2 +-
posix/tst-fnmatch3.c => libio/tst-fputws.c | 21 +++++++++++++++------
libio/wfileops.c | 25 ++++++++++++++++++++-----
5 files changed, 45 insertions(+), 13 deletions(-)
copy posix/tst-fnmatch3.c => libio/tst-fputws.c (71%)
Fixed in 2.21.
This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "GNU C Library master sources".
The branch, master has been updated
via a0d424ef9d7fc34f7d1a516f38c8efb1e8692a03 (commit)
via 8b460906cdb8ef1501fa5dcff54206b201e527d5 (commit)
via fa13e15b9a5cc49c9c6dee33084c3ff54d48e50e (commit)
via 0e426475a70800b6a17daa7a8ebbafeabfcbc022 (commit)
from 4f646bce1cae4031bfe7517e4793f1edc1a15220 (commit)
Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.
- Log -----------------------------------------------------------------;h=a0d424ef9d7fc34f7d1a516f38c8efb1e8692a03
commit a0d424ef9d7fc34f7d1a516f38c8efb1e8692a03
Author: Siddhesh Poyarekar <siddhesh@redhat.com>
Date: Tue Dec 16 16:53:05 2014 +0530
Fix 'array subscript is above array bounds' warning in res_send.c
I see this warning in my build on F21 x86_64, which seems to be due to
a weak check for array bounds. Fixed by making the bounds check
stronger.
This is not an actual bug since nscount is never set to anything
greater than MAXNS. The compiler however does not know this, so we
need the stronger bounds check to quieten the compiler.;h=8b460906cdb8ef1501fa5dcff54206b201e527d5
commit 8b460906cdb8ef1501fa5dcff54206b201e527d5
Author: Arjun Shankar <arjun.is@lostca.se>
Date: Tue Dec 16 15:21:01 2014 +0530
Modify libio/tst-fopenloc.c to use test-skeleton.c
This test would earlier fail when run under test-skeleton.c due to
bug #17522 in 'fputws'. That bug is now fixed and so this test may
be modified.;h=fa13e15b9a5cc49c9c6dee33084c3ff54d48e50e
commit fa13e15b9a5cc49c9c6dee33084c3ff54d48e50e
Author: Arjun Shankar <arjun.is@lostca.se>
Date: Tue Dec 16 15:19:51 2014 +0530
Modify stdlib/tst-bsearch.c to use test-skeleton.c
This test used to define a 'struct entry' that conflicts with the
definition in search.h included in test-skeleton. The struct is
now renamed 'item'.;h=0e426475a70800b6a17daa7a8ebbafeabfcbc022
commit 0e426475a70800b6a17daa7a8ebbafeabfcbc022
Author: Arjun Shankar <arjun.is@lostca.se>
Date: Tue Dec 16 15:18:46 2014 +0530
Modify stdio-common/tst-fseek.c to use test-skeleton.c
This test needs a TIMEOUT longer than the default 2 seconds since it
sleeps twice for a second each.
-----------------------------------------------------------------------
Summary of changes:
ChangeLog | 15 +++++++++++++++
libio/tst-fopenloc.c | 7 +++++--
resolv/res_send.c | 2 +-
stdio-common/tst-fseek.c | 8 ++++++--
stdlib/tst-bsearch.c | 27 +++++++++++++++------------
5 files changed, 42 insertions(+), 17 deletions(-)
----------------------------------------------------------------------- | https://sourceware.org/bugzilla/show_bug.cgi?id=17522 | CC-MAIN-2017-39 | refinedweb | 1,068 | 66.84 |
The GNU style is the default, so it is not necessary to specify ‘-gnu’ to obtain this format, although doing so will not cause an error.

The Kernighan & Ritchie style is obtained with the ‘-kr’ option.

The original Berkeley indent style is obtained by specifying ‘-orig’ (or ‘--original’, using the long option name).

The Linux style is used in the linux kernel code and drivers; code generally has to follow the Linux coding style to be accepted. It is selected with the ‘-linux’ option.
Various programming styles use blank lines in different places.
indent has a number of options to insert or delete blank lines in
specific places.
The ‘-bad’ option causes
indent to force a blank line after
every block of declarations. The ‘-nbad’ option causes
indent not to force such blank lines.
The ‘-bap’ option forces a blank line after every procedure body. The ‘-nbap’ option forces no such blank line.
The ‘-bbb’ option forces a blank line before every boxed comment (See section Comments.) The ‘-nbbb’ option does not force such blank lines.
The ‘-sob’ option causes
indent to swallow optional blank
lines (that is, any optional blank lines present in the input will be
removed from the output). If the ‘-nsob’ is specified, any blank
lines present in the input file will be copied to the output file.
The ‘-bad’ option forces a blank line after every block of declarations. The ‘-nbad’ option does not add any such blank lines.
For example, given the input
indent -bad produces
and
indent -nbad produces
The ‘-bap’ option forces a blank line after every procedure body.
For example, given the input
indent -bap produces
and
indent -nbap produces
No blank line will be added after the procedure foo.
By default
indent will line up identifiers, in the column
specified by the ‘-di’ option. For example, ‘-di16’ makes
things look like:
Using a small value (such as one or two) for the ‘-di’ option can be used to cause the identifiers to be placed in the first available position; for example:
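An illustrative layout (written by hand here to show the intended effect, not generated by indent itself):

```c
/* With -di16, declaration identifiers start in column 16: */
int             foo;
char           *bar;

/* With a small value such as -di1, they follow the type directly: */
int baz;
char *qux;
```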
The value given to the ‘-di’ option will still affect variables which are put on separate lines from their types, for example ‘-di2’ will lead to:
If the ‘-bc’ option is specified, a newline is forced after each comma in a declaration. For example,
With the ‘-nbc’ option this would look like
The ‘-bfda’ option causes a newline to be forced after the comma separating the arguments of a function declaration. The arguments will appear at one indention level deeper than the function declaration. This is particularly helpful for functions with long argument lists. The option ‘-bfde’ causes a newline to be forced before the closing bracket of the function declaration. For both options the 'n' setting is the default: -nbdfa and -nbdfe.
For example,
With the ‘-bfda’ option this would look like
With, in addition, the ‘-bfde’ option this would look like
You can use the ‘-T’
option to tell
indent the name of all the typenames in your
program that are defined by
typedef. ‘-T’ can be specified
more than once, and all names specified are used. For example, if your
program contains
you would use the options ‘-T CODE_ADDR -T COLOR’.
The ‘-brs’ or ‘-bls’ option specifies how to format braces in struct declarations. The ‘-brs’ option formats braces like this:
The ‘-bls’ option formats them like this:
Similarly to the structure brace ‘-brs’ and ‘-bls’ options, the function brace options ‘-brf’ or ‘-blf’ specify how to format the braces in function definitions. The ‘-brf’ option formats braces like this:
The ‘-blf’ option formats them like this:
One issue in the formatting of code is how far each line should be
indented from the left margin. When the beginning of a statement such
as
if or
for is encountered, the indentation level is
increased by the value specified by the ‘-i’ option. For example,
use ‘-i8’ to specify an eight character indentation for each
level. When a statement is broken across two lines, the second line is
indented by a number of additional spaces specified by the ‘-ci’
option. ‘-ci’ defaults to 0. However, if the ‘-lp’ option is in effect, continuation lines are lined up with the parenthesis they continue from instead. For example, here is a piece of continued code with ‘-nlp -ci3’ in effect:
With ‘-lp’ in effect the code looks somewhat clearer:
When a statement is broken in between two or more paren pairs (...), each extra pair causes the indentation level extra indentation:
The option ‘-ipN’ can be used to set the extra offset per paren. For instance, ‘-ip0’ would format the above as:
indent assumes that tabs are placed at regular intervals of both
input and output character streams. These intervals are by default 8
columns wide, but (as of version 1.2) may be changed by the ‘-ts’
option. Tabs are treated as the equivalent number of spaces.
The indentation of type declarations in old-style function definitions is controlled by the ‘-ip’ parameter. This is a numeric parameter specifying how many spaces to indent type declarations. For example, the default ‘-ip5’ makes definitions look like this:
For compatibility with other versions of indent, the option ‘-nip’ is provided, which is equivalent to ‘-ip0’.
ANSI C allows white space to be placed on preprocessor command lines
between the character ‘#’ and the command name. By default,
indent removes this space, but specifying the ‘-lps’ option
directs
indent to leave this space unmodified. The option ‘-ppi’
overrides ‘-nlps’ and ‘-lps’.
This option can be used to request that preprocessor conditional statements can be indented by to given number of spaces, for example with the option ‘-ppi 3’
becomes

Case labels are handled separately; use the ‘-cli’ parameter for those. For example, with the option ‘-il 1’
becomes
With the option ‘-ln’, or ‘--line-length n’, it is possible to specify the maximum length of a line of C code. There are two options that allow one to interfere with the algorithm that determines where to break a line.
The ‘-bbo’ option causes GNU
indent to prefer to break
long lines before the boolean operators
&& and
||. The
‘-nbbo’ option causes GNU
indent not have that
preference. For example, the default option ‘-bbo’ (together
with ‘--line-length60’ and ‘--ignore-newlines’) makes code
look like this:
Using the option ‘-nbbo’ will make it look like this:
The default ‘-hnl’, however, honours newlines in the input file by giving them the highest possible priority to break lines at. For example, when the input file looks like this:
then using the option ‘-hnl’, or ‘--honour-newlines’, together with the previously mentioned ‘-nbbo’ and ‘--line-length60’, will cause the output not to be what is given in the last example but instead will prefer to break at the positions where the code was broken in the input file:
The idea behind this option is that lines which are too long, but are already
broken up, will not be touched by GNU
indent. Really messy code
should be run through
indent at least once using the
‘--ignore-newlines’ option.
(‘.indent.pro’).
While an attempt was made to get
indent working for C++, it
will not do a good job on any C++ source except the very simplest.
indent does not look at the given ‘- ‘-v’
option) when
indent is turned off with
/* *INDENT-OFF* */.
/ ‘indent.texinfo’ and ‘indent.info’, and near the
end of ‘indent.1’.
Here is a list of all the options for
indent, alphabetized by
short option. It is followed by a cross key alphabetized by long option.
Force blank lines after the declarations.
See section Blank lines.
Force blank lines after procedure bodies.
See section Blank lines.
Force blank lines before block comments.
See section Blank lines.
Prefer to break long lines before boolean operators.
See section Breaking long lines.
Force newline after comma in declaration.
See section Declarations.
Put braces on line after
if, etc.
See section Statements.
Put braces on line following function definition line.
See section Declarations.
Indent braces n spaces.
See section Statements.
Put braces on the line after
struct declaration lines.
See section Declarations.
Put braces on line with
if, etc.
See section Statements.
Put braces on function definition line.
See section Declarations.
Put braces on
struct declaration line.
See section Declarations.
Put a space between
sizeof and its argument.
See section Statements.
Put comments to the right of code in column n.
See section Comments.
Indent braces after a case label N spaces.
See section Statements.
Put comments to the right of the declarations in column n.
See section Comments.
Put comment delimiters on blank lines.
See section Comments.
Cuddle while of
do {} while; and preceding ‘}’.
See section Statements.
Cuddle else and preceding ‘}’.
See section Statements.
Continuation indent of n spaces.
See section Statements.
Case label indent of n spaces.
See section Statements.
Put comments to the right of
#else and
#endif statements in column n.
See section Comments.
Put a space after a cast operator.
See section Statements.
Set indentation of comments not to the right
of code to n spaces.
See section Comments.
Break the line before all arguments in a declaration.
See section Declarations.
Break the line after the last argument in a declaration.
See section Declarations.
If -cd 0 is used then comments after declarations are left justified
behind the declaration.
See section Declarations.
Put variables in column n.
See section Declarations.
Format comments in the first column.
See section Comments.
Do not disable all formatting of comments.
See section Comments.
Use GNU coding style. This is the default.
See section Common styles.
Prefer to break long lines at the position of newlines in the input.
See section Breaking long lines.
Set indentation level to n spaces.
See section Indentation.
Set offset for labels to column n.
See section Indentation.
Indent parameter types in old-style function
definitions by n spaces.
See section Indentation.
Use Kernighan & Ritchie coding style.
See section Common styles.
Set maximum line length for non-comment lines to n.
See section Breaking long lines.
Set maximum line length for comment formatting to n.
See section Comments.
Use Linux coding style.
See section Common styles.
Line up continued lines at parentheses.
See section Indentation.
Leave space between ‘#’ and preprocessor directive.
See section Indentation.
Do not force blank lines after declarations.
See section Blank lines.
Do not force blank lines after procedure bodies.
See section Blank lines.
Do not prefer to break long lines before boolean operators.
See section Breaking long lines.
Do not force newlines after commas in declarations.
See section Declarations.
Don't put each argument in a function declaration on a separate line.
See section Declarations.
Do not put comment delimiters on blank lines.
See section Comments.
Do not cuddle
} and the
while of a
do {} while;.
See section Statements.
Do not cuddle
} and
else.
See section Statements.
Do not put a space after cast operators.
See section Statements.
See section Declarations.
Do not format comments in the first column as normal.
See section Comments.
Do not format any comments.
See section Comments.
Do not prefer to break long lines at the position of newlines in the input.
See section Breaking long lines.
Zero width indentation for parameters.
See section Indentation.
Do not line up parentheses.
See section Statements.
Do not put space after the function in function calls.
See section Statements.
Do not put a space after every '(' and before every ')'.
See section Statements.
Put the type of a procedure on the same line as its name.
See section Declarations.
Do not put a space after every
for.
See section Statements.
Do not put a space after every
if.
See section Statements.
Do not put a space after every
while.
See section Statements.
Do not put the ‘*’ character at the left of comments.
See section Comments.
Do not swallow optional blank lines.
See section Blank lines.
Do not force a space before the semicolon after certain statements.
Disables ‘-ss’.
See section Statements.
Use spaces instead of tabs.
See section Indentation.
Disable verbose mode.
See section Miscellaneous options.
Use the original Berkeley coding style.
See section Common styles.
Do not read ‘.indent.pro’ files.
See section Invoking
indent.
Insert a space between the name of the
procedure being called and the ‘(’.
See section Statements.
Specify the extra indentation per open parentheses '(' when a statement is broken. See section Statements.
Preserve access and modification times on output files. See section Miscellaneous options.
Specify the indentation for preprocessor conditional statements. See section Indentation.
Put a space after every '(' and before every ')'.
See section Statements.
Put the type of a procedure on the line before its name.
See section Declarations.
Put a space after each
for.
See section Statements.
Put a space after each
if.
See section Statements.
Put a space after each
while.
See section Statements.
Indent braces of a struct, union or enum N spaces.
See section Statements.
Put the ‘*’ character at the left of comments.
See section Comments.
Swallow optional blank lines.
See section Blank lines.
On one-line
for and
while statements,
force a blank before the semicolon.
See section Statements.
Write to standard output.
See section Invoking
indent.
Tell
indent the name of typenames.
See section Declarations.
Set tab size to n spaces.
See section Indentation.
Use tabs. This is the default.
See section Indentation.
Enable verbose mode.
See section Miscellaneous options.
Output the version number of
indent.
See section Miscellaneous options.
Here is a list of options alphabetized by long option, to help you find the corresponding short option.
‘+’ is being superseded by ‘--’ to maintain consistency with the POSIX standard.
This document was generated by david on December, 15 2008 using texi2html 1.78.
C
Table of Contents
- 1. Variables and Basic Types
- 2. Language Specification
- 3. idioms
- 4. Cast
- 5. Compound Literals
- 6. extern
- 7. restrict
- 8. volatile
- 9. Operator Precedence
- 10. Unix Library
1 Variables and Basic Types
1.1 Types
- When using char, explicitly specify unsigned char or signed char; otherwise the signedness varies from system to system.
long long >= long >= int >= short
1.1.1 How to choose a type
- If a value cannot be negative, use unsigned
- Use int and long long rather than long, because long usually has the same size as int.
- Explicitly specify the signedness of char
- Generally use double rather than float
1.1.2 Static
static member function: can use ClassName::function() directly
static member variable: only one object shared by all instances of the class
static variable: A static variable inside a function keeps its value between invocations. A static global variable or a function is "seen" only in the file it's declared in
static functions: Static functions are not visible outside of the C file they are defined in.
1.1.3 转义
1.1.4 字面值类型
前缀
后缀
对象是指一块能存储数据并具有某种类型的内存空间。
1.1.5 初始化
int a=0; int a={0}; int a{0}; //C++11 int a(0);
定义于任何函数体之 外 的 内置类型变量 被初始化为0. 定义于任何函数体之 内 的 内置类型变量 不初始化。
string默认为空串。
extern int i; // 声明 extern int i=1; // 定义,不可在函数体内部
2 Language Specification
2.1 varargs functions (variadic functions)
a function to take a variable number or type of arguments
Receiving of arguments: You actually need both the number and type of the arguments to retrieve.
- create
va_list
- initialize using
va_start
- access using multiple
va_arg. The first gives you the first arg, and so on.
- call
va_end
The macro prototypes (defined in
stdarg.h). Note these are macros, not functions.
void va_start (va_list ap, last-required)
type va_arg (va_list ap, type)
void va_end (va_list ap)
An real world example:
; }
2.2 Variadic Macros
#define eprintf(format, ...) fprintf (stderr, format, __VA_ARGS__)
The
__VA_ARGS__ will be replaced with whatever in
....
The variable argument is completely macro-expanded before it is inserted into the macro expansion,
just like an ordinary argument.
3 idioms
typedef enum { false, true } bool;
4 Cast
4.1 In a word
static_cast: ordinary type conversions.
dynamic_cast: converting pointers/references within an inheritance hierarchy.
reinterpret_cast: low-level reinterpreting of bit patterns. Use with extreme caution.
const_cast: casting away const/volatile. Avoid this unless you are stuck using a const-incorrect API.
4.2 C style cast: DO NOT USE
4.3 Static Cast.
4.4 Const Cast.
4.5 Dynamic Cast
dynamic_cast is almost in the case of a pointer,
or throw
std::bad_cast in the case of a reference.
dynamic_cast has some limitations, though.
It doesn't work if there are multiple objects of the same type in the inheritance hierarchy
(the so-called 'dreaded diamond') and you aren't using virtual inheritance.
It also can only go through public inheritance -
it will always fail to travel through protected or private inheritance.
This is rarely an issue, however, as such forms of inheritance are rare.
4.6 Reinterpret Cast an aligned pointer.
4.7 C style cast
C casts are casts using (type)object or type(object). A C-style cast.
5 Compound Literals
A compound literal looks like a cast containing an initializer. Its value is an object of the type specified in the cast, containing the elements specified in the initializer; it is an lvalue.
5.1 Example
struct foo {int a; char b[2];} structure;
The constructing:
structure = ((struct foo) {x + y, 'a', 0});
5.2 more examples
char **foo = (char *[]) { "x", "y", "z" };
5.3 static
Value in the compound literals must be constant.
static struct foo x = (struct foo) {1, 'a', 'b'}; static int y[] = (int []) {1, 2, 3}; static int z[] = (int [3]) {1};
6 extern
- extern means extend the visibility of a variable or function.
- Declaration can be many times, but definition can only appear once.
- Definition will allocate memory, but declaration will never allocate memory.
6.1 Function
For function declare and define, `extern` is added by compiler by default. So use or not use `extern` for functions are equivalent.
6.2 Variable
define a variable
int a;
declare a variable
extern int a;
This can be used so that in this file, a refer to the variable actually defined and allocated in another file. The definition of the variable in the other file does not have extern, but it is still available by this file …
An exception: extern a variable with initialization
extern int a = 8;
This will be treated as definition.
6.3 extern "C"
extern "C" makes a function-name in C++ have 'C' linkage (compiler does not mangle the name) so that client C code can link to (i.e use) your function using a 'C' compatible header file that contains just the declaration of your function.
- Since C++ has overloading of function names and C does.
6.4 syntax
- can specify "C" linkage to each individual declaration/definition explicitly
- use a block to group a sequence of declarations/definitions to have a certain linkage:
extern "C" void foo(int); extern "C" { void g(char); int i; }
7 restrict
The restrict keyword is a declaration of intent given by the programmer to the compiler.
It says that for the lifetime of the pointer, only it or a value directly derived from it (such as pointer + 1) will be used to access the object to which it points.
This limits the effects of pointer aliasing, aiding optimizations.
If the declaration of intent is not followed and the object is accessed by an independent pointer, this will result in undefined behavior.
8 volatile
When your code works without compiler optimization, but fails when you turn optimization on, perhaps it is because of `volatile`.
If compiler found that around a variable, no one change it, it will do some optimization based on this. Maybe remove unnecessary code which it thinks will never execute.
The keyword tells the compiler that the value of the variable may change at any time. It may change unexpectedly, so DO NOT optimize the code when you compiler think it would not change.
8.1 syntax
declare a variable(both are equalvalent)
volatile int foo; int volatile foo;
declare pointers to volatile varialbes(common usage)
volatile uint8_t *pReg; uint8_t volatile *pReg;
volatile pointers to non-volatile data(very rare)
int * volatile p;
volatile pointer to volatile variable(also rare)
int volatile * volatile p;
8.2 When to use it
8.2.1 Memory-mapped peripheral registers
The register's value may change by hardware. But in the code, compiler cannot see it, so it may assume it is constant, and do some optimization.
uint8_t *pReg = (uint8_t) 0x1234; while (*pReg==0) {}
Since no `volatile`, the assembly looks like:
mov ptr, #0x1234 mov a, @ptr loop: bz loop
To fix it, use volatile to declare it:
uint8_t volatile *pReg = (uint8_t volatile *)0x1234
The assembly will be:
```asm mov ptr, #0x1234 loop: mov a, @ptr bz loop ```
8.2.2 Global variables modified by an ISR(Interrupt Service Routine)
Compiler will of course not know about interrupt. So when the global file can be modified by interrupt, we must tell it.
int volatile etx_rcvd = FALSE; void main() { while(!ext_rcvd) {} } interrupt void rx_isr(void) { if (ETX == rx_char) { etx_rcvd = TRUE; } }
If no volatile, compiler will think the while condition always be true, thus never go out of the loop.
8.2.3 Global variables accessed by multiple tasks within a multi-threaded application
Compiler doesn't find the variable change near the code it is defined, so it may assume it is unchanged. While another task in the same time may change it, it is just like the interrupt.
9 Operator Precedence
9.1 notes
9.1.1 For
?:
the middle of the conditional operator (between ? and :)
is parsed as if parenthesized: its precedence relative to
?: is ignored
9.1.2 For C++
The operand of sizeof can't be a C-style type cast:
the expression
sizeof (int) * p is unambiguously interpreted as
(sizeof(int)) * p,
but not
sizeof((int)*p).
9.1.3 In c++ table, the
?: is also in 14 cell
10 Unix Library
sleep
#include <unistd.h> unsigned int sleep(unsigned int seconds); // seconds int usleep(useconds_t useconds); // microseconds int nanosleep(const struct timespec *rqtp, struct timespec *rmtp);
There's no implementation of
clock_gettime as in Unix, the following serves as an portable solution.
#ifdef __MACH__ #include <sys/time.h> #define CLOCK_REALTIME 0 #define CLOCK_MONOTONIC 0 //clock_gettime is not implemented on OSX int clock_gettime(int /*clk_id*/, struct timespec* t) { struct timeval now; int rv = gettimeofday(&now, NULL); if (rv) return rv; t->tv_sec = now.tv_sec; t->tv_nsec = now.tv_usec * 1000; return 0; } #endif
use it like this
double get_time() { struct timespec ts; ts.tv_sec=0; ts.tv_nsec=0; clock_gettime(CLOCK_REALTIME, &ts); double d = (double)ts.tv_sec + 1.0e-9*ts.tv_nsec; return d; } | http://wiki.lihebi.com/c.html | CC-MAIN-2017-22 | refinedweb | 1,452 | 64.91 |
WCF RESTful service and Azure AppFabric Service Bus working together allow thin client (browser) to access a remote machine; and this is achieved with remarkably small amount of code. The dataflow of the proposed solution is depicted in Fig. 1.
The WCF REST acts as lightweight Web server, whereas the Service Bus provides remote access solving problems of firewalls and dynamic IP addresses. A small application presented in this article employs the two technologies. The application ScreenShare running on a target machine allows the user to view and control the target machine's screen from his/her browser. Neither additional installations nor <object> tag components (like applets or ActiveX) are required on the client machine.
<object>
Client's browser depicts an image of a remote target machine screen and allows to operate simple controls (like buttons and edit boxes) on the image, i.e. implement to some extent active screen sharing. Since this article is just an illustration of concept, the sample implements minimum features. It consists of ScreenShare.exe console application and a simple RemoteScreen.htm file. The application provides:
The Web server implemented as self-contained RESTful WCF service is able to receive ordinary HTTP requests generated [in our case] by a browser. The service exposes endpoints to transfer text (text/html) and image (image/jpeg) data to client. The endpoints have Relay bounding to support communication via Azure AppFabric Service Bus. Such a communication enables connection between browser and WCF service on target machine running in different networks with dynamic IP addresses behind their firewalls.
To run the sample, you need first to open an account for Azure AppFabric, create a service namespace (see MSDN or e.g. [1], chapter 11) and install Windows Azure SDK on target machine. In App.config file in Visual Studio solution (and in configuration file ScreenShare.exe.config for demo), values of issuerName and issuerSecret attributes in <sharedSecret> tag and placeholder SERVICE-NAMESPACE should be replaced with appropriate values of your Azure AppFabric Service Bus account. Then build and run the project (or run ScreenShare.exe in demo). Application writes in console URL by which it may be accessed:
<sharedSecret>
SERVICE-NAMESPACE
Output of ScreenShare application is shown in Fig. 2.
Now you can access the screen of the target machine from your browser using the above URL.
As it was stated above, this sample is a merely a concept illustration, and its functionality is quite limited. For now, the client can get an image of a target machine screen in his/her browser and control target machine by left mouse click and text inserted in browser and transmitted back to target machine. After left mouse click was performed at image in browser, coordinates of the clicked point are transferred to target machine and ScreenShare application emulates the click at the same coordinates in the target machine screen. To transmit text, the client should move the cursor to browser image of selected control (make sure that the image is in focus), type text (as the first symbol is typed, a text box appears to accommodate the text) and press "Enter" button. The inserted text will be transmitted to the target machine and placed into appropriate control. After click or text transfer, the image in browser will be updated to reflect changes on target machine.
ScreenShare configuration file contains two parameters controlling its behavior. ScreenMagnificationFactor changes size of screen image before its transmit to browser. This parameter of type double can be anything between 0 and 1 providing trade-off between quality and size of screen image transmitted. Client can switch between full size image (ScreenMagnificatioFactor = 1) and current magnification with "Space" button in the browser. SleepBeforeScreenCaptureMs sets pause (in ms) between action on the target machine screen and its capture to catch the change caused by the action.
ScreenShare application may serve several browsers. Its WCF service behavior is defined with ConcurrencyMode.Single. This definition ensures that client calls are served sequentially and synchronization is not required. ScreenShare application supports session with its state for each client (browser). For now, the session state consists of one parameter ScreenMagnificationFactor since the current image size may vary for different clients.
So far ScreenShare was tested with Internet Explorer (IE), Chrome, Firefox and browsers of Android, Windows Phone 7 (WP7), iPhone and Simbian mobile devices. Pictures below show fragments of original target machine screen (Fig. 3) and its images in IE (Fig. 4) and emulators of Android (Fig. 5) and WP7 (Fig. 6).
The best results were achieved with IE and Chrome. Firefox shows the initial picture but then fails to react (RemoteScreen.htm file should be changed to make Firefox update source of image). Mobile browsers function properly but image size should be adjusted, and it is difficult to insert text. Actually htm file and probably application itself should be tuned to serve at least the most popular browsers (the application is aware of browser type).
The sample in this article by no means compete with sophisticated screen sharing applications. But the power of the described approach is that theoretically any browser without additional installation may be used to operate your desktop remotely. The browser ability is limited with just the developer's HTML/JavaScript skills.
The sample of this article may be improved in many ways. HTML code may be updated in order to support more browsers. Additional control options like e.g. more mouse events and text editing capabilities may be added. It is also possible to share (or rather automate) just one application instead of the entire screen. For application automation purposes, the approach of this article may be combined with code injection technique [2]. Improvements may be also made in image update. Various possibilities to update only changed part of the screen image should be considered. This can be achieved e.g. either with split of the entire screen area on several regions or with creation of overlapping regions for updated parts of the screen. Yet another improvement can be carried out by converting ScreenShare console application into Windows service. This is not a trivial task as it seems at the first glance since Windows service runs in a different desktop and therefore by default captures screen of its desktop and not of primary desktop.
The RESTful WCF + Azure AppFabric Service Bus approach allows user to access and control remote machine with any browser from any place. Very little code and development efforts are required to achieve this goal. The code sample illustrates this approach.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
maq_rohit wrote:I would expected to give more details on azure fabric in your article.
maq_rohit wrote:I was expected to get more details specially on the behaviour of code as well.
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/133311/RESTful-WCF-Azure-AppFabric-Service-Bus-Access-to?msg=4337924 | CC-MAIN-2015-35 | refinedweb | 1,166 | 55.03 |
Better late than never. If you missed the earlier discussions in this magazine related to deep learning tools and techniques, it’s time to refresh your learning.
Deep learning is not a new topic as such. It’s been around since 2014. However, the techniques and tools are evolving at a constant pace. A majority of them are free and open source software or FOSS. The investments are also considerable across organisations and geographies. As the practitioners say, we are in the prolonged summer phase of deep learning; so it’s never too late to jump into a beautiful paradigm of DL.
AI, ML and DL
Note: DL is the new kid on the block, a subset of ML and AI.
There’s always been some confusion about artificial intelligence (AI), machine learning (ML) and deep learning (DL). We often use these terms interchangeably. However, they actually are not the same. Artificial intelligence (AI) can be considered a superset of everything. AI can be viewed as automation of cognitive processes. Machine learning (ML), on the other hand, deals with the development of models out of training data sets. ML is a subset of AI. Deep learning (DL), in turn, is a subset of ML. It deals with the layering of models that have parameters, with a weight associated with each layer, and with continuously differentiable functions in a multi-dimensional space.
- Figure 1: The relationship between AI, ML and DL (Courtesy: Nikhil Gupta, Hackernoon)
- Figure 2: Turning scrambled paper to unscrambled form through a network of nodes
Intuition and technique
Before getting into the techniques and tools of DL, let’s look at intuition first. Let us suppose that you have a crushed piece of paper in your hand that you are slowly and steadily opening out and flattening (or ‘unscrambling’) to its original form. DL is a similar process. When the initial data is available, it’s in a scrambled and tangled form. You can hardly get anything out of it. The DL process consists of a set of deep learning layers arranged as a network of nodes, called a neural network. Each layer has a parameter that we call the weight of the layer. There can be thousands of such layers and weights in a typical DL neural network. The DL process is all about opening out and flattening a crushed ball of paper. By passing the ‘scrambled’ data (in the form of an input tensor data structure) through the layer of the neural network, each layer is unscrambled a bit to finally flatten it out to a perfectly flat form. This final form often is in the shape of a vector, the 1D tensor.
Throughout the process, the weight of the layers is fine-tuned such that incremental unscrambling takes place. Of course, this process is iterative, and corrections over many iterations are collectively called epochs. Every time we measure the difference between what we want and what we get – the desired state minus the actual one. This gives us what is called the loss function. The sole purpose of the deep learning system is to minimise the loss function to attain the desired accuracy we want our DL model to achieve.
This always involves a feedback loop, where the loss function is minimised in each step by calibrating the weight. Technically speaking, the layer’s weight is adjusted a bit in the direction opposite to that of the gradient of continuous differentiation of the loss function with respect to the layer’s weight. This technique is called back propagation. Finally, the process converges at the end of the epoch and we have achieved our desired goal (often accuracy).
Note: Unscrambling a scrambled paper into a clear, flattened sheet through successive steps is the intuitive approach that DL is based on.
- Figure 3: Back propagation illustrated
- Figure 4: Layers of a DL deployment taxonomy
Important definitions
Now, let’s get familiar with some of the DL related definitions we often encounter.
1. The model consists of a network of layers that have parameters (weights).
2. Layer: output = function
(weight, input)
3. Tensors: Data structures that represent input and output
4. Loss function: diff (expected output, actual output)
5. Differentiability: Gradient G = d (LossFunciont) d(Weight)
6. Gradient descent: Move layer’s weight in the opposite direction of G to reduce the loss function on every step.
7. Back propagation: Loopback through optimiser such that the loss function is minimised and runs through the gradient descent.
8. Evaluation: Measures the success (say, the accuracy of a prediction).
Taxonomy and tools
Before learning about the different tools involved in DL – platforms, languages, software – let us first find out more about the different layers in a typical DL deployment scenario.
Table 1 describes the FOSS tool taxonomy, from bottom to top, in the stack.
Keras installation
Keras can usually be deployed in many ways across platforms – Windows, Linux, Mac and even in the cloud (GCP, AWS). For practising DL programming using Keras, it is perfectly fine to have it running in the laptop with conventional CPUs on Windows as well. However, it is possible to book an EC2 GPU instance in AWS and get the benefit of a GPU. That way, one need not procure dedicated GPU hardware and can use the EC2 instance pay-as-you-go model. (However, for longer usage, the cloud may not necessarily be cost effective.)
Here we are going to describe the steps for the installation of Keras and associated dependencies in Ubuntu and Windows 10 using the underlying CPU of the laptop (no GPU needed).
Installation in Ubuntu
1. To install Python-2, use the following code:
sudo apt-get update sudo apt-get install python-pip python
2. In case you want to install Python-3 and want to make it the default Python, type:
sudo apt-get update sudo apt-get install python3-pip python3 alias python=python3 alias pip=pip3
3. Install Python-specific scientific dependencies required for DL as follows.
a. For various Python math packages like NumPy, use:
sudo pip install matplotlib sudo pip install numpy sudo pip install scipy sudo pip install PyYAML
b. To view the result (various graph viewers), type:
sudo pip install graphviz
4. The code for installing Keras is:
sudo pip install keras
The Keras config file can be found at ~/.keras/keras.json, which has information like backend: tensorflow.
- Figure 5: MNIST data set digit output
- Figure 6: MNIST data set training vs validation plotting
Installation in Windows 10
It is possible to install Keras in Windows 10 also, using Anaconda. The following steps need to be followed.
1. Download Anaconda.
2. While installing Anaconda, you may optionally want to download the VSCode also as the preferred IDE for writing DL programs. However, you can use the Jupyter notebook as well, initially, for writing and running the program, piece-wise.
3. Next launch Anaconda Prompt from the Windows program launcher.
4. Next install Python-3.6:
conda install python=3.6
5. Create a conda environment:
conda env create -q -f environment.yml
6. A typical environment.yml looks like:
name: practice channels: - defaults dependencies: - python=3.7.* - bz2file==0.98 - cython==0.29.* - pip==19.1.* - numpy==1.16.* - jupyter==1.0.* - matplotlib==3.1.* - setuptools==41.0.* - scikit-learn==0.21.* - scipy==1.2.* - pandas==0.24.* - pillow==6.1.* - seaborn==0.9.* - h5py==2.9.* - pytest==5.0.* - twisted==19.2.* - tensorflow==1.13.* // For GPU it will be something like tensorflow-gpu==1.13.* - pip: - keras==2.2.*
1. conda info –envs will show the practice as the environment.
2. conda activate practice
3. jupyter notebook
4. Launch in your favourite browser.
A sample program using Keras
We are now going to look at the MNIST data set digit-detection problem using Jupyter Notebook.
To get an idea about what is inside the MNIST data set, use the following code:
import keras from keras.datasets import mnist from keras import models, layers import numpy as np import matplotlib.pyplot as plt from keras.utils import to_categorical # Reading the mnist dataset which is 28X28 pixel of 70k integer (0-9) RBG scale images # Out of these 70K, I am using 60K to train my deep network, rest 10k will be used for validation/testing (train_images, train_labels), (test_images, test_labels) = mnist.load_data() print (train_images.shape); print (train_labels.shape); print (train_labels); # Lets see the images visually using MatPlot plt.figure() plt.title(‘First Train Image:’); plt.imshow(train_images[0]); plt.colorbar() plt.grid(False) plt.show(); plt.title(‘First Test Image:’); plt.imshow(test_images[0])
The output is as shown in Figure 5.
Building the neural network
We have chosen a simple sequential network with two dense layers.
## Building the TensorFlow Network # As this is a categorical classification (multi-class, single label) problem, going with seq “Dense” sort of deep networks network = models.Sequential(); # First layer is downshaping the 28 x 28 image to a 512 tensor vector, relu is max(0, input) network.add(layers.Dense(512, activation=’relu’, input_shape=(28 * 28,))); # Now, add the last layer, downshaping the output of the previous layer to 10 (i.e. 0-9) ‘digits’. # Remember: The output of this layer is merely a probablistic distribution, hence “softmax”. network.add(layers.Dense(10, activation=’softmax’)) ## let’s see how DL network looks like network.summary()
The output is:
Instructions for updating: Co-locations handled automatically by placer. Model: “sequential_1” _______________________________________ Layer (type) Output shape Param # ======================================= dense_1 (Dense) (None, 512) 401920 _______________________________________ dense_2 (Dense) (None, 10) 5130 ======================================= Total params: 407,050 Trainable params: 407,050 Non-trainable params: 0_______________________________________________________
The compilation of the network can be done as follows:
# Let’s compile the network # Well: compile here means setting up the back propagation, defining the loss function and determining how the model is working ## Optimiser is to reduce the loss function at every step, gradient descent, reducing d(loss_fn)/d (lthe ayer’s weight) # As it’s a categocial classification, picking up usual cat_crossent as loss method # How my network is converging? Lets measure ‘accuracy’ network.compile(optimizer=’rmsprop’, loss=’categorical_crossentropy’, metrics=[‘accuracy’]);
For data preparation, here’s the code snippet:
# Data preparation: To fit to my model train_images = train_images.reshape((60000, 28 * 28)) train_images = train_images.astype(‘float32’) / 255 test_images = test_images.reshape((10000, 28 * 28)) test_images = test_images.astype(‘float32’) / 255 # Sort of encoding such that the value remains b/w [-1, to +1] train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels)
Training can be done as follows:
# Now Training is done with 5 steps “epoch” on training data # Batch size typically power of 2, it’s the per batch processing in an epoch validation_images = train_images[:10000] actual_train_images = train_images[10000:] validation_labels = train_labels[:10000] actual_train_labels = train_labels[10000:] history = network.fit(actual_train_images, actual_train_labels, epochs=5, batch_size=128, validation_data=(validation_images, validation_labels))
The output is:
Train on 50000 samples, validate on 10000 samples Epoch 1/5 50000/50000 [==================================] - 9s 178us/step - loss: 0.2801 - acc: 0.9191 - val_loss: 0.1498 - val_acc: 0.9578 Epoch 2/5 50000/50000 [=================================] - 5s 96us/step - loss: 0.1170 - acc: 0.9656 - val_loss: 0.1070 - val_acc: 0.9675 Epoch 3/5 50000/50000 [==================================] - 5s 100us/step - loss: 0.0773 - acc: 0.9770 - val_loss: 0.0858 - val_acc: 0.9739 Epoch 4/5 50000/50000 [=================================] - 5s 99us/step - loss: 0.0555 - acc: 0.9828 - val_loss: 0.0879 - val_acc: 0.9734 Epoch 5/5 50000/50000 [==================================] - 5s 94us/step - loss: 0.0414 - acc: 0.9874 - val_loss: 0.0812 - val_acc: 0.9767
Note: As you can see, the loss function (val_loss) is reducing at every step whereas the accuracy (val_acc) is increasing at every step. That’s the effectiveness of the training.
Here is the code snippet for training vs validation plotting:
# history object, output of fit methof his_dict = history.history print (his_dict.keys()) # loss plotting: Taining vs Validation loss_values = his_dict[‘loss’] val_loss_values = his_dict[‘val_loss’] epochs = range(1, len(his_dict[‘acc’]) + 1) plt.plot(epochs, loss_values, ‘b’, label=’Training Loss’) plt.plot(epochs, val_loss_values, ‘g’, label=’Validation Loss’) plt.title(‘Training & Validation Loss’) plt.xlabel(‘Epochs’) plt.ylabel(‘Loss’) plt.legend() plt.show() plt.clf() ## clear the plot # Accuracy plotting: Training vs Validation acc_values = his_dict[‘acc’] val_acc_values = his_dict[‘val_acc’] plt.plot(epochs, acc_values, ‘b’, label=’Training Accuracy’) plt.plot(epochs, val_acc_values, ‘g’, label=’Validation Accuracy’) plt.title(‘Training & Validation Accuracy’) plt.xlabel(‘Epochs’) plt.ylabel(‘Loss’) plt.legend() plt.show()
The output is shown in Figure 6.
For testing over test data (10k samples), use the following code:
# Now it’s QA time! Test it over 10k samples. test_loss, test_acc = network.evaluate(test_images, test_labels) print(‘The accuracy I am expecting:’, test_acc);
The output is:
10000/10000 [====================================] - 1s 61us/step The accuracy I am expecting: 0.9752
The code for prediction is given below:
# Prediction section predictions = network.predict(test_images); print (predictions.shape); print (‘Probablistic distribution of the first digit prediction: \n’ + str(predictions[0])); # See what we got? predicted_value = np.argmax(predictions[0]); print (‘Predicted pic: ‘ + str(predicted_value)); print (‘Actual picture under test: ‘ + str(np.argmax(test_labels[0]))); print (‘Bingo!’)
The output is:
(10000, 10) Probablistic distribution of the first digit prediction: [2.10214646e-09 1.07541705e-10 1.82529277e-06 8.29521523e-05 7.31067290e-12 4.73346908e-08 1.90508078e-13 9.99913931e-01 2.09084714e-08 1.17789079e-06] Predicted pic: 7 Actual picture under test: 7 Bingo!
Universal approach
The steps needed to attack and solve any DL problem are listed in Table 2.
- Figure 7: Google Colab GPU settings
- Figure 8: Google Colab Google Drive mount
Using Google Colab
Many of our laptops do not have GPUs, and are purely CPUs. So, what is the recourse? The answer is: Google Colab. It’s an attempt from Google to provide on-demand VMs with GPUs to run your tests. You can potentially mount your Google Drive as well to get the data from it. The steps given below will help you.
1. Log in to colab.research.google.com.
2. Look at File in the top menu.
3. You can write a new notebook or you can import an existing one.
4. You can go to the Edit > Notebook settings, where you can choose the hardware accelerator as the GPU. Refer to Figure 7.
5. You can even mount your Google Drive using the following code. Refer to Figure 8.
from google.colab import drive drive.mount(‘/content/drive’) import tensorflow as tf print (tf.test.gpu_device_name())
Deep learning has been evolving at a tremendous pace since the last couple of years. It is necessary to keep technically abreast with this emerging green field to stay relevant in the technical landscape. This article describes the taxonomy and various tools available at our disposal. While some are proprietary/commercial, there are plenty of FOSS tools around also. One of them is Keras. Familiarity with basic Python scripting is good enough to start with Keras. It abstracts the low-level languages (e.g., CUDA) and platforms (e.g., TensorFlow) beautifully to get a kickstart in this paradigm. It’s easy to install even on your laptops, and on Linux or Windows. A sample program of the MNIST data set digit detection has also been presented to give a glimpse of the anatomy of a DL program. This is just the beginning, and the road ahead is endless and enriching.
Note: The MNIST digit detection example mentioned in this article can be found in the following GitHub link: | https://www.opensourceforu.com/2020/05/deep-learning-the-techniques-and-tools-you-must-know/?amp | CC-MAIN-2021-31 | refinedweb | 2,567 | 58.38 |
25+ C Interview Questions and Answers to Shape your Career
Preparation is the mantra that will lead you to success. Don't wait for an opportunity; create it. Follow this mantra and make every opportunity count with these C interview questions and answers.
"Success doesn't just find you. You have to go out and get it." – Marva Collins
Here, we will help you in all possible ways. These are some latest C interview questions with answers that will guide you on how to crack and deal with interview questions.
So, let’s dive into the ocean of knowledge.
Refer to this link if you want to practice some tricky interview questions for C programming.
Keeping you updated with latest technology trends, Join DataFlair on Telegram
1. Popular C Interview Questions and Answer
Q.1 How would you print the string “DataFlair” without using a semicolon to terminate the printf() statement?
Ans. You can print a string without terminating the printf() function by a semicolon with the help of if statement:
#include<stdio.h>
int main()
{
    if(printf("DataFlair"))
    {
    }
    return 0;
}
Output: DataFlair
Q.2 Consider the following variable declarations:
float f;
long l;
short s;
What would be the data type of the expression: f - l / s
Ans. The resultant expression would be of the float data type in C.
This follows C's usual arithmetic conversions: the short operand is promoted, the long/short division yields a long, and that long is then converted to float for the subtraction.
Q.3 How would you print all the numbers between 60 to 80 without using any loops?
Ans. We can easily print all the numbers in a given range with the help of recursion in C.
Here is a code that would help you solve this problem:
#include <stdio.h>
int main()
{
    static int n = 60;
    if(n <= 80)
    {
        printf("%d ", n++);
        main();
    }
    return 0;
}
Output: the numbers 60 through 80, separated by spaces.
Q.4 Why can’t you initialize a data element of a structure inside a structure?
Ans. In a structure definition, the data members have no physical existence yet: memory is allocated only when a variable (object) of the structure type is created. The data members are accessed through such variables (objects), and hence assigning a value to a data member inside the structure definition is meaningless, as there is no memory to hold it.
Do you know Structures in C Makes Coder Life Easy?
Q.5 Sometimes we can enclose the header file within double quotes instead of angular brackets. When can it be done?
Ans. Enclosing the header file name within angular brackets signifies that the header file is located in the standard folder of all other header files in C.
Enclosing the header file name within double quotes signifies that the header file is located in the present folder you are working with. It is a preferred practice to include user-defined header files in this manner.
Q.6 How would you dereference a void pointer if its data type is not known?
Ans. A void pointer can be dereferenced only by explicit type casting in C.
For instance,
int number = 20;
void *pointer = &number;
printf("%d\n", *((int*)pointer));
Q.7 What are the files generated during processing?
Ans. From compilation to execution: the source code with the .c extension is converted into an expanded source file with the .i extension, which is then compiled into object code with the .obj extension, and finally linked into executable code with the .exe extension.
Q.8 Which has greater time complexity: Iteration or Recursion and Why?
Ans. Recursion generally has greater overhead, as the entire function is called repeatedly, whereas in iteration the cost simply depends on the number of cycles of the loop. Internally, every recursive call creates a new stack frame in memory.
Q.9 What is a wild pointer?
Ans. Wild pointers in C are nothing but simply pointers that are left uninitialized. They can prove to be dangerous to use as they point to some random memory address and may cause the computer system to crash.
Q.10 What does %0.3f indicate in C?
Ans. The %0.3f is a format specifier that prints a floating-point value rounded to 3 decimal places.
2. Commonly Asked C Interview Questions with Answers
Q.11 How would you assign the value of an integer data type without using the equal to “=” operator?
Ans. Be careful here: the commonly quoted trick

int value(10);

is C++ constructor-style initialization and does not compile in C. In C you can avoid the = operator only indirectly, for instance by copying the bytes in with memcpy(), using a compound literal so that no = appears at all:

int value;
memcpy(&value, &(int){10}, sizeof value);
Q.12 In C, what would be the value of the expression: 4 + 15 % 4 * 7 - 9

Ans. The modulus operator has the same precedence as multiplication, and both are evaluated left to right. Therefore, the expression can be simplified in the following manner:

=> 4 + 15 % 4 * 7 - 9

=> 4 + 3 * 7 - 9

=> 4 + 21 - 9

=> 25 - 9

=> 16
The answer is 16.
Enhance Your Fundamental Skills With Operators in C
Q.13 Is it incorrect to use String = “DataFlair” in C? If yes, then what is the correct way to assign a value to a string?
Ans. A value of a string is copied by using the strcpy() function. The correct way is:
strcpy(String, "DataFlair");
Q.14 What is the difference between gets() and fgets() function?
Ans. Both the functions are used to take the input of a string but they are different in terms of the syntax used.
The syntax of gets() is:
gets(string_name);
The syntax of fgets() is:
fgets(string_name, string_size, stdin);
There is a problem that makes gets() dangerous to use: it cannot perform any array bound checking, so a long input line overflows the buffer. Hence, gets() is deprecated. To overcome this problem, fgets(), which takes the buffer size as an argument, is used instead of gets() in C.
Q.15 How would you preserve the value of a variable that becomes out of scope?
Ans. There are certain situations that the programmer might encounter where the value of a variable gets out of scope. With the help of the “static” keyword, the programmer can preserve the value of such variables.
This basic syntax of using the static keyword is:
static data_type identifier = value;
For instance,
static int number = 1;
Q.16 Write a C program to find the sum of the series up to n terms:
1 + (1 + 2) + (1 + 2 + 3) + (1 + 2 + 3 + 4) + …
Ans.
#include<stdio.h>
int main()
{
    printf("Welcome to DataFlair tutorials!\n\n");
    int n, sum1, sum2 = 0, i, j;
    printf("Enter the number of terms in the series n: ");
    scanf("%d", &n);
    for(i = 1; i <= n; i++)
    {
        sum1 = 0;
        for(j = 1; j <= i; j++)
            sum1 = sum1 + j;
        sum2 = sum2 + sum1;
    }
    printf("The sum of the series is: %d\n", sum2);
    return 0;
}
Output (for n = 5): The sum of the series is: 35
Q.17 How would you allocate enough memory for an array of 2000 floating-point values?
Ans. Note that the frequently quoted answer float* const value(new float[2000]); is C++, not C. In C, this is done with malloc():

float *value = malloc(2000 * sizeof(float));
Get Samurai Technique to Learn Arrays in C
Q.18 What is the difference between the following declarations (note that new is C++ syntax; C uses malloc() instead):

double *pointer = new double(10);

double *pointer = new double[5];

Ans. In the first declaration, memory for a single double is allocated and initialized to 10. The address of that value is stored in pointer.

In the second declaration, memory is allocated for 5 contiguous double elements in an array. The address of the first element is stored in pointer.
Q.19 What is meant by a dangling pointer?
Ans. A dangling pointer is basically a pointer that points to the memory address of a variable that has been deleted or freed. Dangling pointers typically arise from deallocating memory, from function calls that return the address of a local variable, and from variables going out of scope.
Q.20 What is the use of the volatile?
Ans. volatile is a keyword that tells the compiler not to optimize accesses to a variable, because its value can change in ways the compiler cannot see (for example, a hardware register, or a variable modified by an interrupt handler or another thread).
3. Scenario-Based C Interview Questions and Answers
Q.21 When do we require to use the concept of a bit field?
Ans. A bit field in C is basically required to reduce the memory consumption by the variables. It is required when we already know that a particular data type can have values restricted to a particular range.
For example, we know that there are never more than 31 days in a month. Therefore, instead of using a full int to hold the day of the month, we can declare a 5-bit field, which can hold values from 0 to 31.
Q.22 What will happen if you store the value 32768 in an int data type?
Ans. On systems where int is 16 bits wide, its range is -32768 to 32767, so 32768 does not fit; signed integer overflow is undefined behaviour in C (in practice the value often wraps around to -32768). On modern platforms where int is 32 bits, 32768 is stored without any problem.
Q.23 What is the difference between a pointer to an array and an array of pointers?
Ans. A pointer to an array is basically a pointer that points to the entire array, that is, it simply does not only point to the first element of the array but the complete array itself. It is usually done when dealing with multidimensional arrays in C.
This is how it is done:
float (*pointer)[10];
Here, the pointer points to the entire array consisting of 10 floating-point values.
An array of pointers is simply an array that consists of pointers. The pointer in the array points to a memory address of the variable.
For instance,
char *string[ ] = {"Data", "Flair", "Tutorials"};
Q.24 What is the basic principle behind bucket sort?
Ans. The principle described in this question is really that of radix sort (the two are often conflated in interviews): data elements (numbers or strings) with integer keys are sorted by grouping the keys digit by digit, processing the digits that share the same significant position. Bucket sort proper, by contrast, distributes elements into a number of buckets by value range, sorts each bucket, and concatenates the results.
Learn everything about Strings in C
Q.25 What are self-referential structures?
Ans. Self-referential structures are nothing but structures that include a data member which is a pointer to another structure of similar type.
For instance,
struct student
{
    int age;
    float marks;
    struct student *pointer;  /* Self-referential pointer */
};
Here, the structure student contains a data member pointer that points to another structure student of similar type.
Q.26 What is the difference between getch() and getche() functions?
Ans. Both functions are used to accept a character as an input value. By using getch(), the key that was pressed will not appear on the screen. It will automatically be captured and be assigned to a variable. On the other hand, by using getche(), the key that was pressed by the user will appear on the screen, while at the same time being assigned to a variable.
Q.27 What logic would you apply to print the left diagonal of any n x n matrix in C?
Ans. This is how the indexing of rows and columns looks in an n x n matrix.
Here, we observe that the index of the rows and columns of the left diagonal are equal.
Therefore, the base condition would be:
If the rows and columns are represented with i and j respectively, then:
if (i == j)
{
    printf("%d ", a[i][j]);
}
Q.28 Predict the output of the following code:
#include<stdio.h>
int main()
{
    char array[] = "DataFlair";
    char *pointer = array;
    while(*pointer != '\0')
        ++*pointer++;
    printf("%s\n", array);
    return 0;
}
Ans. The output would be EbubGmbjs
This problem rests solely on understanding the postfix, prefix and dereference operators.

The program increments each character of the string "DataFlair" by one.
Output: EbubGmbjs
Q.29 Predict the output of the following code segment:
#include <stdio.h>
#define fgets "%s DataFlair"
int main()
{
    printf(fgets, fgets);
    return 0;
}
Ans. %s DataFlair DataFlair
Output: %s DataFlair DataFlair
Q.30 Predict the output of the following code segment:
#include <stdio.h>
int main()
{
    int i = 10;
    short s;
    long l = 3.3;
    printf("%ld\n", sizeof(i++ + s*l));
    return 0;
}
Output: sizeof(long), typically 8 on 64-bit systems and 4 on 32-bit systems. The expression s*l is promoted to long, and since sizeof does not evaluate its operand, i is not incremented.
4. Summary
Your limitations are only your imagination. Believe in yourself and achieve everything in your life. These C Interview Questions and Answers will help you get a job in top IT companies like TCS, Infosys, Accenture, Intel, etc.
You can revise all the topics from our Free C Tutorial Series.
Thanks for allowing us to serve you better. Now, hold the steering and drive your career. | https://data-flair.training/blogs/c-interview-questions-and-answer/ | CC-MAIN-2020-34 | refinedweb | 2,086 | 64.61 |
All Dijits can be subclassed to change parts of their behavior, and then used as the original Dijits, or you can create your own Dijits from scratch and include existing Dijits (Forms, buttons, calendars, and so on) in a hierarchical manner.
All Dijits can be created in either of the following two ways:
- Using the dojoType markup property inside selected tags in the HTML page.
- Programmatic creation inside any JavaScript.
For instance, if you want to have a ColorPalette in your page, you can write the following:
<div dojoType="dijit.ColorPalette"></div>
But you also need to load the required Dojo packages, which consist of the ColorPalette and any other things it needs. This is generally done in a script statement in the <head> part of the HTML page, along with any CSS resources and the djConfig declaration. So a complete example would look like this:
<html>
<head>
<title>ColorPalette</title>
<style>
@import "dojo-1.1b1/dojo/resources/dojo.css";
@import "dojo-1.1b1/dijit/themes/tundra/tundra.css";
</style>
<script type="text/javascript">
djConfig=
{
parseOnLoad: true
}
</script>
<script type="text/javascript" src="dojo-1.1b1/dojo/dojo.js"></script>
<script type="text/javascript">
dojo.require("dojo.parser");
dojo.require("dijit.ColorPalette");
</script>
</head>
<body class="tundra">
<div dojoType="dijit.ColorPalette"></div>
</body>
</html>
Obviously, this shows a simple color palette, which can be told to call a function when a choice has been made. But if we start from the top, I've chosen to include two CSS files in the <style> tag.
The first one, dojo.css, is a reset.css, which gives lists, table elements, and various other things their defaults. The file itself is quite small and well commented.
The second file is called tundra.css and is a wrapper around lots of other stylesheets; some are generic for the theme it represents, but most are specific for widgets or widget families.
The two ways to create Dijits
So putting a Dojo widget in your page is very simple. If you would want the ColorPalette dynamically in a script instead, remove the highlighted line just before the closing body tag and instead write the following:
<script>
new dijit.ColorPalette({}, dojo.byId('myPalette'));
</script>
This seems fairly easy, but what's up with the empty object literal ( {} ) as the first argument? Well, since some Dijits take few arguments and others many, all configuration arguments to a Dijit get stuffed into the first argument (an object literal), and the last argument is (if needed) the DOM node which the Dijit shall replace with its own content somewhere in the page.
The default is, for all Dijits, that if we only give one argument to the constructor, this will be taken as the DOM node where the Dijit is to be created.
Let's see how to create a more complex Dijit in our page, a NumberSpinner. A markup declaration for a NumberSpinner set at the value 200, with 500 as a maximum, showing no decimals, looks roughly like this (the constraints attribute is the standard way to pass the limits):

<input type="text" value="200"
dojoType="dijit.form.NumberSpinner"
constraints="{max:500,places:0}"/>

The same pattern works for any form Dijit; for instance, a DateTextBox preset to a date:

<input type="text" name="date1" value="2008-12-30"
dojoType="dijit.form.DateTextBox"/>
One rather peculiar feature of markup-instantiation of Dijits is that you can use almost any kind of tag for the Dijit.
The Dijit will replace the element with its own template when it is initialized. Certain Dijits work in a more complicated fashion and do not replace child nodes of the element where they're defined, but wrap them instead.
However, each Dijit has support for template HTML which will be inserted, with variable substitutions whenever that Dijit is put in the page. This is a very powerful feature, since when you start creating your own widgets, you will have an excellent system in place already which constrains where things will be put and how they are called.
This means that when you finish your super-complicated graph drawing widget and your client or boss wants three more just like it on the same page, you just slap up three more tags which have the dojoType defining your widget.
How do I find my widget?
You already know that you can use dojo.byId('foo') as a shorter version of document.getElementById('foo'). If you still think that dojo.byId is too long, you can create a shorthand function like this:
var $ = dojo.byId;
And then use $('foo') instead of dojo.byId for simple DOM node lookup.
But Dijits also seem to have an id. Are those the same as the ids of the DOM node they reside in or what? Well, the answer is both yes and no.
All created Dijit widgets have a unique id. That id can be the same string as the id that defines the DOM node where they're created, but it doesn't have to be.
Suppose that you create a Dijit like this:
<div id='foo' dojoType='dijit._Calendar'></div>
The created Dijit will have the same Dijit id as the id of the DOM node it was created in, because no others were given. But can you define another id for the widget than for its DOM node? Sure thing. There's a magic attribute called widgetId. So we could do the following:
<div id='foo' dojoType='dijit._Calendar' widgetId='bar'></div>
This would give the widget the id of 'bar'.
But, really, what is the point? Why would we care the widget / Dijit has some kind of obscure id? All we really need is the DOM node, right? Not at all. Sure, you might want to reach out and do bad things to the DOM node of a widget, but that object will not be the widget and have none of its functions.
If you want to grab hold of a widget instance after it is created, you need to know its widget id, so you can call the functions defined in the widget. So it's almost its entire reason to exist!
So how do I get hold of a widget object now that I have its id? By using dijit.byId(). These two functions look pretty similar, so here is a clear and easy-to-find (when browsing the book) explanation:
- dojo.byId(): Returns the DOM node for the given id.
- dijit.byId(): Returns the widget object for the given widget id.
Just one more thing. What happens if we create a widget and don't give either a DOM or widget id? Does the created widget still get an id? How do we get at it?
Yes, the widget will get a generated id, if we write the following:
<div dojoType='dijit._Calendar'></div>
The widget will get a widget id like this: dijit__Calendar_0. The id will be the string of the file or namespace path down to the .js file which declares the widget, with / exchanged to _, and with a static widget counter attached to the end.
What's in the fridge? A smorgasbord of tasty Dijits
Most Dijits reside under the dijit subdirectory of your Dojo installation. Some reside under dojox, though, because they are in beta, or recently contributed to Dojo and not yet completely tested. Here is a fairly thorough list (for Dojo 1.1.0) that you can use as an inspiring starting point:
Dijit structure
The Dijit string listed in the first row can be used in a dojo.require("...") statement. At times, several Dijits are provided in one requirement. Some Dijits have been left out, considered to be superfluous or of minor importance.
This is only a list of Dijits, though. There are quite a lot of other components that live in the page.
Also, several components (not only Tree and Grid, but also ComboBox and SortableList, among others) can consume data stores.
The most basic definition of a Dijit is that it is a Dojo class which is associated with a certain DOM element in a web page. In all other aspects, it is a regular Dojo class. The _Widget base class (that is, the ultimate root class of all Dijits) does add a number of extra handle properties and functions to all Dijits, such as the variable domNode, which refers to the DOM node the Dijit is associated with.
All Dijits also have a lifecycle where Dojo ensures that certain functions on the Dijit are called at various times. There is, for example, one function that will be called before the Dijit is actually displayed and another one, which is called before any child Dijits are created, and so on.
Lifecycle
Regardless of whether a widget gets instantiated by the parser searching the page (or parts of it) for widgets to create, or by you creating it programmatically, certain functions get called at certain times.
In the base widget class dijit._Widget, we see the following comments:
create: function(/*Object?*/params, /*DomNode|String*/srcNodeRef){
//summary:
//Kick off the lifecycle of a widget
//description:
//To understand the process by which widgets are instantiated, it
//is critical to understand what other methods create calls and
//which of them you'll want to override. Of course, adventurous
//developers could override create entirely, but this should
//only be done as a last resort.
//
//Below is a list of the methods that are called, in the order
//they are fired, along with notes about what they do and if when
//you should over-ride them in your widget:
//
//* postMixInProperties:
//* a stub function that you can over-ride to modify
//variables that may have been naively assigned by
//mixInProperties
//* widget is added to manager object here
//* buildRendering:
//* Subclasses use this method to handle all UI initialization
//Sets this.domNode. Templated widgets do this automatically
//and otherwise it just uses the source dom node.
//* postCreate:
//* a stub function that you can over-ride to modify take
//actions once the widget has been placed in the UI
The postMixInProperties function is called right after Dojo has mixed in all functions and properties of the widgets' superclasses. Most widgets have the classes _Widget and _Templated mixed in, or subclasses of them. This is the function where you set important properties of the widget before it is displayed.
The function buildRendering is then called, which normally builds the UI and sets the domNode property of the widget, to be used in functions as 'this.domNode'. The _Templated mixin overrides this to build the widget according to a template string or file. Override this only if you've read the source code a couple of times. And understand it.
The function postCreate is next, which is provided as a stub function which is called once the widget is visually inserted in the page.
The function startup is next, which gets called for each widget once all widgets in the page have been rendered properly, but before any child widgets have been created, so that any necessary properties can be massaged first.
The function uninitialize gets called when the widget is being destroyed, usually due to a call to destroyRecursive on the widget itself or to a parent widget.
Now here comes an important fact: You never need to implement any of these functions, unless you feel that you need to. All are implemented fully, or as stubs for your convenience, in superclasses of all widgets.
Templates
The whole point with a widget is that it should be visible. All widgets comprise at least one HTML element. The element which makes up a widget can be created programmatically in an overridden function, but for most purposes you get much cleaner code by separating the looks from the logic, so to speak.
Dojo actually supports two different templating languages: the simple default and the Django Template Language (DTL, originally implemented in the Python web framework of the same name).
Note that if you're using the first (standard) templating language, you're only allowed one element in the template! This is not as bad as it sounds. What it means is that you can only have one element with any number of children.
What will get you into trouble is having more than one element in your template in parallel, side by side; then only the first will get parsed.
The first supports simple variable substitutions, using ${..} syntax. For instance, you can define a HTML template snippet for a widget that includes a div tag which gets a custom id based upon the id given to the widget using the following:
<div id="${id}_custom"></div>
That id would be the same property that is set as id="..." in the page where the widget is created and which also can be accessed using this.id inside the functions of the widget.
So all properties being set in the widget, by instantiation or dynamically (but before the template is rendered), can be used in this way inside the template string or file.
For a concrete example, look at the template for dijit.form.TextBox:
<input class="dijit dijitReset dijitLeft"
dojoAttachPoint='textbox,focusNode'
name="${name}"
dojoAttachEvent='onmouseenter:_onMouse,onmouseleave:_onMouse,
onfocus:_onMouse,onblur:_onMouse,onkeypress:_onKeyPress'
autocomplete="off"
type="${type}"
/>
Inside the dijit/form/TextBox.js file, there are no definitions for these variables, however. But if we walk the class hierarchy, we see that TextBox derives from a class called dijit.form._FormValueWidget. This class, in turn, inherits from dijit.form._FormWidget (and both superclasses are defined in the file with the same name.) Finally, we get to the variables:
...
//Name used when submitting form; same as "name" attribute or
//plain HTML elements
name: "",
// alt: String
//Corresponds to the native HTML <input> element's attribute.
alt: "",
// value: String
//Corresponds to the native HTML <input> element's attribute.
value: "",
// type: String
//Corresponds to the native HTML <input> element's attribute.
type: "text",
...
So now you see that there is no magic underlying basic templating in Dojo. As long as you have a variable defined in a widget, even if it comes from a superclass or mixin (as you remember, a mixin is another (or more) multiple superclass, or a superclass added at runtime), it can be referred to in the template for the widget with the ${} syntax.
Attach points
Templates might contain fairly complex things, and it is not at all unusual to have the need to access DOM nodes inside the widget template after it has been instantiated. You might want to show or hide a dialog, or maybe add items dynamically to a list.
Either way, the quick and dirty way of doing this (as shown above) is to give the DOM node you want to access inside the widget template a dynamic id based upon the id of the widget, like this:
<div id="${id}_foo">
One could then use the following code in the actual JavaScript of the widget:
var x = dojo.byId(this.id+"_foo");
And x would always resolve to that particular node in the template, regardless of how many widgets of the same kind has been created in the page.
As it turns out, Dojo has excellent support for this, and it is called "Attach Points". An Attach Point is a name you can give to any DOM node or Dijit in the template. Dojo will automatically scan the template for Attach Points and then create contextual this variables for the Dijit with the same names.
Consider a template that uses Attach Points instead, which looks like this:
<div><div dojoAttachPoint='foo'>XXXX</div><div dojoAttachPoint='bar'>
YYYY</div></div>
What will happen when the Dijit is rendered in the page is that two new variables will be instantiated in the context of the Dijit, with the names 'foo' and 'bar', being references to the DOM nodes in question.
In the postCreate function of the widget, one could then have a line which reads:
this.foo.style.background = "#f94444";
This will change the background of the DOM node in the template, without having to know what DOM id it actually has, and without colliding with any other Dijits of the same (or other) type.
Events
But there were some other funny things in the template for TextBox, weren't there? There was a property called dojoAttachEvent, which is used expressly for the same reasons that one would use dojo.connect inside a script.
Let's look closer at those events:
dojoAttachEvent='onmouseenter:_onMouse...' ...
It is a series of comma-separated event names and function names pairs. In the case above, the event onmouseenter for the DOM node is connected to the function _onMouse in the TextBox widget class. This would otherwise be the prime reason for using Attach Points, so the code gets cleaner still.
Extension points
An extension point is what Dojo calls a function override inside the usual page. Several widgets aren't particularly useful unless you tell them what to do, such as the Button family. One way of telling a Button what to do is to create a small JavaScript somewhere in the page and use dojo.connect.
Another way could be to create your own widget which subclasses the Button widget, and then override the onClick function defined inside that widget using an extension point. We begin by checking out parts of the code in dijit.form.Button:
_onClick: function(/*Event*/ e){
// summary: internal function to handle click actions
if(this.disabled || this.readOnly)
{
dojo.stopEvent(e); // needed for checkbox
return false;
}
this._clicked(); // widget click actions
return this.onClick(e); // user click actions
},
This function is wired up to take care of click events for the widget. The last line of the widget calls another internal function, onClick, and returns the result of that. onClick looks like this:
onClick: function(/*Event*/ e){
//summary: user callback for when button is clicked
//if type="submit", return true to perform submit
return true;
},
To override this function by using an extension point, we can put the following in our page when declaring the Button widget:
<script>
function myfunc(e)
{
//Do something really clever with the event argument
}
</script>
<div dojoType="dijit.form.Button" onClick="myfunc"></div>
Django Templating Language (DTL)
DTL is quite another order of templating language. Here you can define more complex widget markups using both loops and conditionals.
Where Dojo's classical templating system relied on variable substitutions, leaving the widget to create more complex dynamical markup in special functions and then put them in the template, DTL gives us much more power, or conversely allows our logic to become smaller.
To use DTL inside a widget, you must subclass dojox.dtl.Template and dojox.dtl.HtmlTemplate. The former manages all simple text substitutions, and the latter manages all other complex DOM manipulation.
So when defining a widget, instead of writing this:
dojo.declare("mycorp.charting.chart1", [ dijit._Widget,
dijit._Templated],{...
You'd instead write the following:
dojo.declare("mycorp.charting.chart1", [ dijit._Widget,
dojox.dtl._Templated],
{...
For example, we have a widget which reads data from the server using dojo.xhrGet and then puts all items read in a simple but sortable SortList.
Using a standard templating language, we might have built a template that looked something like this (the list starts out empty and is filled in by code):

templateString: "<ul></ul>";
And then have some widget code that looks like this:
// To be called when the server responds with data
handleData: function(data)
{
var obj = eval("(" + data + ")");
dojo.forEach(obj, function(o)
{
    var li = document.createElement('li');
    li.innerHTML = "" + o;
    this.domNode.appendChild(li);
}, this); // pass 'this' so the widget is the context inside the callback
}
Instead, with DTL, we can change the template to look like this:

templateString: "<ul>{% for item in obj %}<li>{{ item }}</li>{% endfor %}</ul>";
The code will be changed to only set the obj variable, which is used in the template, and the template itself then manages looping and variable substitution all on its own.
To actually see the rendered data, DTL requires an extra step. You must at one point, when your variables are set up correctly, call the render function from dojox.dtl._Templated. But since your widget has mixed that class in, it is part of the this context.
// changed to use DTL functionality instead. Obj is now an instance
//variable.
handleData: function(data)
{
this.obj = eval("("+data+")");
}
However, dojox.widget.SortList is actually able to load its contents dynamically from a dojo.data store without any intervening widgets at all. I only use it as a daft but simple example on how one usually can go about creating dynamic content in a widget.
First, let's take a look at the features of DTL before we dive down into a longer example.
Variable substitution
Variables that we want to display from the widget are marked up differently.
{{person_name}}
instead of
${person_name}
Loops
The loops in DTL are reminiscent of server-based templating languages of yore, especially ASP and JSP.
Of course, these days the server need not do any templating or web magic at all, it only sends us data which gets rendered every which way in the Dojo client.
The variable item_list showing up in the example below is a variable that exists in the this context of the widget, and can be put there using widget parameter passing 'the usual way', just so you know.
<ul>
{% for item in item_list %}
<li>{{ item }}</li>
{% endfor %}
</ul>
Conditionals
{% if ordered_warranty %}
<p>Your warranty information will be included in the packaging.</p>
{% else %}
<p>No warranty for you!</p>
{% endif %}
Argument passing
A Dijit is just a Dojo class with some special properties. One of those properties is that if the Dijit class declares an explicit class variable, that variable will automatically be detected and initialized if present in the Dijit argument object.
A recommended widget structure
The function dojo.require is a very powerful JavaScript loader. By creating a separate directory parallel to the Dojo directories, you will be able to load you own widgets (including having them auto-loading any of their dependencies) using just one dojo.require statement in your page.
There's another way to point out where to load classes from (this does ,of course, not only relate to widgets, but we'll get to that in a minute).
This is how you might want to set things up for a new customer or project.
This is just an example, and you might want to set things up differently for a variety of reasons. We'll take each non-standard directory and file at a time and describe the reasons for where they're put.
No matter how you choose to arrange the files, there are only two places which look up all the other things:
- The prime HTML file which actually loads the widget which you've created.
- The .js file that defines the widget.
The parts which are critical here come usually in the beginning of the widget. Let's look at a small example which fits into the scenario described above: the chart1.js widget:
dojo.provide("mycorp.charting.chart1");
dojo.require("dijit._Templated");
dojo.require("dijit._Widget");
dojo.require("dojox.charting.Chart2D");
dojo.declare("mycorp.charting.chart1", [ dijit._Widget, dijit._Templated],
{
templatePath: dojo.moduleUrl("mycorp","templates/chart1.html"),
widgetsInTemplate: true,
startup: function()
{
console.log("startup for mycorp.charting.chart1 called.");
var ch = this.corpchart;
console.log("corpchart "+ch+" has id "+ch.id);
var chart1 = new dojox.charting.Chart2D(this.corpchart.id);
chart1.addSeries("Series A", [1, 2, 5, 4, 1, 3, 2,3,2,4,3,4],
{stroke: {color: "red", width: 2}});
chart1.addSeries("Series B", [7, 6, 3, 5, 4, 6, 5,6,7,5,2,5],
{stroke: {color: "blue", width: 2}});
chart1.render();
}
});
The first line of the widget declaration is dojo.provide. That call must contain a string that exactly references where in the namespace this widget (or class, actually) exists. Since this widget exists in the file mycorp/charting/chart1.js, the string will be mycorp.charting.chart1.
Then follows a number of required statements. The big upside with having required statements inside the widget definition is that it leaves the main HTML page relatively uncluttered. In fact, with some luck, all the script references you'll need is the one for dojo.js, and then a required statement for your widget. The rest of the Dojo classes will be pulled in dynamically as their required statements get discovered.
Let's have a look at the test page for our class:
<html>
<head><title>Chart1 Test</title></head>
<style type="text/css">
@import "../../../dojo/resources/dojo.css";
@import "styling/chart1.css";
</style>
<script>
var djConfig={ parseOnLoad: true};
</script>
<script type="text/javascript" src="../dojo/dojo.js"></script>
<script>
dojo.require("dojo.parser");
dojo.require("mycorp.charting.chart1");
</script>
<body>
<div dojoType="mycorp.charting.chart1"></div>
</body>
</html>
We include two CSS files: the Dojo reset styles and the specific styles for our widget. It can be a good idea to create versioned CSS files for each package (or project) that combine all styles into one file, for regular milestones. However, for everyday coding, it is generally better to have separate files.
After that we get the djConfig declaration, which only tells the parser to begin parsing the page for Dijits immediately after load.
Then we pull in the main dojo loader—dojo.js, and require first the parser, and then our widget.
The body of the test file is minimal, since all special markup for the widget is folded down into its template.
The CSS file is also very small and looks like this:
.mycorp_chart
{
height: 200px;
width: 400px;
}
And the template itself looks like this:
<div dojoAttachPoint="corpchart" class="mycorp_chart"></div>
Using the dojoAttachPoint property, a variable gets created inside our widget, which we can refer to as this.corpchart and which is a reference to the DOM node in question.
If you try out the example and use Firebug, you will see the following in the Firebug console:
The id of the widget has been set to the namespace path of the widget, with _ replacing /, and a static counter tucked at the end.
The chart displayed will look approximately like this:
| https://www.packtpub.com/books/content/basic-dijit-knowledge-dojo | CC-MAIN-2015-40 | refinedweb | 4,329 | 62.88 |
This guide shows you how to use the Google Mobile Ads Unity plugin's Ad Placements feature to create and display ads for your app.
Prerequisites
Unity 2017.4 or higher.
Unity plugin prerequisites.
Download and import the early build of GMA Unity plugin.
Set your AdMob app ID in the Unity Editor.
Initialize the Google Mobile Ads SDK
Before loading ads, initialize the Mobile Ads SDK by calling
MobileAds.Initialize(), with an
Action<InitializationStatus> callback. This
needs to be done only once, ideally at app launch.
using GoogleMobileAds.Api; using System.Collections.Generic; ... public class GoogleMobileAdsDemoScript : MonoBehaviour { ... public void Start() { // Initialize the Mobile Ads SDK. MobileAds.Initialize((initStatus) => { // SDK initialization is complete }); ... } }
Creating ad placements
The first step in displaying a banner with Google Mobile Ads is to create and configure an ad placement. You can select an ad placement of Banner, Interstitial, or Rewarded format from Assets > Google Mobile Ads > Ad Placements in the Unity Editor. Three demo ad placements are then set up and ready for use.
To add a new Ad Placement, click the Add New Placement button at the end of the list. You can configure the ad placement from the Inspector view.
Ad placement configuration
Each placement has the following properties:
- Placement Name
- Name of the placement. Used to identify placements when setting up ads in a scene.
- Ad Format
- Banner, Rewarded, Interstitial. Type of the ad.
- Ad unit ID
- Provide your banner ad unit ID for Android and iOS. You need to provide at least one ad unit ID.
- Persistent across scenes
- When checked, the banner will persist on the screen regardless of scene changes (same behavior as
DontDestroyOnLoad).
- Auto Load Enabled
- When checked, an ad will be loaded automatically when a scene associated with the ad placement is loaded.
The following screenshot shows an example of an Ad Placement named My Awesome Banner.
Adding an AdGameObject to the scene
You can add an AdGameObject for Banner, Interstitial, or Rewarded formats to your scene using GameObject > Google Mobile Ads in the Unity Editor. Select the format to add a placement to the active scene.
Once you've added an AdGameObject to the scene, you'll see a GameObject representing the ad in the Hierarchy view of the Unity Editor.
You can change the name of the placement by changing the name of the GameObject itself. The following screenshot shows an example of an AdGameObject named Banner Ad.
AdGameObject settings
You can configure the AdGameObject in your scene from the Inspector view in the settings for the Ad Game Object (Script) component.
- Ad Placement
Select the ad placement from the drop-down list of configured placements. The list will only have ad units for the right format. For example, for banner ad game objects the dropdown will show only configured banner ad placements.
BannerAdGameObjectconfiguration (banner only)
- Size - Select the size of the banner that you want to use.
- Anchored Adaptive Banner provides a few more options:
- Orientation - Select device orientation used to calculate the ad height.
- Use full screen width - When checked, the banner will occupy full screen width. You can adjust % width of the screen (50~99%) if you uncheck the Use full screen width option.
- Custom allows you to provide the banner width and height.
- Ad Position - Select the position where the banner should be placed.
Callbacks
You can implement functions that correspond to ad callbacks. For example, if you want to handle when a banner ad fails to load:
Create a function compatible with the ad callback.
public void OnBannerAdFailedToLoad(string reason) { Debug.Log("Banner ad failed to load: " + reason); }
Attach the script which contains the above function to any GameObject in the scene.
Click the + button, then drag & drop the GameObject that you've attached the script to.
Select the function that you want to link to the ad callback. For the parameterized ad callbacks, select the function to accept the dynamic variable so you can get the parameter value from the SDK.
Use AdGameObject from script
Get the AdGameObject instance from the script
All
AdGameObject objects have the convenience method
LoadAd(). This will load
an ad with a plain, untargeted
AdRequest. To apply targeting, you should use
LoadAd(AdRequest adRequest) using your own configured ad request.
To get the instance of an AdGameObject use the following method for each format:
Banner
MobileAds.Instance.GetAd<BannerAdGameObject>("AD_GAMEOBJECT_NAME");
The returned
BannerAdGameObject object also has convenience methods
Hide() and
Show().
Interstitial
MobileAds.Instance.GetAd<InterstitialAdGameObject>("AD_GAMEOBJECT_NAME");
The returned
InterstitialAdGameObject object has a convenience method
ShowIfLoaded().
Rewarded
MobileAds.Instance.GetAd<RewardedAdGameObject>("AD_GAMEOBJECT_NAME");
The returned
RewardedAdGameObject object has a convenience method
ShowIfLoaded().
For example, you can get an instance of a
BannerAdGameObject and load it as
follows:
using UnityEngine; using GoogleMobileAds.Api; using GoogleMobileAds.Placement; public class BannerTestScript : MonoBehaviour { BannerAdGameObject bannerAd; void Start() { bannerAd = MobileAds.Instance .GetAd<BannerAdGameObject>("AD_GAMEOBJECT_NAME"); bannerAd.LoadAd(); ... } ... }
If there is a
BannerAdGameObject named BannerAd, you can get an instance of
it like this:
MobileAds.Instance.GetAd<BannerAdGameObject>("BannerAd");
Access underlying ad object in AdGameObject
These snippets demonstrate how to access the underlying ad object associated with the AdGameObject.
Banner
BannerAdGameObject bannerAd = MobileAds.Instance .GetAd<BannerAdGameObject>("AD_GAMEOBJECT_NAME"); // Access BannerView object BannerView bannerView = bannerAd.BannerView;
Interstitial
InterstitialAdGameObject interstitialAdGameObject = MobileAds.Instance .GetAd<InterstitialAdGameObject>("AD_GAMEOBJECT_NAME"); // Access InterstitialAd object InterstitialAd interstitialAd = interstitialAdGameObject.InterstitialAd;
Rewarded
RewardedAdGameObject rewardedAdGameObject = MobileAds.Instance .Get<RewardedAdGameObject>("AD_GAMEOBJECT_NAME"); // Access RewardedAd object RewardedAd rewardedAd = rewardedAdGameObject.RewardedAd;
Examples
Show an interstitial ad
Here is an example of how to configure a game to load and show an interstitial ad using an AdGameObject.
Add an
InterstitialAdGameObject to the scene and turn on the Auto Load
Enabled feature, so that the ad is loaded automatically when the scene loads.
Next, make sure you've initialized the SDK using as follows. Note that the Auto Load feature in AdGameObject will not work if you forget to initialize the SDK.
Then display an interstitial ad between a screen transition by calling the
InterstitialAdGameObject.ShowIfLoaded() function. The following code shows an
example of displaying an interstitial ad between a scene transition.
using UnityEngine; using UnityEngine.SceneManagement; using GoogleMobileAds.Api; using GoogleMobileAds.Placement; public class MainScene : MonoBehaviour { InterstitialAdGameObject interstitialAd; void Start() { interstitialAd = MobileAds.Instance .GetAd<InterstitialAdGameObject>("interstitial"); MobileAds.Initialize((initStatus) => { Debug.Log("Initialized MobileAds"); }); } public void OnClickShowGameSceneButton() { // Display an interstitial ad interstitialAd.ShowIfLoaded(); // Load a scene named "GameScene" SceneManager.LoadScene("GameScene"); } }
Since you've enabled the Auto Load feature in the ad placement, you don't need to request an ad explicitly. When the scene changes, an interstitial ad will appear if one is ready.
If you want to manually request an ad, disable the Auto Load feature from
the ad placements inspector and call the
InterstitialAdGameObject.LoadAd()
function instead. The following code snippet shows how to manually request an
ad.
public class MainScene : MonoBehaviour { InterstitialAdGameObject interstitialAd; void Start() { interstitialAd = MobileAds.Instance .GetAdGameObject<InterstitialAdGameObject>("interstitial"); MobileAds.Initialize((initStatus) => { Debug.Log("MobileAds initialized"); // Load an interstitial ad after the SDK initialization is complete interstitialAd.LoadAd(); }); } ... }
Handle "watch a rewarded ad" button state
Here is an example of how to enable a "watch a rewarded ad" button by using ad placements.
Add a Button GameObject (named Button in this example) to the scene, which will be used to display a rewarded ad. We'll make this button available only when a rewarded ad is available.
In the
Start() method, change the active state of the Button to
false. This
will make the button disappear from the scene.
public class MainScene : MonoBehaviour { ... void Start() { GameObject.Find("Button").SetActive(false); ... } }
Add a
RewardedAdGameObject to the scene and select the AdMob Demo Rewarded
Ad Ad Placement from the dropdown.
Under the Callbacks section in the
RewardedAdGameObject inspector, click
the + button from On Ad Loaded() to enable the function to be called
when a rewarded ad is loaded.
Drag and drop the Button GameObject that you added in the previous step to
the None (Object) field. Select a function to be called from the dropdown.
Click No Function > GameObject > SetActive(bool), then click the checkbox so
it sends
true as a parameter (calls
SetActive(true)).
In this Callbacks section, you can also link up an event that will be called
when the
RewardedAd.OnUserEarnedReward event is fired. For more details,
refer to this section.
Next, make the button to display a rewarded ad when clicked. From the On Click() Callbacks section in the button inspector, click the + button and drag and drop the Rewarded Ad Placement GameObject (named Rewarded Ad in this example) to the None (Object) field.
Then, attach the
RewardedAdGameObject.ShowIfLoaded() function to the button's
On Click() callback.
Lastly, don't forget to initialize the SDK. The following code snippet is the complete code for the scene used in this example:
using UnityEngine; using GoogleMobileAds.Api; public class MainScene : MonoBehaviour { void Start() { GameObject.Find("Button").SetActive(false); MobileAds.Initialize((initStatus) => { Debug.Log("Initialized MobileAds"); }); } }
Once you run the project, you'll see the button displayed on the scene when a rewarded ad is loaded and ready to be shown.
Configure a reward callback for a RewardedAdGameObject
Here is an example of how to configure a rewarded callback to a rewarded ad placement, so you can give a reward to a user when a callback function is called.
Create a new script and define a function which accepts
Reward as a parameter
as follows.
using UnityEngine; using GoogleMobileAds.Api; class RewardedTestScript : MonoBehaviour { ... public void OnUserEarnedReward(Reward reward) { Debug.Log("OnUserEarnedReward: reward=" + reward.Type + ", amount=" + reward.Amount); } ... }
Attach the
RewardedTestScript script to any GameObject (except the Ad
Placement GameObject) in the scene. In this example, it is attached to the Main
Camera GameObject.
Add a
RewardedAdGameObject to the scene. Then, under the Callbacks section
in the
RewardedAdGameObject inspector, click the + button on On User
Earned Reward (Reward) to enable the function to be called when a reward is
granted to a user.
Drag and drop the Main Camera GameObject that you've added in the previous step to the None (Object) field. Select a function to be called from the dropdown. Click No Function > RewardedTestScript > OnUserEarnedReward.
Once you run the project and watch a rewarded ad,
RewardedTestScript.OnUserEarnedReward() will be invoked when you are to be
rewarded for interacting with the ad. | https://developers.google.com/admob/unity/ad-placements?hl=cs-CZ | CC-MAIN-2021-49 | refinedweb | 1,697 | 50.12 |
Implementing Pure Functions
A while back, I wrote an article explaining how the D programming language's "pure" annotation could be applied to functions. A pure function does what you'd expect — the compiler enforces purity of the function. The function may not have any side effects, may not rely on any mutable data, and is statically checkable.
Functional programming enthusiasts have long understood the benefits of pure functions — excellent encapsulation, low cognitive load, threadsafe, and so on. Pure functions have gained popularity among the D programming community as well. Good D style encourages the use of pure functions whenever practical.
But there's a problem.
Pure functions can only call other pure functions. One might say "it's turtles all the way down." Now, suppose we've got a rather deep call chain of pure functions calling pure functions, and we suspect a bug lurks in the deepest, darkest one of them sitting in bottom of the sump pump in the basement.
What's the first tool many of us grab? Stick a print statement in it to verify that the function is actually being called, and what it's arguments are, such as:
import std.stdio; pure int square(int x) { writefln("square(%d)", x); return x * x; }
Uh-oh. The compiler means it when it checks for function purity:
test.d(5): Error: pure function 'square' cannot call impure function 'writefln'
Let's examine the options:
- Remove the pure annotation. That will enable square() to successfully compile, but then every pure function that calls square() will now fail to compile. The "turtles all the way down" now becomes "turtles all the way up" as you are forced to remove the pure attribute from the entire function call graph. Clearly, this is very unappealing.
- D has an all-purpose escape from type checking — the cast. Casting is a blunt instrument used to get us out of all kinds of jams:
import std.stdio; int square_debug(int x) { writefln("square(%d)", x); return x * x; } pure alias int function(int) fp_t; pure int square(int x) { auto fp = cast(fp_t)&square_debug; return (*fp)(x); }
The pure function is split into two, and the impure one is bashed into submission by forcibly casting it to be pure. The best face we can put on this is that it works and gets the job done.
But I hate it.
D is meant to be a joy to work with, and there's no joy in this. It's time to think of a language modification.
In a functional programming language, using monadic output for debugging messages may be the right choice. But D does not use monadic I/O, preferring straight calls to I/O functions. Leaving the debate of which is better to philosophers, let me just assert that monads should not be necessary for taking care of this small matter in D.
Programmers just want to stick the print statement in and have it work despite it being impure. How can we accommodate that, yet still be pure? Is there anything in D that can be pressed into service for this?
Yes. D has something called a debug statement. Debug statements are regular statements prefixed with the keyword debug. They are compiled only when the -debug switch is passed to the compiler. With a simple change to the language, we can disable purity checking inside a debug statement. So, to make our original example compile:
import std.stdio; pure int square(int x) { debug writefln("square(%d)", x); return x * x; }
Simple, easy, and it works. The compiler otherwise still treats the function as pure, so depending on the situation, the user may see a variable number of writes. This is because the compiler is allowed to cache the result of pure functions, or on the contrary to call them repeatedly.
Some uneasiness about this is understandable. After all, it is breaking purity. The rationale is that debug code is not production code. It's reasonable to be able to break the rules as necessary for debugging. Of course, the onus is on the programmer to not introduce a heisenbug (where the program works when compiled with -debug and fails without).
Thanks to Andrei Alexandrescu, Brad Roberts, David Held, and Bartosz Milewski for their helpful comments and corrections on this post. | http://www.drdobbs.com/tools/implementing-pure-functions/230700070 | CC-MAIN-2015-06 | refinedweb | 722 | 72.87 |
reanimate-svg
SVG file loader and serializer
See all snapshots
reanimate-svg appears in
Module documentation for 0.9.8.0
reanimate-svg-0.9.8.0@sha256:541ea5d9677d9ad055ea0b47ce3ebdd585d7197e5bdec248a53a5d53a281519f,2507
reanimate-svg provides types representing a SVG document, and allows to load and save it.
The types definition are aimed at rendering, so they are rather comple. For simpler SVG document building, look after `lucid-svg`.
Changes
--change-log--
v0.9.0.0 April 2019
- Performance optimizations.
- Memo module and render cache.
v0.8.2.0 March 2019
- Export parser and serializer.
v0.8.1.0 March 2019
- Filter attributes.
v0.8.0.0 March 2019
- Remove WithDrawAttributes type class.
- Remove css and top-level definitions from Document.
- Basic support for SVG filters.
v0.7.0.0 March 2019
- fork from svg-tree due to ‘reanimate’ requiring breaking changes
- Fix: change x,y rect defaults from ‘0’ to ‘auto’.
- Expose list of named svg colors.
- Adding: Allow definitions to appear anywhere in an svg document.
- Change module namespace from Graphics.Svg to Graphics.SvgTree
v0.6.2.3 October 2018
- GHC 8.6 fixes
- Adding: Allow definitions to appear anywhere in an svg document. | https://www.stackage.org/lts-15.3/package/reanimate-svg-0.9.8.0 | CC-MAIN-2020-16 | refinedweb | 194 | 54.79 |
Hi,
On 4/20/06, David Kennedy <davek@us.ibm.com> wrote:
> Attached is a patch (I'm not sure how you typically like to receive patches) for
> the automatic registration of namespaces on import of xml or cnd nodetype
> files (JCR-349).
Excellent, thanks!
> I didn't want the readers to have knowledge of workspaces or registries so
> I basically added additional methods to the NodeTypeManagerImpl that would
> take a registry in which the namespaces should be registered.
How about adding the namespace registry as a private member variable
of the NodeTypeManagerImpl class and have SessionImpl pass the
namespace registry to the node type manager as a constructor argument?
This way the registry handling would be completely transparent to the
user and they could just continue using the existing methods.
> The code will ignore a namespace if it has already been registered (I can change
> it to throw an exception if the registered uri is different than what's in the file).
This is a good approach. BTW, instead of checking for the prefix you
should be checking for the availability of the namespace URI. The
process should be something like this:
if URI is not already registered, then
while PREFIX is already registered, do
generate another PREFIX
end while
register the PREFIX -> URI mapping
end if
It's a bit cumbersome, especially since the getPrefix() and getURI()
methods throw exceptions to indicate unregistered mappings, *and* it
isn't thread-safe, but this is the best we have at the moment. Perhaps
we could encapsulate this functionality into a custom
NamespaceRegistryImpl method...
> I've also included a test for each type of file. The tests will fail if run a second
> time against the same repository, but not because of the namespace registration,
> but because the nodetypes already exist.
Yeah, that's a problem with the fact that we currently can't modify or
even remove existing node types without manual intervention. I think
it's ok at the moment to just disable the test cases with a logged
warning if the conflicting node types.
This functionality would probably be a good fit for a
reregisterNodeTypes method. See JCR-322 for background.
> Let me know what you think and if you'd like patches in a different format.
Looks good. Can you fix the issues I mentioned (ns registry as member
variable, and the namespace registration process)?
I can use the zip format, but a more convenient alternative would be
the output of "svn diff". Quick steps to do this:
1) Checkout the latest sources from SVN as instructed on the
Jackrabbit web site
2) Make your modifications
3) Use "svn add ..." to mark any new files and directories you've added
4) use "svn diff" from the main jackrabbit directory to get a listing
of all the changes
5) Save the output in a file and attach it to the Jira issue (JCR-349)
6) Remember to select the license grant option on the attachment form
BR,
Jukka Zitting
--
Yukatan - - info@yukatan.fi
Software craftsmanship, JCR consulting, and Java development | http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200604.mbox/%3C510143ac0604200929v74584faak9608e5cc35c5230@mail.gmail.com%3E | CC-MAIN-2014-15 | refinedweb | 512 | 57.3 |
Connected the DHT11 sensor to TM4C123. The sensor is not that accurate, but simple to use. Result displayed on Nokia 5110 LCD.
Programs written with Energia. There are libraries for the sensor and LCD display, but both require slight modification.
Code available on github.
Updates 2016-05-14:
Here is the wiring. Note that my Nokia 5110 board support input of 3v to 5v. Your mileage may vary.
TM4C123 - LCD 5110 Comment
==================================
VBUS - Vcc My version of 5110 supports 3v to 5v
VBUS - BL Backlight
GND - GND
PB_5 - RST Reset
PB_4 - Clk SCK(2) to Clock
PB_7 - Din MOSI(2) to Serial data in
PA_7 - CE Chip Select
PA_2 - DC Select between data or command
TM4C123 - DHT11 Comment
==================================
PD_7 - Data
VBUS - Vcc
GND - GND
12 comments:
Hi,
I've been trying to use your code for the "TM4C123 with DHT11 sensor" - and failing!
I have a Launchpad TM4C123GXL and a Nokia 5110 on a red board. I'm using Energia, and trying to simplify the code just to be able to write to the display, without reading temperature and humidity (I have no sensors connected.)
I know the display works because I've been following an EdX course at the University of Texas at Austin - but there we use Keil, and the programming is a lot more complex. We used the following pins: SCE = 12 (PA3), RST = 10 (PA7), DC = 9 (PA6), DN = 8 (PA5), SCLK = 11 (PA2). We were told not to connect the backlight because it takes too much current, and could damage the board.
I'm new to Energia, and would be grateful if you could briefly explain a) where to place all the files - I've tried putting them all in the same folder as the .ino file, and that doesn't work!. b) How to assign the pins in the ino file; in particular, the Serial clock and serial data pins. Many thanks.
When you said "that doesn't work", what kind of error did you encounter? For pin assignment, it can be passed as parameter when you create the LCD_5110_SPI:
Thanks for replying so quickly. The previous error (with Energia 0014) was because it couldn't find the #include files. I have updated Energia to version 0101E0017, and now it compiles correctly. The Serial monitor shows the correct text every minute, but there is nothing on the LCD screen. So the programme is running!
I have assigned pins in LCD_5110_SPI myScreen(12, 9, 10, 13, PUSH2), but how do I assign pins for the SCLK and DN(MOSI) connections on the LCD board? e.g. I have the LCD's SCLK pin connected to pin 11(PA2), but the programme has no way of knowing this.
I have put a copy of the LCD_5110_SPI_EX.ino file (modified to ignore sensor readings) here:
Thanks for your help.
I can't remember exactly which pin I used, but referring to the pin diagram at:
I believe I used the third SSI port i.e. SCK(2) and MOSI(2), which is PB_4 for clock and PB_7 for serial data.
Thanks for your help. OK - this is my last attempt - I'm sure that I am testing your patience!
In the file LCD_5110_SPI.cpp, you have commented out all reference to the pinSerialData and pinSerialClock, because you don't seem to use them in the rest of the programme. So I've tried connecting these pins to 0volts and Vcc - in all combinations, but still nothing shows on the LCD.
Is there anything else you could suggest that I try?
Hi Rob. I found my TM4C123 and tried the program again. Indeed I was using the third SPI port.
I cleaned up the code a bit and added comment on wiring
Yes! It worked!
First I followed all the new wiring connections you listed in the programme - and still nothing. The serial monitor was (as before) showing text, but nothing on the LCD. I was about to give up when I thought I'd try the contrast setting (Line 69: myScreen.setContrast(0x28);). (I had tried different contrast settings with the previous version of the programme, and that hadn't helped, so I wasn't hopeful.) I changed the contrast to 0x48, and it was perfect!
Thank you for going to the trouble of finding the TM4C123, and explaining clearly in the programme how to connect the lcd panel to the board.
Now I want to find out how to draw graphics on the LCD_5110 :)
Many thanks.
Hi Rob. Glad to hear that it worked for you. For my 5110, setting contrast to 0x48 will render it all black. Probably a slight variation of the QC. :P
Hi, I've been investigating how graphics work on the 5110, and have added quite a few graphics commands to the LCD_5110_SPI.cpp library. I've also written a simple Pacman-style game using those graphics commands.
See this Youtube video for details:
The Library and game can be downloaded from:
I don't know whether you want to incorporate them in your github files, but you are welcome to do so.
hello
i tried to implement your code in energia after downloading it,but it shows error that
sketch_sep30a.cpp:47:26: fatal error: LCD_5110_SPI.h: No such file or directory
#include "LCD_5110_SPI.h"
^
compilation terminated.
Hi,
Which pin did you use as digital input in Tm4c123
PD_7. It is list in the original blog post. You can also use another pin and modify the source code. | https://www.clarenceho.net/2015/08/tm4c123-with-dht11-sensor.html?showComment=1589746748073 | CC-MAIN-2020-29 | refinedweb | 918 | 81.83 |
Hello, I'm very, very new to C++ and I'm taking computer science course right now for fun. I really want to learn C++ but I'm having a lot of trouble with this new homework assignment and I'm hoping I just take it one step at a time so I completely understand it.
I've read a lot of the topic here to help me out, but most topics that fall under this category are too complex for my understanding.
The entire problem involves reading one text file with "search words" in it and then counting the number of times these search words show up in a second text file.
From what I know of C++, I want to start by taking all of the words from the search words file (words.txt) and place them into an array which will be used for later (I can't open the search file more than once). I know how to open the file from a program I found online:
// reading a text file #include <iostream> #include <fstream> #include <string> using namespace std; int main () { string line; ifstream myfile ("word.txt"); if (myfile.is_open()) { while (! myfile.eof() ) { getline (myfile,line); } myfile.close(); } else cout << "Unable to open file"; return 0; }
I've been trying my best to work with a for loop nested in the while loop, but I just can't figure out how to do it. I get an error with each effort. My attempts at the for loop are placed right after the getline (myfile,line); and look something like this:
for (i = 0; i < line.length(); i++) {
    myfile >> word[i];
}
I declare word as a string so it looks like:
string word[1000];
I don't even know if that is possible, but my compiler doesn't appreciate it very much. Is there a better way of doing this? Or even just another way I could attack this problem so I can continue on?
Also, there are actually only 8 words in the file, but I can't get it to work with using the numbers 0 and 7 even.
Any help would be greatly appreciated.
Thank you. | https://www.daniweb.com/programming/software-development/threads/109311/storing-words-from-text-file-in-array | CC-MAIN-2017-51 | refinedweb | 364 | 76.66 |
Hello matplotlib community,
I am working with Spyder (Python 3.7). Yesterday I created a virtual environment for TensorFlow. But after that, when I tried to import matplotlib.pyplot as plt, it showed the following error: ModuleNotFoundError: No module named 'matplotlib.artist'. I googled and tried different ways to solve the issue, like reinstalling, but nothing helped.
Specifications:
Windows 10
matplotlib version installed 3.2.2
Obtained Matplotlib from anaconda
Hoping for help in resolving the issue asap.
Thanks in advance
Here is the detailed view of the error
Command in spyder:
import matplotlib.pyplot as plt
Error:
IPython 7.19.0 – An enhanced Interactive Python.
···
Regards,
Suchitra | https://discourse.matplotlib.org/t/matplotlib-users-matplotlib-backend-problem/22061 | CC-MAIN-2021-21 | refinedweb | 113 | 52.56 |
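Since the error appeared right after creating a new virtual environment, a common cause is that Spyder's console is running a different interpreter than the one matplotlib is installed into. A quick diagnostic sketch (my addition, not from the thread):

```python
import sys
import importlib.util

# Which interpreter is this console actually running?
print(sys.executable)

# Where (if anywhere) would matplotlib be imported from?
spec = importlib.util.find_spec("matplotlib")
print(spec.origin if spec else "matplotlib not found on this interpreter")
```

If the printed interpreter path points into the new environment while matplotlib was installed elsewhere (or vice versa), installing matplotlib into that specific environment usually resolves the import error.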
Sounds like fun :-p
Thanks!
Type: Posts; User: crepincdotcom
Greetings,
Back many years ago, I figured out how to control the pins of a serial port independently under Win98 in VB6. IE, rather than writing data to the port, I could set two of the pins low...
Is -Wall like "use strict;" in Perl, then?
ahhhh you guys are great. I removed the time() call... I knew it was something stupid.
Thanks again,
Hey all,
I can't for the life of me figure out why this is segfaulting... sorry to bother you all with such a lame question. Here's the source:
#include <stdlib.h>
#include <stdio.h>...
(disregard this... browser issue)
Yes.... but I threw a few more tableouts() in there.... I'll keep working on it.
I know gcc is a good compiler, so globals in general are not bad, is that correct? It's an error on my part somewhere? I've debugged like crazy: it goes in one way, comes out a bit different... And...
I just did a logical run through and some debug strings and that var seems to be ok...
Thanks for the idea though, I should have thought of lint.
Hello all,
I'm writing (have written...) a program that uses genetic algorithms to solve the Travelling Salesman Problem. (Search Google if the idea interests you). I hate to post a whole...
Hello,
I have an array, declared as such:
double a[8];
a[0] through a[7] contain numbers that I would like to sort. Were this simply the issue, I'm sure I could find a function on Google. But...
I was talking to a math major at MIT the other day; he had an interesting take on this. As N approaches infinity, Vn being the Nth term, Vn / V(n-1) becomes (1 + sqrt(5))/2. That is, the ratio between...
OK, but I'm not searching for a particular starting string, I'm searching for only second occurrences of groups already present. Running through every 2-char combo would be a little inefficient...
...
Thanks all... I think I'll go cry now.
Just kidding, but my code is pretty bad isn't it.
Dave_Sinkula, your code is very pretty and I will test it tonight. But your use of pointers and (not...
Dave, the formula I provided is not the proper one. It is correctly implemented in the code; I checked that many times over.
The line
double top=0;
Does that need to be
double top=0.0; ...
:mad:
I've got this one little function that calculates standard deviation. All the separate pieces work, but together it blows up. I simply cannot figure out why.
I'm trying to get this...
But a managed switch can be changed to allow a larger MTU.
Nevermind I found it, Thanks
Jez, I don't have to define a window or anything?
-Jack Carrozzo
The network stack takes care of frame size for you. If you allow jumbo frames, as root type:
ifconfig eth0 mtu 9000
Remember that only other computers with jumbos enabled can receive what you...
Hello,
I've written a math code that does some pretty things, and outputs its data in a text file. I'd like to now write a visualizer. Thus, I would simply read through my file, and have an array,...
So to recap, if I simply use a double rather than float it should work?
Sorry to not have figured this out myself.
Thanks guys, interesting discussion too ;-) | https://cboard.cprogramming.com/search.php?s=09b7b041a5bdab4c3202f23e13414453&searchid=2221931 | CC-MAIN-2019-51 | refinedweb | 594 | 85.79 |
Euler Problem 17: Number Letter Counts
Euler Problem 17 asks to count the letters in numbers written as words. This is a skill we all learnt in primary school mainly useful when writing cheques—to those that still use them.
Each language has its own rules for writing numbers. My native language Dutch has very different logic to English. Both Dutch and English use a single syllable until the number twelve. Linguists have theorised this is evidence that early Germanic numbers were duodecimal. This factoid is supported by the importance of a “dozen” as a counting word and the twelve hours in the clock. There is even a Dozenal Society that promotes the use of a number system based on 12.
The English language changes the rules when reaching the number 21. While we say eight-teen in English, we do no say “one-twenty”. Dutch stays consistent and the last number is always spoken first. For example, 37 in English is “thirty-seven”, while in Dutch it is written as “zevenendertig” (seven and thirty).
Euler Problem 17 Definition

If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total. If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used? (Do not count spaces or hyphens.)

The first piece of code provides a function that generates the words for numbers 1 to 999,999. This is more than the problem asks for, but it might be a useful function for another application. The last line concatenates all words together and removes the spaces.
numword.en <- function(x) {
  if (x > 999999) return("Error: Outside my vocabulary")
  # Vocabulary
  single <- c("one", "two", "three", "four", "five",
              "six", "seven", "eight", "nine")
  teens <- c("ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
             "sixteen", "seventeen", "eighteen", "nineteen")
  tens <- c("ten", "twenty", "thirty", "forty", "fifty",
            "sixty", "seventy", "eighty", "ninety")
  # Translation
  numword.10 <- function(y) {
    a <- y %% 100
    if (a != 0) {
      and <- ifelse(y > 100, "and", "")
      if (a < 20)
        return(c(and, c(single, teens)[a]))
      else
        return(c(and, tens[floor(a / 10)], single[a %% 10]))
    }
  }
  numword.100 <- function(y) {
    a <- (floor(y / 100) %% 100) %% 10
    if (a != 0) return(c(single[a], "hundred"))
  }
  numword.1000 <- function(y) {
    a <- (1000 * floor(y / 1000)) / 1000
    if (a != 0) return(c(numword.100(a), numword.10(a), "thousand"))
  }
  numword <- paste(c(numword.1000(x), numword.100(x), numword.10(x)),
                   collapse = " ")
  return(trimws(numword))
}

answer <- nchar(gsub(" ", "", paste0(sapply(1:1000, numword.en), collapse = "")))
print(answer)
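As a cross-check on the R code above, here is a minimal Python sketch of the same letter count (my addition, not part of the original post; the helper names are illustrative):

```python
ones = ["", "one", "two", "three", "four", "five", "six", "seven", "eight",
        "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
        "sixteen", "seventeen", "eighteen", "nineteen"]
tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def spell(n):
    # Spell n (1..1000) without spaces or hyphens, British style ("and").
    if n == 1000:
        return "onethousand"
    parts = []
    if n >= 100:
        parts.append(ones[n // 100] + "hundred")
        if n % 100:
            parts.append("and")
        n %= 100
    if n >= 20:
        parts.append(tens[n // 10])
        n %= 10
    if n:
        parts.append(ones[n])
    return "".join(parts)

total = sum(len(spell(i)) for i in range(1, 1001))
print(total)  # 21124
```

The two known sanity checks from the problem statement hold: 342 spells to 23 letters and 115 to 20 letters.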
Writing Numbers in Dutch
I went beyond Euler Problem 17 by translating the code to spell numbers in Dutch. Interesting bit of trivia is that it takes 307 fewer characters to spell the numbers 1 to 1000 in Dutch than it does in English.
It would be good if other people can submit functions for other languages in the comment section. Perhaps we can create an R package with a multi-lingual function for spelling numbers.
numword.nl <- function(x) {
  if (x > 999999) return("Error: Getal te hoog.")
  single <- c("een", "twee", "drie", "vier", "vijf",
              "zes", "zeven", "acht", "negen")
  teens <- c("tien", "elf", "twaalf", "dertien", "veertien", "vijftien",
             "zestien", "zeventien", "achttien", "negentien")
  tens <- c("tien", "twintig", "dertig", "veertig", "vijftig",
            "zestig", "zeventig", "tachtig", "negentig")
  numword.10 <- function(y) {
    a <- y %% 100
    if (a != 0) {
      if (a < 20)
        return(c(single, teens)[a])
      else
        return(c(single[a %% 10], "en", tens[floor(a / 10)]))
    }
  }
  numword.100 <- function(y) {
    a <- (floor(y / 100) %% 100) %% 10
    if (a == 1) return("honderd")
    if (a > 1) return(c(single[a], "honderd"))
  }
  numword.1000 <- function(y) {
    a <- (1000 * floor(y / 1000)) / 1000
    if (a == 1) return("duizend ")
    if (a > 0) return(c(numword.100(a), numword.10(a), "duizend "))
  }
  numword <- paste(c(numword.1000(x), numword.100(x), numword.10(x)),
                   collapse = "")
  return(trimws(numword))
}

antwoord <- nchar(gsub(" ", "", paste0(sapply(1:1000, numword.nl), collapse = "")))
print(antwoord)
print(answer - antwoord)
Deprecated since version 3.3: You may check the file path's suffix against the supported suffixes listed in importlib.machinery to infer the same information.

Return the name of the module named by the file path, without including the names of enclosing packages. The file extension is checked against all of the entries in importlib.machinery.all_suffixes(). If it matches, the final path component is returned with the extension removed. Otherwise, None is returned.
Note that this function only returns a meaningful name for actual Python modules - paths that potentially refer to Python packages will still return None.
Changed in version 3.3: This function is now based directly on importlib rather than the deprecated getmoduleinfo().

Return a list of source lines and starting line number for an object. The argument may be a module, class, method, function, traceback, frame, or code object. The source code is returned as a list of the lines corresponding to the object, and the line number indicates where in the original source file the first line of code was found. An OSError is raised if the source code cannot be retrieved.
Return the text of the source code for an object. The argument may be a module, class, method, function, traceback, frame, or code object. The source code is returned as a single string. An OSError is raised if the source code cannot be retrieved.
Clean up indentation from docstrings that are indented to line up with blocks of code. Any whitespace that can be uniformly removed from the second line onwards is removed. Also, all tabs are expanded to spaces.
New in version 3.3.
The Signature object represents the call signature of a callable object and its return annotation. To retrieve a Signature object, use the signature() function.
Return a Signature object.
Note
Some callables may not be introspectable in certain implementations of Python. For example, in CPython, built-in functions defined in C provide no metadata about their arguments.
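A brief illustration of the basics (a sketch; the function greet here is just an example, not part of the reference text):

```python
from inspect import signature

def greet(name, punctuation="!"):
    return name + punctuation

sig = signature(greet)
print(sig)                                    # (name, punctuation='!')
print(list(sig.parameters))                   # ['name', 'punctuation']
print(sig.parameters["punctuation"].default)  # !
```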
A Signature object represents the call signature of a function and its return annotation. For each parameter accepted by the function it stores a Parameter object in its parameters collection.
The optional parameters argument is a sequence of Parameter objects, which is validated to check that there are no parameters with duplicate names, and that the parameters are in the right order, i.e. positional-only first, then positional-or-keyword, and that parameters with defaults follow parameters without defaults. Signature objects are immutable; use Signature.replace() to make a modified copy.
A special class-level marker to specify absence of a return annotation.
An ordered mapping of parameters’ names to the corresponding Parameter objects.
The “return” annotation for the callable. If the callable has no “return” annotation, this attribute is set to Signature.empty.
Create a mapping from positional and keyword arguments to parameters. Returns BoundArguments if *args and **kwargs match the signature, or raises a TypeError.
Works the same way as Signature.bind(), but allows the omission of some required arguments (mimics functools.partial() behavior.) Returns BoundArguments, or raises a TypeError if the passed arguments do not match the signature.
Create a new Signature instance based on the instance replace was invoked on. It is possible to pass different parameters and/or return_annotation to override the corresponding properties of the base signature. To remove return_annotation from the copied Signature, pass in Signature.empty.
>>> def test(a, b):
...     pass
>>> sig = signature(test)
>>> new_sig = sig.replace(return_annotation="new return anno")
>>> str(new_sig)
"(a, b) -> 'new return anno'"
Parameter objects are immutable. Instead of modifying a Parameter object, you can use Parameter.replace() to create a modified copy.
A special class-level marker to specify absence of default values and annotations.
The name of the parameter as a string. Must be a valid python identifier name (with the exception of POSITIONAL_ONLY parameters, which can have it set to None).
The default value for the parameter. If the parameter has no default value, this attribute is set to Parameter.empty.
The annotation for the parameter. If the parameter has no annotation, this attribute is set to Parameter.empty.
Create a new Parameter instance based on the instance replace was invoked on. To override a Parameter attribute, pass the corresponding argument. To remove a default value or/and an annotation from a Parameter, pass Parameter.empty.

>>> from inspect import Parameter
>>> param = Parameter('foo', Parameter.KEYWORD_ONLY, default=42)
>>> str(param)
'foo=42'
>>> str(param.replace())
'foo=42'
>>> str(param.replace(default=Parameter.empty, annotation='spam'))
"foo:'spam'"
Result of a Signature.bind() or Signature.bind_partial() call. Holds the mapping of arguments to the function’s parameters.
An ordered, mutable mapping (collections.OrderedDict) of parameters’ names to arguments’ values. Contains only explicitly bound arguments. Changes in arguments will reflect in args and kwargs.
Should be used in conjunction with Signature.parameters for any argument processing purposes.
Note
Arguments for which Signature.bind() or Signature.bind_partial() relied on a default value are skipped. However, if needed, it is easy to include them.
>>> def foo(a, b=10):
...     pass
>>> sig = signature(foo)
>>> ba = sig.bind(5)
>>> ba.args, ba.kwargs
((5,), {})
>>> for param in sig.parameters.values():
...     if param.name not in ba.arguments:
...         ba.arguments[param.name] = param.default
>>> ba.args, ba.kwargs
((5, 10), {})
A tuple of positional arguments values. Dynamically computed from the arguments attribute.
A dict of keyword arguments values. Dynamically computed from the arguments attribute.
The args and kwargs properties can be used to invoke functions:
def test(a, *, b):
    ...

sig = signature(test)
ba = sig.bind(10, b=20)
test(*ba.args, **ba.kwargs)
Note
Consider using the new Signature Object interface, which provides a better way of introspecting functions.

Format a pretty argument spec from the values returned by getargspec() or getfullargspec().
The first seven arguments are (args, varargs, varkw, defaults, kwonlyargs, kwonlydefaults, annotations). The other five arguments are the corresponding optional formatting functions that are called to turn names and values into strings. The last argument is an optional function to format the sequence of arguments. For example:
>>> from inspect import formatargspec, getfullargspec
>>> def f(a: int, b: float):
...     pass
...
>>> formatargspec(*getfullargspec(f))
'(a: int, b: float)'
New in version 3.2.
Note
Consider using the new Signature.bind() instead. is raised if func is not a Python function or method.
New in version 3.3.
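For illustration, a small sketch (the function f is just an example):

```python
from inspect import getcallargs

def f(a, b=1, *pos, **named):
    pass

# Positional arguments are bound to names; extras go to *pos / **named.
print(getcallargs(f, 1, 2, 3))
# {'a': 1, 'b': 2, 'pos': (3,), 'named': {}}
```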
The current internal state of the generator can also be queried. This is mostly useful for testing purposes, to ensure that internal state is being updated as expected.
New in version 3.3. | http://www.wingware.com/psupport/python-manual/3.3/library/inspect.html | CC-MAIN-2014-41 | refinedweb | 878 | 53.37 |
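A small sketch of querying a generator's state and live locals (the generator counter is illustrative):

```python
import inspect

def counter():
    x = 0
    while True:
        x += 1
        yield x

g = counter()
print(inspect.getgeneratorstate(g))   # GEN_CREATED
next(g)
print(inspect.getgeneratorstate(g))   # GEN_SUSPENDED
print(inspect.getgeneratorlocals(g))  # {'x': 1}
```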
Hello,
Do you guys recommend using IronPython? I've tested the BPCSharedComponent.dll with it and it works, which means any other C# lib will work just fine. Won't this be nearly perfect for making audio games if the power and simplicity of Python is combined with C# libraries that weren't otherwise available in Python?
Hello,
Pythonnet is good as well (I'm using it in my Python environment).
You may install it into your Python environment by using
pip install pythonnet
then import it like
import clr
then you can add references to .NET assemblies and import from them like the following:
import clr
clr.AddReference("System")
from System import String
Then when you compile it using something like PyInstaller, it doesn't compile to an IL assembly and it won't be decompiled using .NET decompilers, but you still get the advantage of using .NET runtime libraries.
IronPython, on the other hand, compiles to a .NET assembly.
Pythonnet took a bit of a quirky workaround (and finally me just installing it via git+) to actually get it to install.
But the great thing about it is that when compiled into an exe, it will not be a CLR app, which could easily be decompiled.
Yes, but it will be a Python app that will easily be decompiled, either by just locating its temp extraction directory or by managing to freeze execution and pull code from a dump. Really, Python is no more secure in that regard than .NET is (and .NET is probably more secure, since you can actually obfuscate your code).
Hi,
At Ethin how did you manage to install it? I installed the wheel module and also tried installing from git directly but it didn't work?
@106, I honestly don't know how it worked. I have a ton of packages from pip that I don't think you'd need, but yes, you do need the wheel package. Not really sure though exactly what others you need though.
Hi, I want to use IronPython, but I am not sure where to find documentation for BPCSharedComponent.dll so I can see the functions...
never give up on what ever you are doing.
Oriol, you may have heard something like this before, but BGT is good, so why do you even need to switch? I agree that JS is bad, but why did you switch? But if you really want a cross-platform language, try C++, although I don't know if it is as easy to install a keyhook as in BGT.
Best regards
T-m
BGT is not good
is preferable to
I felt the passing of your wind
My my, we again came down to the level of BGT-bashing without arguments, did we?
Fact is: Oriol wants to go away from BGT and JS. Both have their pros and cons, but they don't matter, since Oriol stated that he wants to use Python, so no reason to discuss this any further.
Best Regards.
Hijacker
Who said js is bad?
I enjoy using js a lot.
I was looking into python alternatives, but I'm sticking to js for now.
Python is another great choice, but JS does all I need, except maybe sound encryption, but I could probably find a way around that.
I myself have been thinking of picking up Python, so here's my question before I undertake learning another language. With all of the talk in this thread of libraries that are no longer supported or updated, with no modern alternatives to them, I want to make sure that this isn't going to go the way of BGT, where basically everything is abandoned and we're just left in the lurch. Is there a better language other than Python that actually keeps its libraries up to date? My question is: if Python is so popular, why are all of these libraries being abandoned and not updated?
Oh, they aren't. The thing is just that people tend to stick to the libraries they know instead of moving to new alternatives, such as pybass, which is pretty damn old, sound_lib, which is too, accessible_output2, and so on. Also, most of the screen reader libraries out there have several flaws, like tolk not being an actual Python package and thus requiring you to fiddle around with its DLLs. If people would instead look for more modern alternatives, implement new ones, or enhance existing packages like tolk to improve the user experience, things wouldn't go this way.
I myself mentioned Bass4Py several times, which is a nice cython-based wrapper for BASS, even though its not yet entirely complete, one or two more hands helping with its development could get it finished pretty fast and thus create a new competitor for the market, but instead people simple continue to use the old but well known wrappers.
Best Regards.
Hijacker
@tmstuff000, have you ever programmed in C++? I'll tell you right now that its not something you can just pick up in a month like you can Python. C++ has far more intricacies than Python does, and some weird/unusual things as well (auto type deduction can get wonky results with things like std::vector<bool>::reference, for example). Please take your own advice and try to learn C++ and tell us about your success. I love C++, I do indeed; but I would never subject someone who wants to make a simple audio game to the headaches and confusion C++ can bring without them actually ready to take on that task. In fact, C++ seems like a bit of overkill for an audio game. A very complex one, sure, it would work, but it definitely isn't simple. I have an app (mainly written in C) that I made just to fiddle around, allowing you to press joystick keys and it would vibrate your joystick and move an oscillator around the stereo field. It had joystick feature detection too but either way, its about.... 400 lines of C code or so? Granted, I could've simplified that by a bit with C++, but that wasn't the goal of my experiment. Didn't help that I was using a binary tree though...
C++ is ultimately a powerful -- a very powerful -- language. But it is also a very dangerous language. Python will protect you from [most] of the dangers that lower-level programming offers, but C++ will not. C++ will quite happily let you drown yourself (metaphorically) in vulnerabilities, buffer overflows, extremely confusing compilation errors, and more, without a second metaphorical thought. Go ahead and learn it if you like, but when you actually want to make something with it, learn how to do things safely before you embark on your mission.
@113, yeah, Python does (usually) keep libraries up to date. I don't have a library that's out of date in my collection of.... 299 libraries.
@114, Are there actual newer, modern libraries that work? Is there a reason why people are sticking to the older ones? Is there nothing newer offered? @115, I've seen several things in this thread saying this or that is old, not being updated, or discontinued, hence the question.
@116
Python as a whole is rather healthy, exceedingly so in fact, so you have no need to worry about it disappearing tomorrow, or even in the next decade most likely. For some context, there are currently more than 113,000 packages available through PyPI alone, with many more being added and updated all the time. In the case of pybass and Bass4Py, both are built on top of the same underlying BASS library, and there are other alternative sound libraries like OpenAL, Libaudioverse, etc. floating around. Libraries like sound_lib and accessible_output2 were part of the continuum website and aren't really maintained anymore, being rather niche to begin with. Of those two, accessible_output2 was one of the few cross-platform TTS libraries available, which is unfortunate but not really a problem directly tied to Python specifically, and it's not like there aren't workarounds.
so, if sound_lib and accessible_output2 aren't being maintained anymore, are there newer ones that are? I guess that's my point. When packages are discontinued and no longer supported, do new ones take their place, or is there just a void then.
It depends; that could be asked of any programming language, really. It depends on incentives and need: private companies can create their own libraries, and communities and developers can make their own open source alternatives, with people coming and going to maintain them as they want. It's a very fluid thing that can depend in large part on a language's popularity and utility. Sometimes there can be a void, but depending on the requirements, solutions will be found or created. Programming is all about problem solving after all, and some day you may find yourself looking for alternatives, maintaining a library yourself, or creating a solution to a problem that doesn't yet exist.
As for sound_lib, there are a few alternatives available, many of which are probably better, and while there are few cross-platform TTS libraries, it's not the end of the universe. You can still write platform-dependent TTS code or use pre-recorded audio; it's just less convenient. This isn't to say people aren't also actively looking into better solutions with machine learning or voice synthesis: you may have heard of Lyrebird or Google's WaveNet, for example. I'm sure there are plenty of people interested in those developments.
So maybe my questions will make me seem like an idiot, but I'll ask them anyway. As the name of this topic suggests, if I want to start to learn Python, what resources are the best for achieving that goal? Googling "python tutorial" gives a ton of results, but how do I know which ones are legitimately good and which ones are crap? In terms of audiogame development, since that's what I'd use it for, which tutorials and/or other resources would best help with learning how to set up and manipulate sound correctly. Also, in terms of audiogame development, which libraries/engines would be best for this, and which ones are documented well, maintained and updated, and have active communities in case of questions? There appear to be several to choose from based on what I have found on Google. Pygame, Pyglet, Ren’Py, Arcade, Cocos2d, Panda3d, probably others that I haven't seen..
Python's not a bad choice. Besides, you can always learn a new programming language later with much more ease after your first one. Learning C next to Python might be beneficial later, since Python is relatively slow compared to C. Python can import C code, though (I'm simplifying this), so if you ever need super fast performance, for example when coding a huge 3D map system, C might help. Then again, Python has a near limitless amount of libraries already made, so you might not have to code this yourself. But you should focus on learning the basics first before worrying about whether a language is useful for a specific task. It's better to just learn something and later discover another tool might have a few advantages and learn it afterwards, rather than obsessing over which programming language to use and not getting anywhere in the meantime.
golfing in the kitchen
O wow. Leave it to me to write up a lengthy reply to another topic, and then post it into the wrong topic.. Never mind...
golfing in the kitchen.
Thanks for posting. As someone who has learned the basics of python, and is now looking to start making games, this gives me a much clearer idea of what I’ll need to learn.
I found plenty of tutorials online that show how to make games with the libraries mentioned in your post; however, they don't mention how to add audio. This is where I am a bit confused. Is it as simple as adding sounds into the code of the main game at the correct places, using the audio libraries you mentioned, or is it a more complicated process than that?
I would greatly appreciate any advice you can provide.
That's totally up to you and your desired level of professionalism. There are multiple approaches: one of them is just inserting the right sounds at the corresponding lines of code and playing and stopping the background music at the right time; other concepts utilize a so-called event system, where you plan all the sound-specific stuff outside of code and just tell the code which event it should invoke, so the code doesn't have to worry about any of the sounds at all. It all depends on the tools you choose and the way you want to go.
Best Regards.
Hijacker | https://forum.audiogames.net/post/418827/ | CC-MAIN-2019-26 | refinedweb | 2,160 | 68.3 |
BFS (Breadth First Search) is a graph traversal technique where a node and its neighbors are visited first, and then the neighbors of those neighbors. In simple terms, it traverses level-wise from the source: first it traverses the level-1 nodes (direct neighbors of the source node), then the level-2 nodes (neighbors of neighbors of the source node), and so on.
Now, suppose we need to know which level each node is at (relative to the source node). BFS can be used to determine the level of each node.
Examples:
Input :
Output :
Node   Level
0      0
1      1
2      1
3      2
4      2
5      2
6      2
7      3
Python3
# Python3 Program to determine level
# of each node and print level
import queue
# function to determine level of
# each node starting from x using BFS
def printLevels(graph, V, x):
# array to store level of each node
level = [None] * V
marked = [False] * V
# create a queue
que = queue.Queue()
# enqueue element x
que.put(x)
# initialize level of source
# node to 0
level[x] = 0
# marked it as visited
marked[x] = True
# do until queue is empty
while (not que.empty()):
# get the first element of queue
x = que.get()
# traverse neighbors of node x
for i in range(len(graph[x])):
# b is neighbor of node x
b = graph[x][i]
# if b is not marked already
if (not marked[b]):
# enqueue b in queue
que.put(b)
# level of b is level of x + 1
level[b] = level[x] + 1
# mark b
marked[b] = True
# display all nodes and their levels
print("Nodes", " ", "Level")
for i in range(V):
print(" ", i, " --> ", level[i])
# Driver Code
if __name__ == '__main__':
# adjacency graph for tree
V = 8
graph = [[] for i in range(V)]
graph[0].append(1)
graph[0].append(2)
graph[1].append(3)
graph[1].append(4)
graph[1].append(5)
graph[2].append(5)
graph[2].append(6)
graph[6].append(7)
# call levels function with source as 0
printLevels(graph, V, 0)
# This code is contributed by PranchalK
Output:
Nodes   Level
  0  -->  0
  1  -->  1
  2  -->  1
  3  -->  2
  4  -->  2
  5  -->  2
  6  -->  2
  7  -->  3
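The same level computation can be written more compactly with collections.deque; this sketch (my addition) is equivalent to the code above and runs in O(V + E):

```python
from collections import deque

def bfs_levels(graph, source):
    # graph: adjacency list; returns {node: level} for all reachable nodes
    level = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in level:          # not visited yet
                level[v] = level[u] + 1
                q.append(v)
    return level

# Same tree as in the driver code above
graph = [[1, 2], [3, 4, 5], [5, 6], [], [], [], [7], []]
print(bfs_levels(graph, 0))
# {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 2, 6: 2, 7: 3}
```

Using the dictionary both as the visited set and as the level store removes the need for the separate marked array.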
#include <abstract.h>
Inheritance diagram for FieldAbstract:
Found in the file linbox/field/abstract.h. Abstract base class used to implement the field archetype to minimize code bloat. All public member functions of this class are pure virtual and must be implemented by all derived classes.
If a template is instantiated on the field archetype, we can change the field it is using by changing the derived class of this class. This allows us to change the field used in a template without having to reinstantiate it. This minimizes code bloat, but it also introduces indirection through the use of pointers and virtual functions which is inefficient. | http://www.linalg.org/linbox-html/classLinBox_1_1FieldAbstract.html | crawl-001 | refinedweb | 107 | 54.02 |
clibu 1.1.2
A tiny library to make the writing of command-line apps intuitive.
CLI 💻 #
Making the writing of command-line apps in Dart intuitive. 💻
About 📚 #
Some time ago, I wanted to make a command-line tool in Dart. This is when I noticed that I didn't have a clue about how to make a responsive and "classic" command-line app. CLI covers this need. It allows you to make command-line apps in the way you are used to from GNU programs. Flags like --help or --version are provided out of the box.
Installation 📥 #
Adding to your project #
To add CLI to your project's dependencies, add this line to your project's
pubspec.yaml:
From GitHub
dependencies:
  ...
  clibu:
    git: git://github.com/iamtheblackunicorn/cli.git
From Pub.dev
dependencies:
  ...
  clibu: ^1.1.2
The three dots represent anything else that you might have in the
dependencies section. Having done that, re-fetch your project's dependencies by running this in the project's root directory:
$ dart pub get
Usage 🔨 #
Importing #
Import the command-line API like this:
import 'package:clibu/clibu.dart';
Import the API for files like this:
import 'package:clibu/files.dart';
API #
COMMAND-LINE API
class CommandLineApp
Key command-line app class. The entire app lives in this class.
void addArgument(String argumentName, String helpMessage, bool isActive)
Adds an argument to your app!
void appHelpMessage()
"Batteries-included" app help text!
Prints help info about the app when the app is invoked with --help or -h.
void appInfoMessage()
"Batteries-included" app info text!
Prints info about the app when the app is invoked with info, --info, or -i.
void appVersionMessage()
"Batteries-included" app version text!
Prints version info about the app when the app is invoked with version, --version, or -v.
bool argumentWasUsed(List<String> arguments, String argumentName)
User method to check if an argument was used!
String getArgumentData(List<String> arguments, String argumentName)
User method to fetch the data of an argument!
This will only work if the isActive flag is true.
void runApp(List<String> arguments)
This method runs the app!
Batteries-included flags of info and version.
FILES API
void runCommand(String shellCommand)
Runs a shell command and prints the output from STDERR and STDOUT.
String getFileContents(String filePath)
Returns the contents of a file as a string.
Map<String,dynamic> getJSONMap(String jsonString)
Returns a JSON string as a map.
String mapToJSON(Map<String,dynamic> jsonData)
Converts a Dart Map to a JSON string.
void writeToFile(String filePath, String fileContents)
Writes a string to a file.
bool fileExists(String filePath)
Checks whether a file exists.
void testFileFunctions()
A function to test all of the file functions.
Example 📲 #
This is what a minimal example using CLI Black Unicorn would look like.
/* CLI by Alexander Abraham a.k.a. The Black Unicorn licensed under the MIT license */ import 'package:clibu/clibu.dart'; // Inherits from the original class, // "CommandLineApp". class TestApp extends CommandLineApp { @override String appName = 'Test'; @override String appVersion = '1.0'; @override String appAuthor = 'The Black Unicorn'; @override String appLicense = 'MIT license'; @override Map<String, dynamic> argumentsDataBase = {}; } // Function to execute when the option // is called. void greet(String name) { String greeting = 'Hello, $name!'; print(greeting); } // Main entry point for the Dart VM. void main(List<String> arguments) { TestApp myApp = TestApp(); myApp.addArgument('--greet', 'greets the user with a specified name', true); if (myApp.argumentWasUsed(arguments, '--greet') == true) { greet(myApp.getArgumentData(arguments, '--greet')); } myApp.runApp(arguments); // finally running the app }
Note 📜 #
- CLI by Alexander Abraham a.k.a. The Black Unicorn
- licensed under the MIT license | https://pub.dev/packages/clibu | CC-MAIN-2021-10 | refinedweb | 591 | 60.92 |
In this post, you will learn how to create an AWS Application Load Balancer (ALB) for your EC2 instances running a Spring Boot application. You will also create an Autoscaling Group (ASG) which will simplify the setup and will automatically scale-in and scale-out.
1. Introduction
A load balancer will make it possible to distribute the workload across multiple EC2 instances. A client application will connect to the load balancer without knowing which EC2 instance will handle the request. Because of this, EC2 instances can come and go without impacting your client requests. It is transparent because of the load balancer. It is easy to scale-out and thus to add more EC2 instances when traffic increases, or to scale-in and reduce the number of EC2 instances in order to reduce costs when traffic gets low.
In this post, you will learn how to setup an Application Load Balancer (note that there are other types as well, dependent on your need) which distributes traffic between several EC2 instances running a basic Spring Boot application. Next, you will learn how to use an Autoscaling Group, which will simplify things quite a bit.
In case that you do not have any knowledge how to create EC2 instances, read a previous blog. The sources used in this post are available at GitHub. The resources being used in this post will not cost you a thing, only Free Tier is used.
2. Sample Application
The basic application contains of a simple Hello endpoint which returns a message containing the host name of the machine which handled the request.
@RestController public class HelloController { @GetMapping("/hello") public String hello() { String message = "Hello AWS!"; try { InetAddress ip = InetAddress.getLocalHost(); message += " From host: " + ip; } catch (UnknownHostException e) { e.printStackTrace(); } return message; } }
Also the health endpoint of Spring Actuator is added which you can use in order to enable the health check of the EC2 instance.
In case you want to run the application locally, just run the following command:
$ mvn spring-boot:run
Verify whether the endpoints are available:
$ curl Hello AWS! From host: <name of your machine>/127.0.1.1 $ curl {"status":"UP"}
Create the jar file:
$ mvn clean verify
The jar file is uploaded to GitHub as a release in order to be able to download it to the EC2 instance.
3. Create the EC2 Instances
Create 2 EC2 instances in two different availability zones with the following User Data:
#!/bin/bash yum -y install java-11-amazon-corretto-headless wget java -jar MyAWSPlanet-0.0.1-SNAPSHOT.jar
During startup of the EC2 instance, Java 11 is downloaded and installed, the jar file is downloaded and the jar file is started.
Create a new Security Group and leave the default SSH access inbound rule for now. Name the Security Group ec2-sg. Now launch the EC2 instances.
4. Create the ALB
In the left menu, navigate to Load Balancers in the Load Balancing section and click the Create Load Balancer button. Here you can choose the type of load balancer you want to use. Choose Application Load Balancer by clicking the Create button.
In Step 1, you give the load balancer the name MyFirstLoadBalancer.
Set the listener to port 8080.
You also enable the availability zones for the load balancer. Check in which availability zones your EC2 instances are running and enable the same availability zones. Click the Next: Configure Security Settings button.
In Step 2, just click the Next: Configure Security Groups button.
In Step 3, create a new Security Group alb-sg for your ALB allowing HTTP traffic to port 8080. Click the Next: Configure Routing button.
In Step 4, you need to create a Target Group. A target group can be a number of EC2 instances the ALB will send traffic to. Name the target group MyFirstTargetGroup, set the port to 8080 and set the health check path to /actuator/health. Click the Next: Register Targets button.
In Step 5, you need to add the EC2 instances you want to include in the target group. Select both EC2 instances and click the Add to registered button. Click the Next: Review button.
At the end, you are able to review all the settings and click the Create button. You need to wait some time before the ALB is active.
Try to invoke the URL with the DNS Name of the load balancer.
$ curl <html> <head><title>504 Gateway Time-out</title></head> <body> <center><h1>504 Gateway Time-out</h1></center> </body> </html>
The time-out is expected. The EC2 instances only allow SSH traffic and no HTTP traffic. In the left menu, navigate to Security Groups in the Network & Security section. Select the ec2-sg security group and click the Edit inbound rules button of the Inbound rules tab.
Add a rule which allows HTTP traffic over port 8080. As a source, you choose the security group alb-sg of your ALB. This means that your EC2 instances cannot be reached directly over HTTP but only via the load balancer. Click the Save rules button.
Try to invoke the URL again and now the Hello message is returned. It will be more or less equally divided between the two machines when you invoke it several times.
$-21-171.eu-central-1.compute.internal/172.31.21.171 $ curl Hello AWS! From host: ip-172-31-21-171.eu-central-1.compute.internal/172.31.21.171 ...
Navigate to the Target Groups in the Load Balancing section and take a look at the health of the instances. Here you can see that both EC2 instances are healthy based on the configured health check.
In order to cleanup everything, you need to delete the load balancer, the target group, terminate the EC2 instances, delete the EC2 security group and finally delete the ALB security group. If you want to continue to follow this blog, only terminate the EC2 instances, you will need the other items in the next section.
5. Create the ASG
Creating an ALB is easy and has its advantages, but you still need to manually add and remove instances. This problem is solved with an Auto Scaling Group (ASG). Based on a template, scale-out and scale-in will be done automatically based on triggers you define.
In the left menu, navigate to Auto Scaling Groups in the Auto Scaling section and click the Create Auto Scaling Group button.
Give the ASG the name MyFirstASG. Next thing to do, is to create a Launch Template. The Launch Template will provide information about how to create EC2 instances. Click the Create a launch template link.
A new browser tab is opened where you first give the template the name MyFirstLaunchTemplate.
Fill in the AMI, the instance type and the key pair to be used just like you did while creating the EC2 instance.
In the Network settings, choose the ec2-sg security group.
Leave the Network settings, Storage, Resource tags and Network interfaces as default. Unfold the Advanced details section and scroll all the way down. Copy the User Data which you used for creating the EC2 instances before into the User data field. Finally, click the Create launch template button.
Return to the create ASG wizard, click the refresh button next to the Launch template field and select the just created launch template. Click the Next button.
In Step 2, select the three subnets and click the Next button.
In Step 3, choose to attach an existing load balancer target groups and select MyFirstTargetGroup.
In the Health Checks section, you enable ELB. This way, the ELB will automatically restart when it is unhealthy. Click the Next button.
In Step 4, you define the Group size with a Desired capacity of 2, a Minimum capacity of 2 and a Maximum capacity of 3.
With the Scaling policies, you can define how the autoscaling should take place when you choose Target tracking scaling policy. For now, just choose None, but it is good to take a look at the different options to automatically scale. Click the Next button.
In Step 5 and Step 6, just click the Next button. It is possible to add notifications and tags, but you will not use it in this tutorial. In Step 7, you can review everything and at the bottom of the page you click the Create auto scaling group button. It may take a few minutes before everything is up-and-running. You will see the following:
- In the Auto Scaling section:
- The Auto Scaling Group
- In the Instances section:
- Two EC2 instances which have been created because the minimum and desired capacity was set to 2
- The created Launch Template
Verify again by means of
curl whether the Hello URL can be accessed.
Navigate to the Auto Scaling Group and the Details tab and click the Edit button.
Increase the desired and minimum capacity and click the Update button.
Automatically, a new instance is provisioned. Wait a few minutes until the instance is marked as being healthy in the Target Group and after this, the traffic is distributed over three EC2 instances.
In order to cleanup everything, you need to delete the Auto Scaling Group (this can take a while), the load balancer, the target group, the EC2 security group and finally delete the ALB security group.
6. Conclusion
In this post, you learned how to create an ALB as a single point of access for your EC2 instances. Problem with this setup was that you needed to manually add or remove instances when traffic got high or low. The ASG will solve this problem. It will launch or terminate EC2 instances based on several scaling policies (which you did not configure in this post, but just took a look at). | https://mydeveloperplanet.com/2021/06/23/how-to-create-an-aws-alb-and-asg/ | CC-MAIN-2022-05 | refinedweb | 1,630 | 64.51 |
It's about time to turn that big
README.md file from your project into something that supports nice-looking markdown-driven documentation, such as MkDocs.
But we have the following requirements:
In this article, we are going to do exactly that.
What do we want to achieve?
We want to open
/docs and if we have a login session, see the documentation. Otherwise - be redirected to login.
First, we are going to setup our Django project and create
docs app.
$ django-admin startproject django_mkdocs $ cd django_mkdocs $ python manage.py startapp docs
And since we are going to serve the documentation as static content from our docs app:
$ mkdir docs/static
Then, we need to install
MkDocs:
$ pip install mkdocs
and start a new
MkDocs project:
$ mkdocs new mkdocs
This will create a new documentation project in
mkdocs folder. This is where we are going to store our documentation markdown files.
We need to do some moving around since we want to end up with
mkdocs.yml at the same directory level as
manage.py:
$ mv mkdocs/docs/index.md mkdocs/ $ mv mkdocs/mkdocs.yml . $ rm -r mkdocs/docs
We need to end up with the following dir structure:
. ├── django_mkdocs │ ├── __init__.py │ ├── settings.py │ ├── urls.py │ └── wsgi.py ├── docs │ ├── admin.py │ ├── apps.py │ ├── __init__.py │ ├── migrations │ │ └── __init__.py │ ├── static │ ├── models.py │ ├── tests.py │ ├── urls.py │ └── views.py ├── manage.py ├── mkdocs │ └── index.md └── mkdocs.yml
We want to achieve two things:
mkdocsfolder.
docs/static/mkdocs_buildfolder. Django will be serving from this folder.
Of course, those folder names can be changed to whatever you like.
We end up with the following
mkdocs.yml file:
site_name: My Docs docs_dir: 'mkdocs' site_dir: 'docs/static/mkdocs_build' pages: - Home: index.md
Now, if we run the test mkdocs server:
$ mkdocs serve
We can open and see our documentation there.
Finally, lets build our documentation:
$ mkdocs build
You can now open
docs/static/mkdocs_build and explore it. Open
index.html in your browser. This is a neat static web page with our documentation.
Now, the interesting part begins.
We want to serve our documentation from
/docs so the first thing we are going to do is redirect
/docs to
docs/urls.py.
In
django_mkdocs/urls.py change to the following:
from django.conf.urls import url, include from django.contrib import admin urlpatterns = [ url(r'^admin/', admin.site.urls), url(r'^docs/', include('docs.urls')) ]
Now, lets create
docs/urls.py and
docs/views.py with some default values:
""" docs/urls.py """ from django.conf.urls import url from .views import serve_docs urlpatterns = [ url(r'^$', serve_docs), ]
and
""" docs/views.py """ from django.http import HttpResponse def serve_docs(request): return HttpResponse('Docs are going to be served here')
Now, if we run our Django, we see the response at
Now, we want to catch every URL of the format:
/docs/* and try to find the given path inside
mkdocs_build
Lets start with the regular expression that will match everything. We will use
.* which means "whatever, 0, 1 or more times"
""" docs/urls.py """ from django.conf.urls import url from .views import serve_docs urlpatterns = [ url(r'^(?P<path>.*)$', serve_docs), ]
Now in the view, we will receive a key-word argument called
path:
""" docs/views.py """ from django.http import HttpResponse def serve_docs(request, path): return HttpResponse(path)
If we do some testing, we will get the following values:
/docs/-> empty string
/docs/index.html->
index.html
/docs/about/->
about/
/docs/about->
about
Now, we are almost done. We need to get that
path and try to serve that file from
docs/static/mkdocs_build directory. This is basically static serving from Django.
We will start with adding
DOCS_DIR settings in our
settings.py file, so we can easily concatenate file paths after that.
""" django_mkdocs/settings.py """ # .. rest of the settings DOCS_DIR = os.path.join(BASE_DIR, 'docs/static/mkdocs_build')
Since we are going to serve static files, we can take one of the two approaches:
Option 1 is good for education, option 3 is more efficient, but for this article, we will take option 2, since we can easily achieve what we want.
Since we need to provide the correct path to the desired file, we need to know the so-called namespace in our
docs/staticfolder -
mkdocs_build/
We will take that using
os.path.basename:
""" django_mkdocs/settings.py """ # .. rest of the settings DOCS_DIR = os.path.join(BASE_DIR, 'docs/static/mkdocs_build') DOCS_STATIC_NAMESPACE = os.path.basename(DOCS_DIR)
Now, it's time for
django.contrib.staticfiles.views.serve:
""" docs/views.py """ from django.conf import settings from django.contrib.staticfiles.views import serve def serve_docs(request, path): path = os.path.join(settings.DOCS_STATIC_NAMESPACE, path) return serve(request, path)
Now if we fire up our server and open we should see the index page.
But we want to be even better - opening should also return the index page.
Now, if we inspect the structure of
mkdocs_build and add a few more pages, we will see that there's always
index.html for each page.
We can take advantage of that knowledge in our view:
""" docs/views.py """ import os from django.conf import settings from django.contrib.staticfiles.views import serve def serve_docs(request, path): docs_path = os.path.join(settings.DOCS_DIR, path) if os.path.isdir(docs_path): path = os.path.join(path, 'index.html') path = os.path.join(settings.DOCS_STATIC_NAMESPACE, path) return serve(request, path)
Now opening opens the index page of the documentation. And we are done.
Now, we have this
mkdocs_build string defined both in
settings.py and
mkdocs.yml. We can dry things up with the following code:
$ pip install PyYAML
And change
settings.py to look like that:
""" django_mkdocs/settings.py """ import yaml # ... some settings MKDOCS_CONFIG = os.path.join(BASE_DIR, 'mkdocs.yml') DOCS_DIR = '' DOCS_STATIC_NAMESPACE = '' with open(MKDOCS_CONFIG, 'r') as f: DOCS_DIR = yaml.load(f, Loader=yaml.Loader)['site_dir'] DOCS_STATIC_NAMESPACE = os.path.basename(DOCS_DIR)
And now, we are ready.
Now, for the final part, we can easily reuse Django's auth system and just add the neat
login_required decorator:
""" docs/views.py """ import os from django.conf import settings from django.contrib.auth.decorators import login_required from django.contrib.staticfiles.views import serve @login_required def serve_docs(request, path): docs_path = os.path.join(settings.DOCS_DIR, path) if os.path.isdir(docs_path): path = os.path.join(path, 'index.html') path = os.path.join(settings.DOCS_STATIC_NAMESPACE, path) return serve(request, path)
How you are going to handle this login is now up to you.
Now, if we want to push that to production, you will probably have
DEBUG = False. This will break our implementation, since
django.contrib.staticfiles.views.serve has a check about that.
If we want to have this served in production, we need to pass
insecure=True as kwarg to
serve:
@login_required def serve_docs(request, path): docs_path = os.path.join(settings.DOCS_DIR, path) if os.path.isdir(docs_path): path = os.path.join(path, 'index.html') path = os.path.join(settings.DOCS_STATIC_NAMESPACE, path) return serve(request, path, insecure=True)
Ow, if you have other static files, there's a big chance of having
collectstatic as part of your deployment procedure.
This will also include the
mkdocs_build folder and everyone will have access to the documentation, using
STATIC_URL.
We can avoid putting our documentation in the
STATIC_ROOT directory, by ignoring it when calling
collectstatic:
$ python manage.py collectstatic -i mkdocs_build
If you read the documentation about
django.contrib.staticfiles.views.serve you will see the following warning:
During development, if you use
django.contrib.staticfiles, this will be done automatically by run server when DEBUG is set to True (see
django.contrib.staticfiles.views.serve()).
This method is grossly inefficient and probably insecure, so it is unsuitable for production.
Depending on your needs, this can be good enough.
@login_requiredto hit docs index)
$ ./wrk -t2 -c10 -d30s Running 30s test @ 2 threads and 10 connections Thread Stats Avg Stdev Max +/- Stdev Latency 7.13ms 2.81ms 41.73ms 85.02% Req/Sec 696.29 165.57 1.00k 69.22% 35444 requests in 30.10s, 199.67MB read Socket errors: connect 10, read 0, write 0, timeout 0 Requests/sec: 1177.62 Transfer/sec: 6.63MB | https://www.hacksoft.io/blog/integrating-a-password-protected-mkdocs-in-django | CC-MAIN-2022-40 | refinedweb | 1,353 | 53.37 |
Array Help807601 Feb 16, 2008 5:40 AM
How can I take a string of numbers, and use a recursive method to put these numbers into an array?
String x = 1234
array[0] = 1
array[1] = 2
array[2] = 3
aray [3] = 4
String x = 1234
array[0] = 1
array[1] = 2
array[2] = 3
aray [3] = 4
This content has been marked as final. Show 5 replies
1. Re: Array Help807601 Feb 16, 2008 6:23 AM (in response to 807601)Look at the String class in API document.
It depends on which method to use depending on what kind of array you want to have.
2. Re: Array Help800308 Feb 16, 2008 6:45 AM (in response to 807601)Why recursive? An iterative approach is, IMHO, much more appropriate.
3. Re: Array Help807601 Feb 16, 2008 5:31 PM (in response to 800308)Of Course I know that, but we have to write a program, and it all has to be recursivly.. That is why I asked!
4. Re: Array Help807601 Feb 16, 2008 7:41 PM (in response to 807601)Think about a tail recursive approach.
For example, the word 'hello' could be thought of as h + ello
and ello could be thought of as e + llo
etc
That should be enough to get you started.
If you have any problems post the code you have so far rather than asking for the answer. As trite as it sounds you'll pick up more by having a go yourself first.
5. Re: Array Help807601 Feb 17, 2008 1:39 AM (in response to 807601)I dont want to spoon feed you. I've given you an example how it can be done using characters. Use something similar for numbers.
public class Test15 { static String str="Paul"; static char[] c = new char[str.length()]; public static void main(String[] args) { toArray(0); for (int i = 0; i < c.length; i++) { System.out.println(c);
}
}
static void toArray(int i){
while(str.length()!=i){
c[i]=str.charAt(i);
i++;
toArray(i);
}
}
}
Edited by: The_Matrix on Feb 16, 2008 5:38 PM | https://community.oracle.com/message/4943418?tstart=0 | CC-MAIN-2016-36 | refinedweb | 353 | 81.63 |
Arvind,
(Sorry, I missed this discussion.)
On Feb 28, 2012, at 10:53 AM, Arvind Prabhakar wrote:
> Please see [1] for details on why the code is like this. The short
> summary is that binary compatibility requires us to respect all
> extension points within the code.
>
> [1]
This might be prior to your involvement with Sqoop, but it was initially part of Apache Hadoop
MapReduce as a contrib module prior to being moved out to github.
Thus, does the Sqoop community also plan to maintain back-compat with org.apache.hadoop.sqoop
namespace for older users too?
I can't seem to place whether we ever made Apache Hadoop releases which included Sqoop before
it got moved out...
Arun | http://mail-archives.apache.org/mod_mbox/incubator-general/201202.mbox/%3C426284E8-91A7-4478-8DEF-DB40DB9BEF91@hortonworks.com%3E | CC-MAIN-2018-43 | refinedweb | 119 | 72.05 |
linear.py. For information
about downloading and working with this code, see Section 0.
Suppose we have a sequence of points, ys, that we want to
express as a function of another sequence xs. If there is a
linear relationship between xs and ys with intercept inter and slope slope, we expect each y[i] to be
inter + slope * x[i].
But unless the correlation is perfect, this prediction is only
approximate. The vertical deviation from the line, or residual,
is
res = ys - (inter + slope * xs)
The residuals might be due to random factors like measurement error,
or non-random factors that are unknown. For example, if we are
trying to predict weight as a function of height, unknown factors
might include diet, exercise, and body type.
If we get the parameters inter and slope wrong, the residuals
get bigger, so it makes intuitive sense that the parameters we want
are the ones that minimize the residuals.
We might try to minimize the absolute value of the
residuals, or their squares, or their cubes; but the most common
choice is to minimize the sum of squared residuals,
sum(res**2).
Why? There are three good reasons and one less important one:

- Squaring treats positive and negative residuals the same, which is usually what we want.
- Squaring gives more weight to large residuals, but not so much that the largest residual always dominates.
- If the residuals are uncorrelated and normally distributed, the least squares fit is also the maximum likelihood estimator of inter and slope.
- The values of inter and slope that minimize the squared residuals can be computed efficiently.
The last reason made sense when computational efficiency was more
important than choosing the method most appropriate to the problem
at hand. That’s no longer the case, so it is worth considering
whether squared residuals are the right thing to minimize.
For example, if you are using xs to predict values of ys,
guessing too high might be better (or worse) than guessing too low.
In that case you might want to compute some cost function for each
residual, and minimize total cost, sum(cost(res)).
However, computing a least squares fit is quick, easy and often good
enough.
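The idea of minimizing a custom cost function can be made concrete with a small sketch. In this hypothetical example (not from the book's code), guessing too high costs twice as much as guessing too low, and a coarse grid search over candidate intercepts finds the minimizer; the data, weights, and grid are all illustrative assumptions.

```python
# Hypothetical sketch: minimizing an asymmetric cost sum(cost(res))
# instead of the sum of squared residuals.
import numpy as np

rng = np.random.default_rng(3)
xs = rng.uniform(0, 10, 500)
ys = 1.0 + 0.5 * xs + rng.normal(0, 1, 500)   # true inter=1.0, slope=0.5

def cost(res):
    # guessing too high (negative residual) costs twice as much
    return np.where(res < 0, 2 * res**2, res**2)

def total_cost(inter, slope=0.5):
    # slope held fixed at the true value to keep the search 1-D
    res = ys - (inter + slope * xs)
    return cost(res).sum()

# coarse grid search over candidate intercepts
inters = np.linspace(0.0, 2.0, 201)
best = min(inters, key=total_cost)
```

Because the 2:1 penalty punishes over-prediction, the chosen intercept is pulled below the symmetric least squares answer: the fitted line under-predicts on purpose.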
thinkstats2 provides simple functions that demonstrate
linear least squares:
def LeastSquares(xs, ys):
    meanx, varx = MeanVar(xs)
    meany = Mean(ys)
    slope = Cov(xs, ys, meanx, meany) / varx
    inter = meany - slope * meanx
    return inter, slope
LeastSquares takes sequences
xs and ys and returns the estimated parameters inter
and slope.
For details on how it works, see.
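As a quick sanity check (not part of the book), the same estimates can be reproduced with NumPy alone. The helper below mirrors the algebra in LeastSquares, using NumPy's population (biased) variance and covariance, and the result should agree with np.polyfit on synthetic data.

```python
# Sketch: reproducing LeastSquares with NumPy and checking it
# against np.polyfit on synthetic data.
import numpy as np

def least_squares(xs, ys):
    # same algebra as LeastSquares: slope = Cov(x, y) / Var(x)
    meanx, varx = np.mean(xs), np.var(xs)
    meany = np.mean(ys)
    slope = np.cov(xs, ys, bias=True)[0, 1] / varx
    inter = meany - slope * meanx
    return inter, slope

rng = np.random.default_rng(0)
xs = rng.uniform(0, 10, 1000)
ys = 2.0 + 0.5 * xs + rng.normal(0, 1.0, 1000)

inter, slope = least_squares(xs, ys)
slope_np, inter_np = np.polyfit(xs, ys, 1)   # coefficients, highest power first
```

Both routes minimize the same sum of squared residuals, so the two pairs of estimates agree to floating-point precision.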
thinkstats2 also provides FitLine, which takes inter
and slope and returns the fitted line for a sequence
of xs.
def FitLine(xs, inter, slope):
    fit_xs = np.sort(xs)
    fit_ys = inter + slope * fit_xs
    return fit_xs, fit_ys
We can use these functions to compute the least squares fit for
birth weight as a function of mother’s age.
live, firsts, others = first.MakeFrames()
live = live.dropna(subset=['agepreg', 'totalwgt_lb'])
ages = live.agepreg
weights = live.totalwgt_lb
inter, slope = thinkstats2.LeastSquares(ages, weights)
fit_xs, fit_ys = thinkstats2.FitLine(ages, inter, slope)
The estimated intercept and slope are 6.8 lbs and 0.017 lbs per year.
These values are hard to interpret in this form: the intercept is
the expected weight of a baby whose mother is 0 years old, which
doesn’t make sense in context, and the slope is too small to
grasp easily.
Instead of presenting the intercept at x=0, it
is often helpful to present the intercept at the mean of x. In
this case the mean age is about 25 years and the mean baby weight
for a 25 year old mother is 7.3 pounds. The slope is 0.27 ounces
per year, or 0.17 pounds per decade.
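The re-centering arithmetic is simple enough to spell out, using the point estimates reported in this chapter (intercept about 6.83 lbs at age 0, slope about 0.0174 lbs per year):

```python
# Re-expressing the fit at the mean of x, using point estimates
# quoted in the text.
inter, slope = 6.83, 0.0174
mean_age = 25.0

inter_at_mean = inter + slope * mean_age   # predicted weight at the mean age
oz_per_year = slope * 16                   # slope in ounces per year
lbs_per_decade = slope * 10                # slope in pounds per decade
```

This yields about 7.27 lbs at the mean age, about 0.28 oz per year and 0.17 lbs per decade, within rounding of the figures quoted above.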
Figure 10.1: Scatter plot of birth weight and mother’s age with
a linear fit.
Figure 10.1 shows a scatter plot of birth weight and age
along with the fitted line. It’s a good idea to look at a figure like
this to assess whether the relationship is linear and whether the
fitted line seems like a good model of the relationship.
Another useful test is to plot the residuals.
thinkstats2 provides a function that computes residuals:
def Residuals(xs, ys, inter, slope):
    xs = np.asarray(xs)
    ys = np.asarray(ys)
    res = ys - (inter + slope * xs)
    return res
Residuals takes sequences xs and ys and
estimated parameters inter and slope. It returns
the differences between the actual values and the fitted line.
Figure 10.2: Residuals of the linear fit.
To visualize the residuals, I group respondents by age and compute
percentiles in each group, as we saw in Section 7.2.
Figure 10.2 shows the 25th, 50th and 75th percentiles of
the residuals for each age group. The median is near zero, as
expected, and the interquartile range is about 2 pounds. So if we
know the mother’s age, we can guess the baby’s weight within a pound,
about 50% of the time.
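The grouping procedure described here is not shown in the text; a rough sketch (assumed, with synthetic data standing in for the NSFG sample) might look like this:

```python
# Sketch (assumed, not the book's plotting code): grouping residuals
# into age bins and computing the 25th, 50th, and 75th percentiles
# in each bin.
import numpy as np

rng = np.random.default_rng(1)
ages = rng.uniform(15, 45, 5000)
weights = 6.8 + 0.017 * ages + rng.normal(0, 1.4, 5000)

inter, slope = 6.8, 0.017                # pretend these were fitted
res = weights - (inter + slope * ages)   # residuals

bins = np.arange(15, 46, 5)              # 15, 20, ..., 45
indices = np.digitize(ages, bins)        # bin index for each age
rows = []
for i in range(1, len(bins)):
    group = res[indices == i]
    rows.append(np.percentile(group, [25, 50, 75]))

medians = [row[1] for row in rows]
```

Plotting each percentile sequence against the bin centers produces the three lines in the figure: medians near zero, with the 25th and 75th percentiles bracketing them.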
Ideally these lines should be flat, indicating that the residuals are
random, and parallel, indicating that the variance of the residuals is
the same for all age groups. In fact, the lines are close to
parallel, so that’s good; but they have some curvature, indicating
that the relationship is nonlinear. Nevertheless, the linear fit
is a simple model that is probably good enough for some purposes.
The parameters slope and inter are estimates based on a
sample; like other estimates, they are vulnerable to sampling bias,
measurement error, and sampling error. As discussed in
Chapter 8, sampling bias is caused by non-representative
sampling, measurement error is caused by errors in collecting
and recording data, and sampling error is the result of measuring a
sample rather than the entire population.
To assess sampling error, we ask, “If we run this experiment again,
how much variability do we expect in the estimates?” We can
answer this question by running simulated experiments and computing
sampling distributions of the estimates.
I simulate the experiments by resampling the data; that is, I treat
the observed pregnancies as if they were the entire population
and draw samples, with replacement, from the observed sample.
def SamplingDistributions(live, iters=101):
    t = []
    for _ in range(iters):
        sample = thinkstats2.ResampleRows(live)
        ages = sample.agepreg
        weights = sample.totalwgt_lb
        estimates = thinkstats2.LeastSquares(ages, weights)
        t.append(estimates)

    inters, slopes = zip(*t)
    return inters, slopes
SamplingDistributions takes a DataFrame with one row per live
birth, and iters, the number of experiments to simulate. It
uses ResampleRows to resample the observed pregnancies. We’ve
already seen SampleRows, which chooses random rows from a
DataFrame. thinkstats2 also provides ResampleRows, which
returns a sample the same size as the original:
def ResampleRows(df):
    return SampleRows(df, len(df), replace=True)
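SampleRows itself is not listed here; a plausible implementation (an assumption — the thinkstats2 version may differ in detail) draws random row labels with np.random.choice and looks them up:

```python
# Plausible sketch of SampleRows/ResampleRows (assumed, not the
# verbatim thinkstats2 source).
import numpy as np
import pandas as pd

def SampleRows(df, n, replace=False):
    # draw n row labels at random, then look them up
    indices = np.random.choice(df.index, n, replace=replace)
    return df.loc[indices]

def ResampleRows(df):
    return SampleRows(df, len(df), replace=True)

df = pd.DataFrame({'agepreg': [20, 25, 30, 35],
                   'totalwgt_lb': [6.5, 7.0, 7.5, 8.0]})
sample = ResampleRows(df)
```

Sampling with replacement means some original rows appear more than once and others not at all, which is exactly what bootstrap resampling requires.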
After resampling, we use the simulated sample to estimate parameters.
The result is two sequences: the estimated intercepts and estimated
slopes.
I summarize the sampling distributions by printing the standard
error and confidence interval:
def Summarize(estimates, actual=None):
    mean = thinkstats2.Mean(estimates)
    stderr = thinkstats2.Std(estimates, mu=actual)
    cdf = thinkstats2.Cdf(estimates)
    ci = cdf.ConfidenceInterval(90)
    print('mean, SE, CI', mean, stderr, ci)
Summarize takes a sequence of estimates and the actual value.
It prints the mean of the estimates, the standard error and
a 90% confidence interval.
For the intercept, the mean estimate is 6.83, with standard error
0.07 and 90% confidence interval (6.71, 6.94). The estimated slope, in
more compact form, is 0.0174, SE 0.0028, CI (0.0126, 0.0220).
There is almost a factor of two between the low and high ends of
this CI, so it should be considered a rough estimate.
To visualize the sampling error of the estimate, we could plot
all of the fitted lines, or for a less cluttered representation,
plot a 90% confidence interval for each age. Here’s the code:
def PlotConfidenceIntervals(xs, inters, slopes,
                            percent=90, **options):
    fys_seq = []
    for inter, slope in zip(inters, slopes):
        fxs, fys = thinkstats2.FitLine(xs, inter, slope)
        fys_seq.append(fys)

    p = (100 - percent) / 2
    percents = p, 100 - p
    low, high = thinkstats2.PercentileRows(fys_seq, percents)
    thinkplot.FillBetween(fxs, low, high, **options)
xs is the sequence of mother’s age. inters and slopes
are the estimated parameters generated by SamplingDistributions.
percent indicates which confidence interval to plot.
PlotConfidenceIntervals generates a fitted line for each pair
of inter and slope and stores the results in a sequence,
fys_seq. Then it uses PercentileRows to select the
upper and lower percentiles of y for each value of x.
For a 90% confidence interval, it selects the 5th and 95th percentiles.
FillBetween draws a polygon that fills the space between two
lines.
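PercentileRows is not listed in the text; its behavior can be sketched with np.percentile applied pointwise across the sequence of fitted lines (an assumed equivalent, not the library source):

```python
# Sketch of what PercentileRows does (assumed behavior, not the
# thinkstats2 source): given a sequence of y-sequences, return the
# requested percentiles computed pointwise across the sequences.
import numpy as np

def PercentileRows(ys_seq, percents):
    array = np.asarray(ys_seq)
    # one row per requested percentile, computed down each column
    return np.percentile(array, percents, axis=0)

# three "fitted lines" evaluated at the same four x positions
fys_seq = [[1.0, 2.0, 3.0, 4.0],
           [2.0, 3.0, 4.0, 5.0],
           [3.0, 4.0, 5.0, 6.0]]
low, high = PercentileRows(fys_seq, [5, 95])
```

Each output row has the same length as the x sequence, so low and high can be passed directly to FillBetween.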
Figure 10.3: 50% and 90% confidence intervals showing variability in the
fitted line due to sampling error of inter and slope.
Figure 10.3 shows the 50% and 90% confidence
intervals for curves fitted to birth weight as a function of
mother’s age.
The vertical width of the region represents the effect of
sampling error; the effect is smaller for values near the mean and
larger for the extremes.
There are several ways to measure the quality of a linear model, or
goodness of fit. One of the simplest is the standard deviation
of the residuals.
If you use a linear model to make predictions, Std(res)
is the root mean squared error (RMSE) of your predictions. For
example, if you use mother’s age to guess birth weight, the RMSE of
your guess would be 1.40 lbs.
If you guess birth weight without knowing the mother’s age, the RMSE
of your guess is Std(ys), which is 1.41 lbs. So in this
example, knowing a mother’s age does not improve the predictions
substantially.
Another way to measure goodness of fit is the coefficient of determination, usually denoted R2 and
called “R-squared”:
def CoefDetermination(ys, res):
return 1 - Var(res) / Var(ys)
Var(res) is the MSE of your guesses using the model,
Var(ys) is the MSE without it. So their ratio is the fraction
of MSE that remains if you use the model, and R2 is the fraction
of MSE the model eliminates.
For birth weight and mother’s age, R2 is 0.0047, which means
that mother’s age predicts about half of 1% of variance in
birth weight.
There is a simple relationship between the coefficient of
determination and Pearson’s coefficient of correlation: R2 = ρ2.
For example, if ρ is 0.8 or -0.8, R2 = 0.64.
Although ρ and R2 are often used to quantify the strength of a
relationship, they are not easy to interpret in terms of predictive
power. In my opinion, Std(res) is the best representation
of the quality of prediction, especially if it is presented
in relation to Std(ys).
For example, when people talk about the validity of the SAT
(a standardized test used for college admission in the U.S.) they
often talk about correlations between SAT scores and other measures of
intelligence.
According to one study, there is a Pearson correlation of
ρ=0.72 between total SAT scores and IQ scores, which sounds like
a strong correlation. But R2 = ρ2 = 0.52, so SAT scores
account for only 52% of variance in IQ.
IQ scores are normalized with Std(ys) = 15, so
>>> var_ys = 15**2
>>> rho = 0.72
>>> r2 = rho**2
>>> var_res = (1 - r2) * var_ys
>>> std_res = math.sqrt(var_res)
10.4096
So using SAT score to predict IQ reduces RMSE from 15 points to 10.4
points. A correlation of 0.72 yields a reduction in RMSE of only
31%.
If you see a correlation that looks impressive, remember that R2 is
a better indicator of reduction in MSE, and reduction in RMSE is a
better indicator of predictive power.
The effect of mother’s age on birth weight is small, and has little
predictive power. So is it possible that the apparent relationship
is due to chance? There are several ways we might test the
results of a linear fit.
One option is to test whether the apparent reduction in MSE is due to
chance. In that case, the test statistic is R2 and the null
hypothesis is that there is no relationship between the variables. We
can simulate the null hypothesis by permutation, as in
Section 9.5, when we tested the correlation between
mother’s age and birth weight. In fact, because R2 = ρ2, a
one-sided test of R2 is equivalent to a two-sided test of ρ.
We’ve already done that test, and found p < 0.001, so we conclude
that the apparent relationship between mother’s age and birth weight
is statistically significant.
Another approach is to test whether the apparent slope is due to chance.
The null hypothesis is that the slope is actually zero; in that case
we can model the birth weights as random variations around their mean.
Here’s a HypothesisTest for this model:
class SlopeTest(thinkstats2.HypothesisTest):
def TestStatistic(self, data):
ages, weights = data
_, slope = thinkstats2.LeastSquares(ages, weights)
return slope
def MakeModel(self):
_, weights = self.data
self.ybar = weights.mean()
self.res = weights - self.ybar
def RunModel(self):
ages, _ = self.data
weights = self.ybar + np.random.permutation(self.res)
return ages, weights
The data are represented as sequences of ages and weights. The
test statistic is the slope estimated by LeastSquares.
The model of the null hypothesis is represented by the mean weight
of all babies and the deviations from the mean. To
generate simulated data, we permute the deviations and add them to
the mean.
Here’s the code that runs the hypothesis test:
live, firsts, others = first.MakeFrames()
live = live.dropna(subset=['agepreg', 'totalwgt_lb'])
ht = SlopeTest((live.agepreg, live.totalwgt_lb))
pvalue = ht.PValue()
The p-value is less than 0.001, so although the estimated
slope is small, it is unlikely to be due to chance.
Estimating the p-value by simulating the null hypothesis is strictly
correct, but there is a simpler alternative. Remember that we already
computed the sampling distribution of the slope, in
Section 10.4. To do that, we assumed that the observed slope
was correct and simulated experiments by resampling.
Figure 10.4 shows the sampling distribution of the
slope, from Section 10.4, and the distribution of slopes
generated under the null hypothesis. The sampling distribution
is centered about the estimated slope, 0.017 lbs/year, and the slopes
under the null hypothesis are centered around 0; but other than
that, the distributions are identical. The distributions are
also symmetric, for reasons we will see in Section 14.4.
Figure 10.4: The sampling distribution of the estimated
slope and the distribution of slopes
generated under the null hypothesis. The vertical lines are at 0
and the observed slope, 0.017 lbs/year.
So we could estimate the p-value two ways:
The second option is easier because we normally want to compute the
sampling distribution of the parameters anyway. And it is a good
approximation unless the sample size is small and the
distribution of residuals is skewed. Even then, it is usually good
enough, because p-values don’t have to be precise.
Here’s the code that estimates the p-value of the slope using the
sampling distribution:
inters, slopes = SamplingDistributions(live, iters=1001)
slope_cdf = thinkstats2.Cdf(slopes)
pvalue = slope_cdf[0]
Again, we find p < 0.001.
So far we have treated the NSFG data as if it were a representative
sample, but as I mentioned in Section 1.2, it is not. The
survey deliberately oversamples several groups in order to
improve the chance of getting statistically significant results; that
is, in order to improve the power of tests involving these groups.
This survey design is useful for many purposes, but it means that we
cannot use the sample to estimate values for the general
population without accounting for the sampling process.
For each respondent, the NSFG data includes a variable called finalwgt, which is the number of people in the general population
the respondent represents. This value is called a sampling
weight, or just “weight.”
As an example, if you survey 100,000 people in a country of 300
million, each respondent represents 3,000 people. If you oversample
one group by a factor of 2, each person in the oversampled
group would have a lower weight, about 1500.
To correct for oversampling, we can use resampling; that is, we
can draw samples from the survey using probabilities proportional
to sampling weights. Then, for any quantity we want to estimate, we can
generate sampling distributions, standard errors, and confidence
intervals. As an example, I will estimate mean birth weight with
and without sampling weights.
In Section 10.4, we saw ResampleRows, which chooses
rows from a DataFrame, giving each row the same probability.
Now we need to do the same thing using probabilities
proportional to sampling weights.
ResampleRowsWeighted takes a DataFrame, resamples rows according
to the weights in finalwgt, and returns a DataFrame containing
the resampled rows:
def ResampleRowsWeighted(df, column='finalwgt'):
weights = df[column]
cdf = Cdf(dict(weights))
indices = cdf.Sample(len(weights))
sample = df.loc[indices]
return sample
weights is a Series; converting it to a dictionary makes
a map from the indices to the weights. In cdf the values
are indices and the probabilities are proportional to the
weights.
indices is a sequence of row indices; sample is a
DataFrame that contains the selected rows. Since we sample with
replacement, the same row might appear more than once.
Now we can compare the effect of resampling with and without
weights. Without weights, we generate the sampling distribution
like this:
estimates = [ResampleRows(live).totalwgt_lb.mean()
for _ in range(iters)]
With weights, it looks like this:
estimates = [ResampleRowsWeighted(live).totalwgt_lb.mean()
for _ in range(iters)]
The following table summarizes the results:
In this example, the effect of weighting is small but non-negligible.
The difference in estimated means, with and without weighting, is
about 0.08 pounds, or 1.3 ounces. This difference is substantially
larger than the standard error of the estimate, 0.014 pounds, which
implies that the difference is not due to chance.
A solution to this exercise is in chap10soln.ipynb
chap10soln.ipynb
Using the data from the BRFSS, compute the linear least squares
fit for log(weight) versus height.
How would you best present the estimated parameters for a model
like this where one of the variables is log-transformed?
If you were trying to guess
someone’s weight, how much would it help to know their height?
Like the NSFG, the BRFSS oversamples some groups and provides
a sampling weight for each respondent. In the BRFSS data, the variable
name for these weights is finalwt.
Use resampling, with and without weights, to estimate the mean height
of respondents in the BRFSS, the standard error of the mean, and a
90% confidence interval. How much does correct weighting affect the
estimates?
Think Bayes
Think Python
Think Stats
Think Complexity | http://greenteapress.com/thinkstats2/html/thinkstats2011.html | CC-MAIN-2017-47 | refinedweb | 3,113 | 56.86 |
Now that Exchange 2016 has been released, let’s take a look at the Exchange 2016 installation process. We will see that there are some similarities compared to Exchange 2013 along with some differences.
Exchange 2016 can be downloaded from here.
Planning And Preparation
As with every project proper, planning and preparation is required. Deploying Exchange is no different. TechNet has an overview of the steps that are required:
Exchange 2016 system requirements
Exchange 2016 prerequisites
Prepare Active Directory and domains
Install the Exchange 2016 Mailbox role using the Setup wizard
Install Exchange 2016 using unattended mode
Install the Exchange 2016 Edge Transport role using the Setup wizard
Exchange 2016 post-Installation tasks
Lab Configuration
The lab is based on Windows Server 2012 R2 servers for both Exchange and Domain Controllers. No prior versions of Exchange have been installed into the lab, and Exchange 2016 will be deployed from scratch. The domain name is contoso.com and we will deploy Exchange 2016 using the graphical installer and also the command line. The Exchange organization will be called “Contoso”.
Preparing AD
As with many customer environments, the Exchange team will not be the ones who will update the AD schema and prepare AD for the installation of Exchange. To that end the AD preparation steps will be performed directly on the schema master, not an Exchange server.
To prep AD for Exchange 2016, .NET framework 4.5.2 must be installed onto the machine that will be used to prepare AD. Follow these steps to verify the installed version of the .NET Framework. If required, install .NET 4.5.2 and restart the machine.
Since we are using the schema master, the AD DS RSAT tools are already installed. If the AD DS RSAT tools are not installed, install then using Server Manager or via Windows PowerShell using Install-WindowsFeature RSAT-ADDS
As with prior releases, we need to first extend the AD DS Schema, PrepareAD and finally PrepareDomains.
- Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms
- Setup.exe /PrepareAD /OrganizationName:"<organization name>" /IAcceptExchangeServerLicenseTerms
- Setup.exe /PrepareDomain:<FQDN of the domain you want to prepare> /IAcceptExchangeServerLicenseTerms
Note that the setup file is setup.exe for both wizard and command line tasks. You may remember setup.com which was last used with Exchange 2010 unattended setup. It is also noteworthy that the full syntax has been specified, note that the shortcut syntax such as setup.exe /PS is not used. This was covered in the 6 Mistakes To Avoid With Exchange 2013 CU Command Line Installations. There are now 7 mistakes listed in the article, but if the title was changed links would break.
First up, let’s prepare the AD schema:
Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms
Then we will run PrepareAD:
Setup.exe /PrepareAD /OrganizationName:"Contoso" /IAcceptExchangeServerLicenseTerms
At this point we cannot add any Exchange 2010 or Exchange 2013 servers since Exchange 2016 PrepareAD was executed, and there were no prior Exchange servers. This is covered in the post Exchange Upgrades–The Point Of No Return.
If you have an Exchange 2010 deployment, and wish to move to Exchange 2016 there is no Exchange requirement to install Exchange 2013. You may wish to install Exchange 2013 for other reasons. For example applications and services that may only be supported with that version of Exchange. If you do wish to add Exchange 2013 prior to running Exchange 2016 PrepareAD, then install both Mailbox and CAS 2013 roles. A VM could be used for this purpose, but must be properly configured in terms of CAS namespaces etc.
Finally we need to prepare the domain. There is a single domain in this forest – contoso.com.
Setup.exe /PrepareDomain:contoso.com /IAcceptExchangeServerLicenseTerms
Now that AD has been successfully prepared, we can move to preparing the Exchange 2016 servers themselves. Allow time for AD to replicate before attempting to install the first Exchange server.
Install Prerequisites On Exchange 2016 Mailbox Server
Prior to installing Exchange 2016, prerequisites need to be installed onto the underlying OS. While it is possible to use the /InstallWindowsComponents that does not install the other components that are not in-box.
To install the required Windows Server 2012 R2 OS components run the below from
Note that –Restart has been added to automatically restart the server.
Once the server restarts, then we need to install:
Microsoft Unified Communications Managed API 4.0, Core Runtime 64-bit
Installing the UC Managed API will install:
Installing Exchange 2016 Using Setup Wizard
Start the Exchange 2016 setup.exe using an elevated prompt if UAC is enabled, though it is a good habit to always elevate so it is not missed.
The prior screens have been similar to Exchange 2013, and now we see a major difference. There is now a single Exchange 2016 role that combines both the CAS and Mailbox roles from Exchange 2013. Multirole installation was the recommendation for the last two Exchange releases and is the basis of the Exchange Preferred Architecture.
In Exchange 2016 the default installation path remains the same as Exchange 2013, this is C:\Program Files\Microsoft\Exchange Server\V15
Note that Exchange 2013 and 2016 will consume considerably more space in the install folder compared to Exchange 2007/2010. Plan accordingly. Creating a 50GB C: drive and installing Exchange 2016 to the default location will not end well.
Exchange 2016 has malware scanning enabled by default. This can be disabled at installation time if you plan to use a 3rd party product instead.
Once the readiness checks have completed, click install.
Exchange 2016 will then be installed.
Note that setup requests the server be restarted to complete the installation process.
To complete the install, restart the server.
Installing Exchange 2016 Using Command Line
As noted in the installing Exchange 2016 prerequisites section, the same prerequisites must be installed. Refer to the above section for installing:
- Required Windows OS components
- UC Managed API
- .NET Framework 4.5.2
The sever below has all of the above already installed, so we can focus upon the command line installation aspects in this section.
Since AD was already prepared with the Exchange organization name it is not specified in the command below. For all of the Exchange 2016 setup switches, please review TechNet.
As noted above, an elevated command prompt is used to initiate the install:
Setup.exe /mode:Install /role:Mailbox /IAcceptExchangeServerLicenseTerms
Again once setup completes the server must be restarted.
In the below screenshot, we can see the two lab servers:
Post Installation Tasks
Exchange 2016 may be installed, but there are many tasks remaining. The various URLs for the web services on the server must be configured. This must match the CAS namespace design for your Exchange deployment.
For example the below screenshot shows the default Exchange 2016 configuration for the OWA virtual directories. Typically they will require modification.
Do not leave the default self signed certificates bound to the various services. Proper PKI issued certificates should be immediately installed and configured. It is recommended that Exchange be installed into a dedicated AD site so that users in the production AD sites do not try to leverage the services of the new server prior to the correct CAS namespaces and certificates being configured. In this case, remember to change the Autodiscover Site Scope coverage to reflect the correct AD site. This must be done manually.
The same applies for mail flow as well, other servers will see the server as available for mail flow and may start to use it. Depending upon your environment this may cause issues.
Please refer to TechNet for additional post installation tasks.
Cheers,
Rhoderick
thanks
Thanks very much, very helpful.
very use full thanks
thanks lot
Thank you so much..
Very helpful article… | https://blogs.technet.microsoft.com/rmilne/2015/10/02/exchange-2016-installation-screenshots/ | CC-MAIN-2018-30 | refinedweb | 1,285 | 56.45 |
Featured Replies in this Discussion
- Any time you are having troubles with iterations like this (especially nested) your first step should be to run through it step by step and track each variable at each stage: ... This allows you to see exactly whats going on and why the results arent what you expect. I wont tell you how to rewrite your algorythm as there are plenty of examples already here, just thought i'd chip in with some advice on…
A prime number is ...
any natural number greater than 1 that has the two divisors 1 and itself
(defined in:)
Your algorithm is pretty bad, take a look at this C# code snippet at DaniWeb to help you out:
using System; using System.Collections.Generic; using System.Text; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { bool current = false; int j; Console.WriteLine("Enter any integer"); int num = Int32.Parse(Console.ReadLine()); for (int i = 2; i <= num; i++)//2 is the first prime number. //I set i to 2 beacuse it has to print 2 firstly... { for (j = 2; j < i; j++) { if (i % j == 0)//Controls i is prime number or not... { current = true; break;//breaks for not controlling anymore... } } if (current == false) Console.Write("{0} ", j);//if i is prime number, print it... else current = false; } Console.ReadLine(); } } }
Hie guys!
Heres something i have created to display all the prime numbers from zero up to the number entered by user.
using System; class pnumber { public static void Main() { int value; // Value Enter By User! int count,count2; // For inner & outer loop int prime; // used to check prime number Console.WriteLine("Welcome To My Program\nPlease enter a number to check whether its a prime number or not!"); value=Convert.ToInt32(Console.ReadLine()); for(count=2;count<value;count++) // Outer loop { prime = 1; // to set prime value to 1 every time outer loop works for(count2=2;count2<count;count2++) // Inner loop { if(count%count2 == 0) // If Value is divisible.. { prime = 0; // Set prime to 0 } } if(prime==1) // If prime value is 1 then... Console.WriteLine("{0} Is Prime Number",count); // display count variable } } }
This code works fine for me!
Some how the program is correct but the algorithm is not efficient of the large no.
see this link
Some how the program is correct but the algorithm is not efficient of the large no.
see this link
Hie Rhohitman,
Thanks buddy for snippet link but you are on C# section & thats a C++ snippet. :D
Some how the program is correct but the algorithm is not efficient of the large no.
see this link
I wrote something similar in C++, but the concept is the same... instead of using modulus with every ineger less than num, try this:
for (int i = 2; i <= sqrt(num); i++)
once i becomes larger than the square root of num, you're doing the same work over again!
you can also restrict it to only divide by ODD numbers, which theoretically cuts your processing time in half.
i ended up playing with an array, adding each new prime number to the array, adn only testing num against those values. worked pretty well, if i recall
Any time you are having troubles with iterations like this (especially nested) your first step should be to run through it step by step and track each variable at each stage:
num = 9: i j i%j output 3 2 1 3 3 3 0 3 4 3 3 4 2 0 4 3 1 4 4 4 0 5 2 1 5 5 3 2 5 5 4 1 5 6 2 0 6 3 0 6 4 2 6
This allows you to see exactly whats going on and why the results arent what you expect. I wont tell you how to rewrite your algorythm as there are plenty of examples already here, just thought i'd chip in with some advice on solving similar issues in the future.
Give a man a solution, and his problem is solved for today; give him the tools to trouble shoot his work, and his problems are solved for a lifetime ;)
My algorithm below is similar to what slogfilet suggested: I consider using anything beyond the square root of the integer candidate to be redundant and a waste of time to use as a test divisor.
I'm not accepting user input, nor am I writing the prime numbers to the screen, but it's easy enough to add that in (actually, the writing to the screen is simply commented out). I was more interested in seeing how fast I could get the program to run, hence the use of the stopwatch. It's under 300 milliseconds to get all primes up to 1,000,000 on my machine, just about 5 seconds to go up to 10,000,000, and of course will increase significantly from there. That's the price of using nested loops on growing lists. But I'm not done working on it, and I'm not going to assume I know enough about C# to say that I've thought of the best looping structure.
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Diagnostics; class Program { static void Main() { Stopwatch stopwatch = new Stopwatch(); stopwatch.Start(); List<int> primes = new List<int>(); primes.Add(2); for (int i = 3; i < 1000000; i += 2) { bool isPrime = true; int stoppingPoint = (int)Math.Pow(i, 0.5) + 1; foreach (int p in primes) { if (i % p == 0) { isPrime = false; break; } if (p > stoppingPoint) break; } if (isPrime) { primes.Add(i); //Console.WriteLine(i); } } stopwatch.Stop(); Console.WriteLine("-- {0}", primes.Count); Console.WriteLine(stopwatch.ElapsedMilliseconds); Console.Read(); } }
Welcome iajm,
I'm glad you found it useful. Have you noticed that the current thread is two years old? Please do not resurrect old/solved threads. If you want to ask question, start your own thread.
Read before posting-
Hi friends
Here i am using simple logic to get prime numbers from 1 to N numbers.
using System; using System.Collections.Generic; using System.Text; namespace prime { class Program { static void Main(string[] args) { int a,n,i; Console.Writeline("Enter ur number"); n=Convert.ToInt32(Console.Readline()); for(i=0;i<=n;i++) { if((i%2==0)&&(i%i==0)) { a=i; Console.Writeline("The prime numbers are",a); } } Console.Readkey(); } } }
Output:
Enter number:9
The prime numbers are :2 3 5 7 9
Explanation:
modulus will take reminder as output as all we know.
If we write only i%2==0 it won't print 2 as a prime number.
so, i wrote i%i==0 here.
Output will come if and only if both conditions are true | https://www.daniweb.com/software-development/csharp/threads/80255/displaying-prime-number | CC-MAIN-2015-11 | refinedweb | 1,128 | 71.85 |
about shared_ptr
Discussion in 'C++' started by Ernst Murnleitner, Jan 8, 2004.
Want to reply to this thread or ask your own question?It takes just 2 minutes to sign up (and it's free!). Just click the sign up button to choose a username and then you can ask your own questions on the forum.
- Similar Threads
shared_ptr vs std::auto_ptrSerGioGio, Jul 3, 2003, in forum: C++
- Replies:
- 3
- Views:
- 5,661
- Alexander Terekhov
- Jul 3, 2003
boost::shared_ptr and operator->()Derek, Dec 8, 2003, in forum: C++
- Replies:
- 2
- Views:
- 755
- Derek
- Dec 8, 2003
#include <boost/shared_ptr.hpp> or #include "boost/shared_ptr.hpp"?Colin Caughie, Aug 29, 2006, in forum: C++
- Replies:
- 1
- Views:
- 920
- Shooting
- Aug 29, 2006 | http://www.thecodingforums.com/threads/about-shared_ptr.280510/ | CC-MAIN-2015-48 | refinedweb | 122 | 72.26 |
Below, I have listed the steps that I followed for running RoR app on Web Server 7.0.
Machine/OS: Sun SPARC/Solaris 10
Note: Add "/usr/local/bin" directory to your PATH variable as all the following binaries get installed in this directory.
Installing Ruby:
- Download ( ) and extract Ruby ( I tried with 1.8.4 version ) source under "/ruby".
- Run the following commands.
> cd /ruby/ruby-1.8.4
> ./configure
> make
> make install
Installing RubyGems:
- Download ( ) and extract RubGems 0.9.0 source under "/ruby"
- Run the following commands.
> on Ruby:
- Set the proxy, if behind the firewall.
> export http_proxy=http://<proxy_host>:<proxy_port>
- Install Rails.
> gem install rails --include-dependencies
Installing FastCGI Ruby gem:
- Download FastCGI GEM from to "/ruby" directory
- Install this gem. This requires FastCGI Development Kit which can be downloaded from
> cd /ruby
> gem install fcgi -- --with-fcgi-include=/fcgi-2.4.0/include --with-fcgi-lib=/fcgi-2.4.0/libfcgi/.libs
Sample "Hello World" RoR app:
Below are the steps to getting over the first hurdle: Creating a “hello world” application in Ruby on Rails.
Assumption: You have ruby and rails already installed and working on your system.
- Run the following commands.
> mkdir /ruby/samples
> cd /ruby/samples
> rails hello-world
> cd hello-world
> ruby script/generate controller hello
- Create a file called index.rhtml in app/views/hello, containing “Hello !”.
- Start the default WEBrick Web Server to test the first application.
> ruby script/server
- Navigate to in your browser and be greeted with your friendly application: “Hello !”
Changes to sample "Hello World" RoR app:
Now let us add some 'action' to our Hello World application.
- Go to 'controllers' directory of the sample.
> cd /ruby/samples/hello-world/app/controllers/
- Edit hello_controller.rb file. Add the following lines (in Red)
class HelloController < ApplicationController
def sayhello
render_text "Hello ! This is a simple example"
end
end
- Restart WEBrick server
> ruby script/server
- Access the newly added action "sayhello" by navigating to
"Hello ! This is a simple example" should be displayed."Hello ! This is a simple example" should be displayed.
Configure Web Server 7.0:
Now let us configure this application to run on Web Server 7.0 .
The configuration steps are given below.
- Edit magnus.conf and add the following line to load the FastCGI plugin bundled with Web Server.
Init fn="load-modules" shlib="libfastcgi.so" shlib_flags="(global|now)"
- Edit obj.conf as follows:
<Object name="default">
...
...
#
# Pass requests for "/dispatch.fcgi" to "rubyTest" object
#
NameTrans fn="assign-name" from="/dispatch.fcgi/*" name="rubyTest"#
# Prefix '/dispatch.fcgi/' to the original URI which does not already contain "/dipatch.fcgi",
# and resend the request.
#
<If $uri !~ '^/dispatch\.fcgi/.*'>
NameTrans fn="restart" uri="/dispatch.fcgi$uri"
</If>
#
# Set the document root to RoR sample's public directory
#
# Note: This should be the last NameTrans directive.
#
NameTrans fn=document-root root="/ruby/samples/hello-world/public"
...
...
</Object>
...
...#
# Object to handle the RoR application requests
# Here, app-path should point to dispatch.fcgi script of the RoR sample.
#
<Object name="rubyTest">
Service fn="responder-fastcgi" app-path="/ruby/samples/hello-world/public/dispatch.fcgi"
bind-path="localhost:4334" app-env="RAILS_ENV=production"
app-env="RUBYLIB=/usr/local/lib/ruby/1.8"
</Object>
- Start the Web Server and access
(assuming localhost and 80 are the Web Server host and port, respectively)
"Hello ! This is a simple example" should be displayed on the page.
Posted by 192.18.43.249 on October 31, 2006 at 11:52 AM IST #
Posted by Charles Oliver Nutter on October 31, 2006 at 08:22 PM IST #
Posted by 210.5.193.150 on November 12, 2006 at 04:59 PM IST #
Posted by Seema on November 14, 2006 at 11:37 AM IST #
Posted by hans on December 11, 2006 at 05:19 PM IST #
Posted by Chris on February 01, 2007 at 03:21 PM IST #
Posted by Seema on February 01, 2007 at 05:22 PM IST #
I have tried your article, "Running Ruby on Rails on Sun Java System Web Server 7.0". I get ruby installed successfully, but can't get gems to work with an outside gem source.
I am running Solaris 2.9. I installed all to my home directory as I don't have write permissions to other directories. (I used the --prefix command on the ./configure in the ruby source directory).
I get the following error at the end of the attempted install of ruby gems: hook /home/cookc/ruby/rubygems-0.9.0/./post-install.rb failed. I get
Do you have any suggestions?Thank you,
Craig Cook
Posted by Craig A. Cook on February 03, 2007 at 02:38 AM IST #
Your comment did not appear on the blog page. Hence I have posted your question along with my response.
Craig's comment :
I get the following error at the end of the attempted install of ruby gems: hook
/home/cookc/ruby/rubygems-0.9.0/./post-install.rb failed:
You don't have write permissions into the /home/user/lib/ruby/gems/1.8 directory..
Response:
Try following the steps listed at
Let me know if this solves the issue.
Posted by Seema on February 06, 2007 at 11:22 AM IST #
Posted by Boris Kuzmic on March 22, 2007 at 03:07 PM IST #
Posted by Seema on March 22, 2007 at 03:29 PM IST #
Posted by Shu on April 29, 2007 at 10:06 AM IST #
Which 'cc' are you using (version and location) ?
And also check this link
Posted by Seema on April 30, 2007 at 11:18 PM IST #
Posted by Gerardo M. on May 07, 2007 at 09:27 AM IST #
Hey Seema, I'm trying to follow this to get RoR working and am running into the follow error from the Ruby FastCGI handler :
[23/Aug/2007:16:32:08 :: 26479] starting
[23/Aug/2007:16:32:08 :: 26479] Dispatcher failed to catch: private method `split' called for nil:NilClass (NoMethodError)
/usr/local/lib/ruby/1.8/cgi.rb:898:in `parse'
/usr/local/lib/ruby/gems/1.8/gems/actionpack-1.13.3/lib/action_controller/cgi_ext/raw_post_data_fix.rb:45:in `initialize_query'
/usr/local/lib/ruby/1.8/cgi.rb:2275:in `initialize'
(eval):16:in `initialize'
/usr/local/lib/ruby/gems/1.8/gems/fcgi-0.8.7/lib/fcgi.rb:612:in `new'
'
/usr/local/lib/ruby/gems/1.8/gems/rails-1.2.3/lib/fcgi_handler.rb:141:in `process_each_request!'
/usr/local/lib/ruby/gems/1.8/gems/rails-1.2.3/lib/fcgi_handler.rb:55:in `process!'
/usr/local/lib/ruby/gems/1.8/gems/rails-1.2.3/lib/fcgi_handler.rb:25:in `process!'
/www/dweeb/ruby/hello-world/public/dispatch.fcgi:24
almost killed by this error
It almost looks like the 'hello/sayhello' isn't being passed to the fastcgi.rb. Is there a way to make the FastCGI NSAPI verbose log to the Fastcgistub.log file?
Posted by steve on August 24, 2007 at 02:15 AM IST #
Currently FastCGI plugin doesn't have a good logging support. So, changing the log level to verbose may not help.
Would it be possible for you to post or mail the obj.conf settings ?
Did you try running your application using the default WEBrick server ? If it worked, let me know the URL used.
Posted by Seema on August 24, 2007 at 12:24 PM IST # | http://blogs.sun.com/seema/entry/ror_and_web_server_7 | crawl-002 | refinedweb | 1,235 | 50.53 |
Given that the numbers are relatively small (~ -1 to ~ 1) and are floats and randomly generated, can you get 'floating point invalid operation' by adding too many of them? I am asking because that's what apparently happens in my program right now and very rarely too. Also, how does one disable (or avoid) this exception?
Just in case, my compiler is gcc (i686-posix-dwarf-rev0, Built by MinGW-W64 project) 5.1.0.
EDIT As requested I am providing the code. However, the addition of floats causing the error is only my conjecture, that's why I came here, to find out if that could be my problem. If I run code below, is it reasonable that I can get the error?
#include <iostream>
int main()
{
float sum = 0, add = 0;
while (true)
{
add = static_cast <float> (rand()) / static_cast <float> (RAND_MAX);
if (rand() % 2) add *= -1;
sum += add;
}
}
The IEEE 754 standard defines the following situations in which an “invalid operation” floating-point exception occurs:
- any arithmetic operation on a signaling NaN
- addition or subtraction of infinities of opposite sign (e.g. +inf + -inf)
- multiplication of zero by infinity
- division of zero by zero, or of infinity by infinity
- remainder where the divisor is zero or the dividend is infinite
- square root of an operand less than zero
- conversion to an integer format when the value is NaN, infinite, or out of range
Your example code will not trigger any of those cases, so the problem would be elsewhere in your code. | https://codedump.io/share/nDJcwYtQc4Fn/1/can-too-many-additions-of-floats-cause-39floating-point-invalid-operation39 | CC-MAIN-2017-17 | refinedweb | 185 | 66.07 |
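A quick empirical check backs this up (sketched here in Python, whose floats are the same IEEE 754 doubles; this snippet is illustrative, not from the original question):

```python
import math
import random

random.seed(0)
total = 0.0
for _ in range(100_000):
    add = random.random()          # float in [0, 1)
    if random.random() < 0.5:      # flip the sign half the time
        add = -add
    total += add                   # finite + finite: never "invalid"

# The running sum stays an ordinary finite float.
print(math.isfinite(total))        # True

# The kind of addition that *does* count as an IEEE "invalid operation"
# involves infinities of opposite sign:
print(float("inf") + float("-inf"))  # nan (Python yields a quiet NaN)
```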
Was this JAR file built with a Java version compatible with the one where it is to be deployed? If the developer is not aware of these details, it is difficult to test for compatibility issues: if you run the code with an incompatible (older) Java version, a runtime exception will be thrown. Before you start reading this article, if you want to understand the basics of the Java compiler and JVM architecture, please go through the articles Java Compiler API, JVM, JRE and JDK difference and JVM.
java.lang.UnsupportedClassVersionError: test class : Unsupported major.minor version 51.0
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClassCond(Unknown Source)
Without knowing the Java version with which a JAR was created, it is difficult for developers to diagnose this. There is no way to check the version in the JAR file itself; we can check the version details only through the class files inside the JAR. Unzip the JAR and take out any one class file to be verified.
- JDK 1.0 — major version 45 and minor version 3
- JDK 1.1 — major version 45 and minor version 3
- JDK 1.2 — major version 46 and minor version 0
- JDK 1.3 — major version 47 and minor version 0
- JDK 1.4 — major version 48 and minor version 0
- JDK 1.5 — major version 49 and minor version 0
- JDK 1.6 — major version 50 and minor version 0
- JDK 1.7 — major version 51 and minor version 0
These versions could be different for each implementation of the compiler; the above details are specific to the Oracle/Sun JDK. The JDK has the javap command to find the version details.
javap -verbose
You will find the version details displayed on the screen, including the major and minor version numbers of the class file.
There is also a simple program which can get you the Java version details. Run the program below, passing the class file name as the argument, and it will return the major version details.
package com;

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class GetVersion {
    public static void main(String[] args) throws IOException {
        System.out.println(getVersionDetails(args[0]));
    }

    public static String getVersionDetails(String filename) throws IOException {
        String version = "";
        DataInputStream stream = new DataInputStream(new FileInputStream(filename));
        int magicBytes = stream.readInt();
        if (magicBytes != 0xcafebabe) {
            System.out.println(filename + " is not a valid java file!");
        } else {
            int minorVersion = stream.readUnsignedShort();
            int majorVersion = stream.readUnsignedShort();
            version = majorVersion + "." + minorVersion;
        }
        stream.close();
        return version;
    }
}
The javap command returns the complete details of the bytecode, among them the major version and minor version that identify the Java version. I hope this tutorial has provided a nice tip on finding the version details for a Java class. If you have any more tips, please contact us and we will publish them on our website.
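The same header check is only a few lines in other languages as well. For instance, this Python sketch (not part of the original article) reads the fields in the same order as the Java program above:

```python
import struct

def class_file_version(data):
    # data: at least the first 8 bytes of a .class file:
    # magic (4 bytes), minor_version (2 bytes), major_version (2 bytes),
    # all big-endian.
    magic, minor, major = struct.unpack(">IHH", data[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a valid class file")
    return "%d.%d" % (major, minor)

# A class file compiled by JDK 1.7 starts with cafebabe 0000 0033 (0x33 == 51):
print(class_file_version(bytes.fromhex("cafebabe00000033")))  # 51.0
```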
This example shows how to run a separate process to perform a command.
We import the classes that will be needed by the application. The most relevant
to this example is the
ProcessBuilder class.
from java.lang import ProcessBuilder, String
from android.os import AsyncTask
from android.view import View
from android.widget import Button, ScrollView, TextView

from serpentine.activities import Activity
from serpentine.widgets import VBox
We define a class based on a custom Activity class provided by the serpentine package. This represents the application, and will be used to present a graphical interface to the user.
The initialisation method simply calls the corresponding method in the base class. This must be done even if no other code is included in the method.
class ProcessActivity(Activity):

    __interfaces__ = [View.OnClickListener]

    def __init__(self):
        Activity.__init__(self)
The onCreate method is called when the activity is created by Android.
As with the
__init__ method, we must call the corresponding method in
the base class. We use this method to set up the user interface,
registering a listener for a button that the user can press to run a
process.
    def onCreate(self, bundle):
        Activity.onCreate(self, bundle)

        label = TextView(self)
        label.setText("ls -R /system")

        self.button = Button(self)
        self.button.setText("Run process")
        self.button.setOnClickListener(self)

        self.view = TextView(self)
        scrollView = ScrollView(self)
        scrollView.addView(self.view)

        layout = VBox(self)
        layout.addView(label)
        layout.addView(self.button)
        layout.addView(scrollView)
        self.setContentView(layout)
In the following method we respond to the button click by disabling the
button, so that the user can only start a single process, and adding a
message to the
TextView that will contain the output of the process.
    def onClick(self, view):
        self.button.setEnabled(False)
        self.view.setText("Processing...")

        l = array(["ls", "-R", "/system"])
        self.task = Task(self)
        self.task.execute(l)
We finish by creating a string array that we pass to the
execute
method of a custom
AsyncTask object.
When the process finishes the
Task object calls the following method to
publish the result. Here, we enable the button again and show the string
passed to the method in the
TextView.
    @args(void, [String])
    def setResult(self, value):
        self.button.setEnabled(True)
        self.view.setText(value)
We define a Task class based on the standard AsyncTask class to monitor the background process. This defines three item types that describe the three parameters of the class: Params, Progress and Result.
class Task(AsyncTask):

    __item_types__ = [String, int, String]
For convenience the initialisation method accepts a reference to the activity itself, storing it for later reference, and calls the base class method as normal.
    @args(void, [ProcessActivity])
    def __init__(self, parent):
        AsyncTask.__init__(self)
        self.parent = parent
When the task's
execute method is called, the following method is
called in a background thread. Here, we create a background process using
the array of strings supplied in the array of
Params objects -
Params
is a
String in this case - and collect the output of the process in a
list of bytes.
    @args(Result, [[Params]])
    def doInBackground(self, params):
        builder = ProcessBuilder(params)
        builder.redirectErrorStream(True)
        process = builder.start()
        input_stream = process.getInputStream()
        input_bytes = []
        try:
            while True:
                v = input_stream.read()
                if v == -1:
                    break
                input_bytes.add(byte(v))
        except:
            pass
        process.waitFor()
        return String(array(input_bytes), "ASCII")
After the process has sent all its output, we wait for it to exit before
converting it to a string which we return. This value of type
Result is
delivered back to the main thread and sent to the following method.
Result is a
String in this case.
    @args(void, [Result])
    def onPostExecute(self, result):
        self.parent.setResult(result)
The method simply calls the activity's
setResult method which displays
the string in a
TextView. This is possible because the
onPostExecute
method is itself run in the main thread. | http://www.boddie.org.uk/david/Projects/Python/DUCK/Examples/Serpentine/Process/docs/process.html | CC-MAIN-2017-51 | refinedweb | 638 | 51.85 |
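For comparison, the same pipeline (build a process, merge stderr into stdout, read all the output, wait for exit) looks like this in desktop Python with the standard subprocess module; this is an aside for readers, not part of the Serpentine example:

```python
import subprocess

def run_and_capture(argv):
    # Equivalent of ProcessBuilder plus redirectErrorStream(True):
    # stderr is merged into stdout and the whole stream is captured.
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    output, _ = proc.communicate()  # read until EOF, then wait for exit
    return proc.returncode, output.decode("ascii", errors="replace")

code, text = run_and_capture(["echo", "hello"])
print(code, text.strip())
```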
Level 1
This is a very interesting and somewhat hard question! Bitcoin is a money protocol, invented some years ago. When I first got to study Bitcoin, one Bitcoin was worth around $90; now it's worth around $1000 and growing, which obviously makes it interesting. The basic idea behind Bitcoin is: if you transfer your money to someone else, this transaction is recorded in a ledger, the record is signed with a hash that is computationally hard to compute, and the ledger is shared by a lot of people, making forgery infeasible. The chain of changes to the ledger is also stored.
Now the challenge is implementing a similar protocol, simplified in two aspects: first, the ledger is not distributed (it is located on Stripe's server), and second, the computationally complex part is actually not that complex (something around 4 million SHA1 hashes).
The Git protocol, stores commits inside local objects. Each file is committed into a blob object, each directory structure in a tree object, and each commit metadata in a commit object. Each object is assigned an SHA1 hash, computed in some specific way. A commit's hash is computed by hashing the following data record:
- Hash of its parent commit (hence chaining commits in history)
- Hash of the tree object where it resides (root repository folder)
- Commit Metadata (username, time, description, etc.)
Now you have to add a coin to yourself, by adding your name to the ledger.txt file with the number 1 in front of it, and committing the change. To make this computationally hard, the server rejects any commit that produces a commit object hash bigger than 000001 (pad it with 34 zeroes to the full 40 hex digits if you wish).
Lets figure out how hard this computation actually is. To do this, we need to compute how many 6 digit hexadecimal prefixes are smaller than 000001, i.e. they are of the form 000000xxxx... . Each hexadecimal digit is 4 bits, thus 5 of those make up 20 binary bits, and the final digit accounts for 3 more bits (its last bit is already defined, 3 are left). 23 binary bits make up a computational space of 2^23, roughly equal to 10^(0.3*23), about 8 million. For a 50% chance of breaking, we need to compute half this space, which is roughly 4 million different hashes.
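As a back-of-the-envelope check of those numbers (following the writeup's 23-bit count; this snippet is mine, not the author's):

```python
# 23 free binary bits give the search space described above.
space = 2 ** 23
print(space)       # 8388608, i.e. roughly 8 million candidate hashes
print(space // 2)  # 4194304, i.e. roughly 4 million tries for a 50% chance
```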
Since SHA1 is one of the cryptographically secure hash functions (i.e it doesn't have a known cryptographic weakness, but does have computational power weaknesses) we can just change a number in our data and re-hash to get different (hopefully) uniform hashes, and after 4 million tries, get a hash that is below the desired threshold.
Since we can not change the parent hash, or the tree structure hash, we have to change the description, simply by appending it with a number (sequential or random, more on this later).
The initial BASH code achieves this, but as stated is too slow to be useful:
#!/bin/bash

prepare_index() {
    perl -i -pe 's/($ENV{public_username}: )(\d+)/$1 . ($2+1)/e' LEDGER.txt
    grep -q "$public_username" LEDGER.txt || echo "$public_username: 1" >> LEDGER.txt

    git add LEDGER.txt
}

solve() {
    # Brute force until you find something that's lexicographically
    # smaller than $difficulty.
    difficulty=$(cat difficulty.txt)

    # Create a Git tree object reflecting our current working
    # directory tree
    tree=$(git write-tree)
    parent=$(git rev-parse HEAD)
    timestamp=$(date +%s)

    counter=0

    while let counter=counter+1; do
        echo -n .

        body="tree $tree
parent $parent
author CTF user <me@example.com> $timestamp +0000
committer CTF user <me@example.com> $timestamp +0000

Give me a Gitcoin

$counter"

        sha1=$(git hash-object -t commit --stdin <<< "$body")

        if [ "$sha1" "<" "$difficulty" ]; then
            echo
            echo "Mined a Gitcoin with commit: $sha1"
            git hash-object -t commit --stdin -w <<< "$body" > /dev/null
            git reset --hard "$sha1" > /dev/null
            break
        fi
    done
}

reset() {
    git fetch origin master >/dev/null 2>/dev/null
    git reset --hard origin/master >/dev/null
}

# Set up repo
local_path=./${clone_spec##*:}

if [ -d "$local_path" ]; then
    echo "Using existing repository at $local_path"
    cd "$local_path"
else
    echo "Cloning repository to $local_path"
    git clone "$clone_spec" "$local_path"
    cd "$local_path"
fi

while true; do
    prepare_index
    solve

    if git push origin master; then
        echo "Success :)"
        break
    else
        echo "Starting over :("
        reset
    fi
done
Before explaining the code, it is worth mentioning that they have a bot that mines Gitcoins every 15 minutes (i.e. changes the ledger and pushes it to the repo), making your parent hash differ and forcing you to start over. This means that your code should be fast enough not to be bothered by a 15 minute threshold.
This code is divided into 4 sections, 3 of which are functions. The first section clones the repo into another folder (so as not to mess with the current repo the script runs from), and then calls the three functions inside a loop.
The first function, prepare_index, adds your name to the ledger and stages the change with git add. The two fancy perl/grep lines in it simply add or update a string in the file (it's sometimes very messy to do this with bash). The last function, reset, checks the online repo and, if things have changed (because of the miner bot), pulls those changes in. The third function, solve, tries to find an appropriate hash by constructing different descriptions.
These three run in a loop because, by the time solve succeeds, your 15 minutes might already be up and your hash may be useless; the push will then fail, and the loop tries again. Don't bother running this as-is though: it is too slow to find even a single qualifying hash in time, and we'll get to why later.
What solve does is basically construct the metadata string in the body variable, composed of the parent hash, the tree hash, a timestamp and some description. The description is appended with a counter, to produce different hashes over the iterations.
Since we don't know how Git computes those hashes, we ask it to generate one for us. The following line:
git hash-object -t commit --stdin <<< "$body"
has three important components. First, it computes an object hash for us. Second, we tell it the object type with the -t switch, and without a -w switch it only computes the hash without committing anything. Third, we give it our data record as input. If the resulting hash from this line is smaller than 000001 (lexicographically speaking), we're good, so we commit with that description and try to push.
Remember that we need to repeat this process roughly 4 million times, but this is not purely computational code: it is bash, and each statement involves spawning a system process! Spawning processes is very heavy, and hence extremely slow. What we need to do is reproduce this code (at least the part inside the loop in the solve function) in another programming language. The hard part of doing that is generating those Git hashes, because the git binary was doing it for us before, and now we need to figure out how to do it ourselves.
The Git internals book has a section on this, but it's not enough. It tells you how objects are stored and handled, and finally how to create a blob object and its hash using Ruby, but it doesn't mention how commit object hashes are made, which is what we need for this problem.
The ruby code for this would be as follows:
system('./prepare.sh "checkout-url" ledger-username') # prepare.sh file later
Dir.chdir Dir.pwd + "/temp" # change directory to temp

tree   = `git write-tree`.delete("\n")     # no need to know internals of this, its only run once
parent = `git rev-parse HEAD`.delete("\n") # same here
timestamp = `date +%s`.delete("\n")
difficulty = "000001"
counter = 0

base = "tree #{tree}
parent #{parent}
author CTF user <me@example.com> #{timestamp} +0000
committer CTF user <me@example.com> #{timestamp} +0000

Give me a Gitcoin

" # moving this out of the loop makes the code run much faster, because this part never changes

require 'digest/sha1'

while true do
  counter += 1
  content = base + counter.to_s + "\n"
  store = "commit #{content.length}\0" + content # this part is important
  sha1 = Digest::SHA1.hexdigest(store)
  if (sha1 < difficulty)
    print sha1 + "\n"
    break
  else
    if (counter % 100000 == 0)
      print counter.to_s + "\n"
      STDOUT.flush()
    end
  end
end

File.open("t.txt", 'w') { |f| f.write(content) } # put the commit body in file t.txt
sha2 = `git hash-object -t commit t.txt`.delete("\n") # make sure that git computes the same hash
if (sha1 == sha2)
  system("git hash-object -t commit -w t.txt") # and commit the changes
  system("git reset --hard '#{sha1}' > /dev/null")
else
  print "Invalid!\n"
end
After running this code, the correct SHA1 hash will be printed and the changes will be committed. The code prints visual feedback every hundred thousand hashes, which take about a tenth of a second on my machine.
The major problem with writing this code is figuring out the line commented as "important". There is no mention of how the string needs to be generated and then hashed, so we seek help from reverse engineering.
To figure this part out, add your name to the LEDGER.txt file, commit the changes using git commit -m "MSG" -a, and note the resulting hash (Git only prints a few digits of the hash, because the odds of two hashes colliding in those digits are so small). For example, let's assume that the output hash was abcdef01; run the following code:
python -c "import zlib,sys;print repr(zlib.decompress(sys.stdin.read()))" < .git/objects/ab/cdef01...
Pressing tab fills in the rest of the filename. We need this Python snippet to unzip the file, which is zlib-compressed as mentioned in the internals book (a command line zlib tool is hard to come by). The result should look like:
commit 198\x00tree 10fe092c5bbdd19a8550f6b1976babc659fe82f0\nparent 000000d9e8c29e90b3f7341da2a4b0a72043ea48\nauthor AbiusX <git@abiusx.com> 1390441995 -0500\ncommitter AbiusX <git@abiusx.com> 1390441995 -0500\n\nMSG\n'
As you can see, it's almost like the blob format, but with the word commit in its place. You might be lucky and get the format right with trial and error, but I wasn't.
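The framing we just reverse-engineered ("commit", the body length in bytes, a NUL byte, then the body) is easy to double-check outside of Git. Here is a small Python sketch using the example record above:

```python
import hashlib

def git_commit_hash(body):
    # Git object framing: b"commit <byte length>\0" + raw commit body.
    store = b"commit %d\x00" % len(body) + body
    return hashlib.sha1(store).hexdigest()

body = (b"tree 10fe092c5bbdd19a8550f6b1976babc659fe82f0\n"
        b"parent 000000d9e8c29e90b3f7341da2a4b0a72043ea48\n"
        b"author AbiusX <git@abiusx.com> 1390441995 -0500\n"
        b"committer AbiusX <git@abiusx.com> 1390441995 -0500\n"
        b"\n"
        b"MSG\n")
print(len(body))              # 198, matching the "commit 198" header above
print(git_commit_hash(body))  # the hash git computes for this commit object
```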
This is the contents of prepare.sh:
#!/bin/bash

rm -rf temp
git clone $clone_spec temp
cd temp

perl -i -pe 's/($ENV{public_username}: )(\d+)/$1 . ($2+1)/e' LEDGER.txt
grep -q "$public_username" LEDGER.txt || echo "$public_username: 1" >> LEDGER.txt

git add LEDGER.txt
I have a fast connection, so I just clone the repository every time I want to run this script, instead of checking for changes. This whole hash generation process takes a few seconds on my system; then all I need to do is type git push in the terminal, and I'm good to go. In case you're not, re-do the whole process (the 15-minute bot just killed you). If your system is too slow, replace counter=0 with counter=rand(1000000000) in the Ruby code, and spawn 4 or more Ruby processes (as many as you have cores).
Bonus round: after you win this, you're granted access to a repo which is shared between all contestants, all mining off it. The fastest code prevails. If I wanted to win for sure, I'd use an OpenCL SHA1 implementation like Cryptohaze. That thing is at least a thousand times faster (taking a tiny fraction of a second to solve this problem); but who cares, it's not real money.
Level 2:
This is a simplified DDOS scenario. Your role is to implement a proxy that detects bots and drops them, because everything that reaches the backend server consumes a lot of resources. Everything is in JavaScript, to give it a fragile touch too. Just uncomment the takeRequest method in the shield file:
Queue.prototype.takeRequest = function (reqData) {
  // Reject traffic as necessary:
  if (currently_blacklisted(ipFromRequest(reqData))) {
    return rejectRequest(reqData);
  }
  // Otherwise proxy it through:
  this.proxies[0].proxyRequest(reqData.request, reqData.response, reqData.buffer);
};
You essentially need to implement the function currently_blacklisted, but we're gonna take two more steps to further optimize the code. You might even get past this challenge just by implementing currently_blacklisted as follows:
function currently_blacklisted(ip) {
  return (Math.random() < .5);
}
That's because you drop 50% of the load, and the server is no longer overloaded as much! But let's get serious and see some real candy (add the following code):
var ip_pool = {};
var total_reqs = 0;
var unique_reqs = 0;
var active_requests = 0;

function currently_blacklisted(ip) {
  total_reqs++;
  if (ip in ip_pool) {
    ip_pool[ip]++;
  } else {
    unique_reqs++;
    ip_pool[ip] = 1;
  }
  if (ip_pool[ip] < total_reqs / unique_reqs / 5)
    return false;
  return true;
}
What we do here is three things: if the IP is new, we add it to the list and count one more unique IP; if it already existed, we increase its counter; and we count total requests. Then, if this IP has used less than a fifth of the per-capita request count, it's a benign one. The factor of 5 was found by experimentation (though it was the first thing I tried).
Don't get too excited now: no one will ever be able to stop a DDOS attack with this approach alone. There are 4 billion IPv4 addresses on the Internet, and your computer cannot have enough memory to account for all of them; this is just a game.
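To see how the one-fifth-of-per-capita cutoff separates bots from ordinary clients, here is a small Python re-implementation of the same counters (a sketch with synthetic traffic, not the contest code): five chatty bots ramp up first, then a hundred clients send one request each.

```python
def make_filter(factor=5):
    # The same counters as the JavaScript shield, wrapped in a closure.
    ip_pool = {}
    total = unique = 0

    def blacklisted(ip):
        nonlocal total, unique
        total += 1
        if ip in ip_pool:
            ip_pool[ip] += 1
        else:
            unique += 1
            ip_pool[ip] = 1
        # Benign only if this IP is below a fifth of the per-capita count.
        return ip_pool[ip] >= total / unique / factor

    return blacklisted

bl = make_filter()
bot_dropped = sum(bl("bot-%d" % b) for _ in range(1000) for b in range(5))
legit_dropped = sum(bl("legit-%d" % k) for k in range(100))
print(bot_dropped, legit_dropped)  # 5000 0
```

Every bot request is rejected once the averages are dominated by bot traffic, while the one-shot clients all slip under the threshold.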
Now let's further enhance this code, by changing the part of the code we uncommented earlier to the following:
Queue.prototype.takeRequest = function (reqData) {
  // Reject traffic as necessary:
  if (currently_blacklisted(ipFromRequest(reqData))) {
    if (active_requests > 2)
      return rejectRequest(reqData);
  }
  // Otherwise proxy it through:
  active_requests++;
  this.proxies[Math.floor(Math.random() * 2)].proxyRequest(reqData.request, reqData.response, reqData.buffer);
};

Queue.prototype.requestFinished = function () {
  active_requests--;
  return;
};
Two things are new here. First, we implemented load balancing (we had two backend servers, remember?) by randomly passing each request to one of the two. Second, we track the number of active requests (those being processed by the backend), so we know how busy the backend is; if two or fewer requests are active, we pass a request along even if it looks like a DDOS request, just to prevent idleness!
Overall, these few changes should give you a score of 150+.
Level 3:
Let me warn you before we get to this level: if you've never done functional programming, or Scala programming, or both, and if you're not familiar with ways to write functional code that parses strings, you're doomed and stuck at this level for good. I'm not gonna give you a copy-paste solution, because I don't have one.
This level has a Scala application, with a bunch of text files spanning a directory structure. The Scala code lists each file of the directory structure in an Index object, and when searching for a word, uses this list to read each and every file and scan every single line of it for the text. I have a super-fast SSD drive, and without any changes I got a score of nearly 100 (identical to the benchmark code), but to pass this level you need to get at least 400.
Before getting there, know that this project uses SBT, a Scala project management system, which is ugly, slow and stupid. I've used it forever, and hated it forever. Before running the harness code on this, you should run a single server instance once, just to have SBT install all dependencies and compile everything. Whenever you change a single line of code, you have to do this again to compile and update the structure, and then run the tests. I personally think that Scala has all the badness of Java, plus the worst of functional programming languages, in one place. With all those nice and fast systems out there, why would someone want Scala!? (I've done a bunch of trainings on it a few years ago.)
You need to deal with two files out of all the files in the project, namely index.scala and searcher.scala. The Index file is in charge of listing the files. You should change it to store more information (preferably a shared inverted index, because there's not enough memory for an inverted index per file). The Searcher class searches for a word by checking whether it is inside a file's whole content, and if it is, breaks the file into an array of lines, checking each one to figure out the line number. This should be changed to just perform a lookup on the inverted index hash map and return the filename and line number.
The first link gives you a short inverted index solution in Scala, but you need to generate it for each file and merge the indexes for the set of files. The second link gives you a simplified solution to help you understand, and then deals with its race conditions (yes, they exist in functional code too!). Fortunately, the base code uses the Broker pattern to reduce race conditions. After proper indexing, you can replace the searcher's tryPath method with the following:
def tryPath(path: String, needle: String): Iterable[SearchResult] = {
  return index.find(path, needle)
}
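The indexing side that such a find method relies on can be illustrated with a short sketch (shown in Python here rather than Scala; the names below are made up for illustration): map each word to the (file, line) pairs where it occurs, so a query becomes a dictionary lookup instead of a full file scan.

```python
from collections import defaultdict

def build_index(files):
    # files: {filename: text}. Maps each word to (filename, line_no) pairs.
    index = defaultdict(list)
    for name, text in files.items():
        for line_no, line in enumerate(text.splitlines(), start=1):
            for word in set(line.split()):   # each word once per line
                index[word].append((name, line_no))
    return index

def find(index, needle):
    return index.get(needle, [])

files = {"a.txt": "hello world\nfoo bar", "b.txt": "hello again"}
idx = build_index(files)
print(find(idx, "hello"))  # [('a.txt', 1), ('b.txt', 1)]
```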
It's too late tonight, but I'll update this post later with the updated code for this level, and the next level.
Tags: CTF, Stripe, stripe 3 ctf writeup, stripe ctf 3, stripe ctf v3, stripe ctf v3 writeup, stripe ctf3 writeup, stripe distributed systems, stripe writeup, write-up, writeup
anon
update plx
Ava
Using your example for Level0, simply setting the Hash without nils will make it run faster.
t= Hash.new{0}
rpu
I agree with you that Scala is the combination of all the worst features in languages. For me it is the 'C++' of the Java world. I wasted a lot of time with level 3. My machine was too slow for the overbloated sbt build process, so I gave up on ctf3. I tried an inverted index, trigrams and suffix arrays, but either the memory blows up or it was still too slow (under 100).
Good luck!
David W
rpu: Have you tried modifying bin/start-server? Change the -Xmx argument to something larger such as -Xmx400m
Remove duplicates from a list in-place
How can one make a list contain only unique items while preserving order AND updating it in-place?
I know that a set can be used, but it will not guarantee ordering.
Use a supporting set, and a while loop:
def unique(arr):
    tmp_set = set()
    i = 0
    while i < len(arr):
        if arr[i] in tmp_set:
            del arr[i]
        else:
            tmp_set.add(arr[i])
            i += 1
The above will update the array in-place, and preserve ordering of the elements as well.
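An aside that was not part of this answer: since CPython 3.7, plain dicts preserve insertion order, so dict.fromkeys gives an order-preserving dedupe in one line, and slice assignment keeps the original list object:

```python
items = [3, 3, 3, 1, 2, 2, 4]

# dict keys are unique and keep insertion order (Python 3.7+):
deduped = list(dict.fromkeys(items))
print(deduped)  # [3, 1, 2, 4]

# Assigning into items[:] mutates the same list object in place:
items[:] = dict.fromkeys(items)
print(items)    # [3, 1, 2, 4]
```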
Python - Ways to remove duplicates from list, Recommended Posts: Remove duplicates from an array of small primes � Search an element in a sorted and rotated array with duplicates � Remove duplicates� Remove any duplicates from a List: mylist = ["a", "b", "a", "c", "c"] mylist = list (dict.fromkeys (mylist)) print(mylist) Try it Yourself ».
Enhanced CodeSpeed solution.
lst = [1, 2, 2, 1, 1]
seen = set()
length = len(lst) - 1
i = 0
while i < length:
    if lst[i] in seen:
        del lst[i]
        i -= 1
    seen.add(lst[i])
    i += 1
    length = len(lst)
print(lst)
You can use a for loop to iterate over the entries in the list one by one and append them to a new list only if they are not already present in it (note that this builds a new list instead of updating the original in place):

lst = [3, 3, 3, 1, 2, 2, 4]
my_list = []
for i in lst:
    if i not in my_list:
        my_list.append(i)
my_list  # Output - [3, 1, 2, 4]
I hope this helps.
Python Remove Duplicates from a List, The common approach to get a unique collection of items is to use a set . Sets are unordered collections of distinct objects. To create a set from any iterable, you� DeDupe List Remove duplicate lines from a list Paste lines into the field, select any options below, and press Submit. Results appear at the bottom of the page.
This is PQ representation for a single representation. More...
#include <pqRepresentation.h>
This is PQ representation for a single representation.
This class provides an API for the Qt layer to access representations.
Definition at line 47 of file pqRepresentation.h.
Returns the status of the visibility property of this display.
Note that for a display to be visible in a view, it must be added to that view and its visibility must be set to 1.
Set the visibility.
Note that this affects the visibility of the display in the view it has been added to, if any. This method does not call a re-render on the view; the caller must do that explicitly.
Returns the view to which this representation has been added, if any.
Returns the view proxy to which this representation has been added, if any.
Renders the view to which this representation has been added if any.
If
force is true, then the render is triggered immediately, otherwise, it will be called on idle.
Simply calls renderView(false);.
Definition at line 99 of file pqRepresentation.h.
Fired when the visibility property of the underlying display changes.
It must be noted that this is fired on the property change, the property is not pushed yet, hence the visibility of the underlying VTK prop hasn't changed.
called when the display visibility property changes.
Called by pqView when this representation gets added to / removed from the view.
Reimplemented in pqPipelineRepresentation.
Definition at line 121 of file pqRepresentation.h. | https://kitware.github.io/paraview-docs/latest/cxx/classpqRepresentation.html | CC-MAIN-2021-49 | refinedweb | 251 | 59.7 |
On 02/01/2012 07:01 AM, Hendrik Schwartke wrote: > There are some changes made to the conf code to handle some new tags > like <title> or to add some "metadata" to a domain description in the > last days. > I'm wondering if there is a more generic (read better) way to handle > those extra data in xml config elements. > I think it would be far more reasonable to use xml namespaces to enable > libvirt to differentiate between config options which it has to process > and data which are unknown to libvirt and thus are only stored without > any further processing. That's precisely what we have already started implementing. We added the <metadata> element that takes one sub-element per namespace for use by however the application sees fit: and have a pending API to allow runtime manipulation of that metadata: > > This distinction could then by easily used for expansions like the > <title>-tag, to store the x- and y-position of a guest in a graphical > visualization of virtual networks, to add some names of administrators > which are responsible for the guests, to add some hints to interfaces > concerning quality of service, etc... > > I don't know how complex such in implementation would be but I think it > would be worth the effort. Right now, the only missing part is wiring up the proposed API from Peter's series to actually modify the XML from Zeeshan's patch, if you'd like to help with that. -- Eric Blake eblake redhat com +1-919-301-3266 Libvirt virtualization library
Attachment:
signature.asc
Description: OpenPGP digital signature | https://www.redhat.com/archives/libvir-list/2012-February/msg00046.html | CC-MAIN-2016-22 | refinedweb | 267 | 51.62 |
In this assignment you will be creating a graph from an input data file called graph.txt. The first line in that file will be a single integer v. This number will denote the number of vertices to follow. The next v lines will be the labels for the vertices. There will be one label to a line. Assume that the labels are unique. The next line after the labels for vertices will be a single number e. This number will denote the number of edges to follow. There will be one edge per line. Each edge will be of the form - fromVertex, toVertex, and weight. If the weight is not given, assign a default weight of 1 to that edge. After the list of edges there will be a label for the starting vertex. This will be the starting vertex for the Depth First Search and Breadth First Search as well as the starting vertex for the Dijkstra's shortest path algorithm.
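Reading that file format might look like the following sketch (illustrative only, not part of the assignment hand-out; the whitespace-separated edge fields and the helper name are assumptions):

```python
def read_graph_file(lines):
    # lines: the stripped lines of graph.txt in the order described above.
    it = iter(lines)
    v = int(next(it))
    labels = [next(it) for _ in range(v)]
    e = int(next(it))
    edges = []
    for _ in range(e):
        parts = next(it).split()
        frm, to = parts[0], parts[1]
        weight = int(parts[2]) if len(parts) > 2 else 1  # default weight 1
        edges.append((frm, to, weight))
    start = next(it)  # starting vertex for DFS, BFS, and Dijkstra
    return labels, edges, start

sample = ["3", "A", "B", "C", "2", "A B 2", "B C", "A"]
labels, edges, start = read_graph_file(sample)
print(labels, edges, start)
```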
Here is the outline of the code that we developed in class that you will be modifying. You will be adding an Edge class. You will be adding the following functions to the Graph class and the following test cases to your main program.
class Graph (object):

    # get index from vertex label
    def getIndex (self, label):

    # get edge weight between two vertices
    # return -1 if edge does not exist
    def getEdgeWeight (self, fromVertexLabel, toVertexLabel):

    # get a list of neighbors that you can go to from a vertex
    # return empty list if there are none
    def getNeighbors (self, vertexLabel):

    # get a copy of the list of vertices
    def getVertices (self):

    # determine if the graph has a cycle
    def hasCycle (self):

    # return a list of vertices after a topological sort
    def toposort (self):

    # prints a list of edges for a minimum cost spanning tree
    # list is in the form [v1 - v2, v2 - v3, ..., vm - vn]
    def spanTree (self):

    # determine shortest path from a single vertex
    def shortestPath (self, fromVertexLabel):

    # delete an edge from the adjacency matrix
    def deleteEdge (self, fromVertexLabel, toVertexLabel):

    # delete a vertex from the vertex list and all edges from and
    # to it in the adjacency matrix
    def deleteVertex (self, vertexLabel):

def main():
    # test depth first search
    # test breadth first search
    # test topological sort
    # test minimum cost spanning tree
    # test single source shortest path algorithm

main()
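As a sketch of how the graph.txt format described above can be read (the helper name and structure here are ours, not required by the assignment):

```python
def read_graph(lines):
    """Parse the graph.txt format: vertex count, labels (one per line),
    edge count, edges as 'from,to,weight' (weight defaults to 1),
    and finally the starting vertex label."""
    it = iter(lines)
    v = int(next(it))
    labels = [next(it).strip() for _ in range(v)]
    e = int(next(it))
    edges = []
    for _ in range(e):
        parts = next(it).strip().split(",")
        frm, to = parts[0].strip(), parts[1].strip()
        # default the weight to 1 when it is not given
        weight = int(parts[2]) if len(parts) > 2 and parts[2].strip() else 1
        edges.append((frm, to, weight))
    start = next(it).strip()
    return labels, edges, start
```

With a real file you would call `read_graph(open("graph.txt"))`, since a file object iterates line by line.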
The file that you will be turning in will be called Graph.py. The file will have a header of the following form:
#  File: Graph.py
#  Description:
#  Student Name:
#  Student UT EID:
#  Course Name: CS 313E
#  Unique Number: 53580
#  Date Created:
#  Date Last Modified:
Use the turnin program to submit your Graph.py. We should receive your work by 11 PM on Monday, 05 May 2014. There will be substantial penalties if you do not adhere to the guidelines. You may submit the assignment a day late with a 10 point late penalty. The student assistant in charge of this assignment is Devin Sandhu (devin.sandhu@gmail.com).
Threads in Java enable multitasking: a program can stop or suspend a specific running task and start or resume suspended ones, which helps speed up processing.

In Java programming, the Java Virtual Machine (JVM) controls the lifecycle of a thread.
The different states of a thread during its lifecycle are: New, Runnable, Running, Non-runnable (blocked, waiting, or sleeping), and Dead (terminated).
While the thread is in the Running state, it can further enter non-runnable states: sleeping (via sleep()), waiting (via wait() or join()), and blocked (waiting for a monitor lock or for I/O).
Commonly used methods of the Thread class include start(), run(), sleep(), join(), interrupt(), and isAlive().
Example of Threads in Java:
public class Threads {
    public static void main(String[] args) {
        System.out.println("Numbers are printed line by line, 5 seconds apart:");
        try {
            for (int i = 1; i <= 10; i++) {
                System.out.println(i);
                Thread.sleep(5000);  // sleep() is static: call it on Thread, not on an instance
            }
        } catch (InterruptedException e) {
            System.out.println("Thread interrupted!");
            e.printStackTrace();
        }
    }
}
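Note that the example above only sleeps on the main thread; it never actually starts a second thread. As a sketch (the class and method names here are ours, not from the tutorial), this is the usual pattern of creating a thread with a Runnable, starting it, and waiting for it with join():

```java
// Hypothetical demo class: runs a computation on a worker thread.
class SumWorker {
    static int sumOnWorkerThread(int n) {
        final int[] result = new int[1];
        Thread worker = new Thread(() -> {        // pass a Runnable to the Thread
            int sum = 0;
            for (int i = 1; i <= n; i++) {
                sum += i;
            }
            result[0] = sum;
        });
        worker.start();                           // New -> Runnable; the JVM schedules it
        try {
            worker.join();                        // caller waits until the worker dies
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();   // restore the interrupt flag
        }
        return result[0];
    }

    public static void main(String[] args) {
        System.out.println(sumOnWorkerThread(10)); // prints 55
    }
}
```

Calling start() twice on the same Thread object throws IllegalThreadStateException; create a new Thread for each run.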
Truffaut
A humble tool to help you present ideas by writing Swift.
Requirements
- macOS 10.13, could also work on earlier versions as long as Xcode 9 works
- Xcode 9; the embedded `swiftc` will be used to interpret the slides manifest file
- The Ruby gem rouge; this gem will be used to provide syntax highlighting for source code blocks
Usage
Get Truffaut.app
Clone this repo or download the pre-built app here.
Create slides manifest
Create a Swift file:
$ touch slides.swift
Import the supporting module:
import TruffautSupport
Initialize a presentation with pages:
let presentation = Presentation(pages: [ Page(title: "Hello World", subtitle: "A Swift Slide"), ])
Caveats and Known Issues
- Currently the syntax highlighter assumes Ruby is installed at `/usr/local/bin/ruby`, which is usually true if Ruby is installed with Homebrew.
- If this is not true for you, you can modify the Ruby path in the preferences (`⌘ + ,`).
- Export to PDF is not implemented. | https://iosexample.com/a-humble-tool-to-help-you-presenting-ideas-by-writing-swift/ | CC-MAIN-2022-33 | refinedweb | 155 | 52.9 |
What is OTA?

OTA (Over-the-Air) update is the process of loading new firmware wirelessly to devices like microcontrollers, cellphones, computers, set-top boxes, etc. These updates are generally sent for updating the software, resolving bugs, adding features, and so on. With the increasing use of IoT devices, OTA updates are transferred using frequency bands with low data transmission rates (868 MHz, 900 MHz, 2400 MHz).
In this tutorial, we will send an OTA update to an ESP8266 NodeMCU to blink an LED. The NodeMCU can be programmed with the Arduino IDE, and the on-board CP2102 IC provides USB-to-TTL functionality. To learn more about the ESP8266, check out other ESP8266-based projects.
Components Required
- NodeMCU ESP8266
- Micro USB Cable
Connecting the NodeMCU ESP8266 to Wi-Fi takes some time, as it checks the Wi-Fi credentials. If the SSID and password are correct, the NodeMCU ESP8266 connects to the Wi-Fi network and the ESP's IP address is displayed on the serial monitor.
ESP8266 Blinking LED program for OTA Transfer
The complete code for transferring the blinking-LED program through OTA is given at the end; here we explain some important parts of the code.

Importing the required libraries is the first step in writing the code. The ESP8266WiFi.h library provides the ESP8266-specific Wi-Fi routines needed to connect to a network; it also provides methods and properties to operate the ESP8266 in station mode or soft-access-point mode. ESP8266mDNS.h allows the sketch to respond to multicast DNS queries.
#include <ESP8266WiFi.h> //provides ESP8266 specific Wi-Fi routines we are calling to connect to network. #include <ESP8266mDNS.h> #include <WiFiUdp.h> #include <ArduinoOTA.h> //OTA libraries
Define variables for SSID and password of the Wi-Fi network to which ESP is to be connected. We have to connect our PC and ESP to the same Wi-Fi network.
#ifndef STASSID #define STASSID "your-ssid" #define STAPSK "your-password" #endif const char* ssid = STASSID; const char* password = STAPSK;
The ESP8266 is set to station mode and the Wi-Fi connection is initiated with the given credentials. It takes some time for the ESP to connect to the Wi-Fi network. If the SSID and password are correct, it connects to Wi-Fi; if not, it reboots every second.
Serial.begin(115200);   // set the baud rate to 115200
WiFi.mode(WIFI_STA);
WiFi.begin(ssid, password);
while (WiFi.waitForConnectResult() != WL_CONNECTED) {
  Serial.println("Connection Failed! Rebooting...");
  delay(1000);
  ESP.restart();
}
The IP address of the ESP is printed on the serial monitor once it connects to the Wi-Fi network; WiFi.localIP() returns the IP address of the ESP.

After uploading the code successfully, open the serial monitor at 115200 baud. Press the reset button, and after a few seconds you will see the ESP's IP address on the serial monitor. Now you will be able to upload the firmware wirelessly.
Blinking the LED on ESP8266 through OTA update
Before uploading the next sketch, select the ESP's network port in the Arduino IDE (once the ESP is on the network, it shows up under Tools -> Port), and make sure the ESP is powered by some power source.
After uploading the code successfully, the LED on the NodeMCU ESP8266 will start blinking every second. You can also set a host name and password in the sketch for security while uploading firmware to the ESP.
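The snippets above cover only the Wi-Fi connection; the OTA listener itself is not shown. Based on the standard ArduinoOTA API (the host name and password strings below are placeholders, and this is a sketch rather than the article's exact code), the missing setup()/loop() pieces look roughly like this:

```cpp
void setup() {
  // ... Wi-Fi connection code shown above ...

  ArduinoOTA.setHostname("myesp8266");  // optional: name shown as the network port
  ArduinoOTA.setPassword("admin");      // optional: password required for uploads
  ArduinoOTA.begin();                   // start listening for OTA upload requests

  Serial.println("Ready");
  Serial.println(WiFi.localIP());       // this address appears under Tools -> Port
}

void loop() {
  ArduinoOTA.handle();                  // must run frequently to accept uploads
}
```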
May 22, 2019
Hi.
Thank you for your great work!!!
I have tried to use this on an ESP8266 D1 mini and it works fine.
The awkward thing is, that the Code is full of Serial.print commands, which means, that you still have to connect the device with USB to Serial monitor to see the messages.
But the download of the sketch over WiFi works well. | https://circuitdigest.com/microcontroller-projects/esp8266-ota-update-programming-using-arduino-ide | CC-MAIN-2020-50 | refinedweb | 563 | 65.12 |
Is there a way to count the characters in a string without using strlen? I was trying to write my own string length function using an array. Do I have to initialize the char array? Can someone just give me a hint please? Here is part of the code I have so far, very ugly... I don't even know what I am doing anymore:
#include <stdio.h>

int length(char[]);   /* prototypes the function for the length of the string */
char oddeven(int);    /* prototypes the function for even or odd string length */

int main(void)
{
    int x;
    char array[26];   /* 25 characters + 1 for the '\0' */

    x = length(array);
    printf("\nThe length of the string is %d characters\n", x);
    return 0;
}

int length(char array[])   /* define the string length function */
{
    int i = 0;
    int c;            /* int, not char, so EOF can be detected */

    printf("\nPlease enter a string, maximum of 25 characters:\n");
    c = getchar();
    while (c != EOF && c != '\n' && i < 25)
    {
        array[i++] = c;
        c = getchar();
    }
    array[i] = '\0';  /* terminate the string; no other initialization needed */
    return i;
}
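If you already have the string in an array (instead of reading it inside the function), the counting itself is just a walk to the terminating '\0'. A minimal version of that idea (the function name here is ours):

```c
#include <assert.h>   /* only needed for quick self-checks */

/* Count the characters before the terminating '\0', without strlen. */
int my_length(const char s[])
{
    int i = 0;
    while (s[i] != '\0')
        i++;
    return i;
}
```

Because C strings are '\0'-terminated, no separate length bookkeeping is needed, and the array does not have to be pre-initialized as long as the terminator is written.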
This article is remaining half of my last article, it would be better if you do check this one before continuing (in last article I wrote about what is Java how it works etc) You Must Know This About Java - I, in this I'll conclude the other remaining basic terms of Java making it as short as possible.
Let's begin...
Let's talk about Java code
So Java code is a well organized code. Java code must be written in Class or Interface or in enum body. Writing code in any of these bodies gives security and connectivity or mapping [one to one, one to many, many to one, many to many]. When developing applications in Java, hundreds of classes and interfaces will be written.
- Java is case sensitive
- Java is strongly typed
- White-spaces are ignored by compiler
- Unlike JavaScript and Python you have to take care of semicolons (;) in Java.
Object
Look around and you'll find many examples of real-world objects like your car, your computer system, your bag etc. They all have state and behavior. Car have state(fuel level, wheels, gears etc) and behavior(accelerating, changing gear etc). Objects have states and behaviors. Object's state is stored in fields(variables) and behavior is shown via methods.
Class
When you look around you'll find many individual objects of same kind for example BMW, Audi, Mercedes etc all are car, it means they all have 4 wheels, gears etc that means they all are built from the same set of blueprints and therefore contains the same components. A class is the blueprint from which individual objects are created. A class can have any number of methods.
This will be more clear from this picture:-
Object class
Object class is present in java.lang package. Every class in Java is directly or indirectly derived from the Object class. If a Class does not extend any other class then it is direct child class of Object and if extends other class then it is an indirectly derived. Hence the Object class is the parent class of all the classes in java by default. In other words, it is the topmost class of java.
Runtime class
Java Runtime class is used to interact with java runtime environment. Java Runtime class provides methods to execute a process, invoke GC, get total and free memory etc. There is only one instance of java.lang.Runtime class is available for one java application.
There are 4 different access levels-
- default - Accessible within same package
- public - Accessible everywhere
- private - Accessible within same class
- protected - Can be accessed after importing
Method
Method represents some specific task in a program. Methods allow us to reuse the code without retyping the code. The most important method in Java is the main() method.
- Method describes the mechanisms that actually perform its tasks.
- Method hides from its user the complex tasks that it performs.
- Method call tells method to perform its task.
Two methods of same class can be called together
Syntax:-
method().method();
Instance Variable
Instance variables is used by Objects to store their states. Instance variables are declared in a class, but outside a method, constructor or any block. They are called so because their values are instance specific and are not shared among instances. Each object have a unique set of instance variable.
Packages
A Package can be defined as a collection of similar types of classes, interfaces and sub-packages in the form of directory structure. In simple words you can say a Java package is used to group related classes together. To use any package import keyword is used. There are two categories of packages, built-in packages and user-defined packages. You can think of it as a folder.
Description of package importing -
Importing a particular class from the package
import java.util.Scanner;
Importing all classes of the package
import java.util.*; //this is bad practice
The Number of words in a package are called identifiers.
Framework
A Framework can be defined as collection of built-in objects and libraries to create any real world application. In other words, Java Framework is a collection of predefined classes and functions that is used to process input, manage hardware devices interacts with system software. It provides more security, more networking features, and no server down on huge traffic.
Java provides us some very strong frameworks -
- Spring
- Hibernate
- Grails
- JavaServer Faces(JSF)
- Struts
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/ritvikdubey27/you-must-know-this-about-java-ii-1k21 | CC-MAIN-2021-31 | refinedweb | 745 | 64 |
DataGridView is very powerful and flexible control for displaying records in a tabular (row-column) form. Here I am describing a different way of databinding with a DataGridView control. Take a windows Form Application -> take a DataGridView control.
Follow the given steps.

Step 1: Select the DataGridView control and click its smart tag. Look at the following figure.

Step 2: After clicking, a pop-up window will open.

Step 3: Click the ComboBox.

Step 4: Click Add Project Data Source (look at the above figure). A new window will open to choose the Data Source Type.

Step 5: Choose Database (it is selected by default) and click the Next button. A new window will open to choose the Database Model.

Step 6: Select DataSet (it is selected by default) and click the Next button. A new window will open.

Step 7: Click the New Connection button.
Step 8: Enter the server name, user name, and password of your SQL Server, and select the database name. Look at the following figure.

Step 9: Click the OK button. After clicking OK, you will return to the Data Source Configuration Wizard.

Step 10: Click the Next button.

Step 12: Click on Tables to explore all tables of your database.

Step 13: Click on the selected database table to explore all of its columns.

Step 14: Check the checkboxes to select columns.

Step 15: Click the Finish button. You will see that the DataGridView shows all columns of the table (here, "Student_detail"). Run the application.

Output

Now we bind the DataGridView to the database in code. Add another DataGridView control and write the following code in the form's Load event.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Data.SqlClient;

namespace DatabindingWithdataGridView
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        SqlDataAdapter dadapter;
        DataSet dset;
        string connstring = "server=.;database=student;user=sa;password=wintellect";

        private void Form1_Load(object sender, EventArgs e)
        {
            dadapter = new SqlDataAdapter("select * from student_detail", connstring);
            dset = new System.Data.DataSet();
            dadapter.Fill(dset);
            dataGridView1.DataSource = dset.Tables[0].DefaultView;
        }
    }
}

Run the application. Output will be the same as above.
©2016 C# Corner. All contents are copyright of their authors.
10 June 2010 20:35 [Source: ICIS news]
HOUSTON (ICIS news)--US jet fuel prices usually surge through June with the start of the summer travel season but this year prices will likely fall short amid price volatility and increased supplies, a jet fuel trader said on Thursday.
In four out of the past five years, jet fuel prices have climbed from the Memorial Day holiday to the Fourth of July holiday by an average of nearly 10 cents/gal.
"The school season is ending, it's the beginning of summer, and the beginning of the travel season in a good economy, which would mean more people booking flights," the New York Harbour (NYH) trader said.
But the volatility of prices this year, along with unavoidable and unpredictable events, has increased jet fuel inventories and pared gains in prices.
“Prices have been moving like any other distillate with major weather affecting the pipeline, volcanic ash cancelling flights, and more,” the trader said.
For the week ended 28 May, New York Harbour (NYH) jet fuel averaged $2.009/gal with US Gulf (USG) coast jet fuel averaging $1.966/gal.
The next week started with Memorial Day and ended on June 4. Prices in the week for NYH jet fuel averaged $2.071/gal and USG averaged $2.019/gal. And this week, NYH prices fell down to $2.054/gal with USG prices at $2.007/gal.
Furthermore, crude prices are maintaining a stable range near $70/bbl, so a boost from higher futures for light sweet crude is not expected to push jet fuel higher. Traders suggested a bump of 5 cents/gal for jet fuel prices in June was a more realistic expectation. | http://www.icis.com/Articles/2010/06/10/9366907/june-us-jet-fuel-prices-seen-to-fall-short-of-expectations.html | CC-MAIN-2015-06 | refinedweb | 291 | 70.02 |
James Hunt (@jameshunt)
R&D at Stark & Wayne, finding software solutions to customer problems and changing them into executable best practices.
Stretching as far back as version 1.8 (in September of 2017), Kubernetes has supported a fine-grained access control mechanism called RBAC. Nothing gets done via the Kubernetes API that isn’t governed by some sort permission or another, and there are a lot of them.
Couple that with per-deployment service accounts, named user access credentials, and project-specific namespaces, and you’ve got the makings of a complex authorization scenario.
At times, you'll wonder precisely which permissions you, or a service account you use, have been granted – that's when you should reach for kubectl auth can-i.
$ kubectl auth can-i --list
Resources   Non-Resource URLs   Resource Names   Verbs
*.*         []                  []               [*]
            [*]                 []               [*]
You can also just ask the API to see if a given action is allowed:
$ kubectl auth can-i get pods -n default
yes
$ kubectl auth can-i get pods -n kube-system
yes
$ echo $?
0
These commands exit 0 if such access would be allowed, and 1 if not, making them handy for use inside of shell scripts or other automation:
if ! kubectl auth can-i create secrets; then echo >&2 "You cannot create secrets. Please contact your k8s admin." exit 4 fi # etc.
Want more? Curious what happens when an unprivileged ServiceAccount is involved? Then check out the video and learn you some access control!
Previously published at
Amazon's Latest Volley

1. The math Amazon offers here works well enough, as long as you have the ground assumption that Amazon is the only distributor of books that publishers or authors (or consumers, for that matter) should ever have to consider. If you entertain the notion that Amazon is just 30% of the market, that publishers have other retailers to consider, and that authors have other income streams than Amazon, then the math falls apart.
2. Amazon's math of "you will sell 1.74 times as many books at $9.99 than at $14.99" is also suspect, because it appears to come with the ground assumption that books are interchangeable units of entertainment, each equally as salable as the next, and that pricing is the only thing consumers react to. They're not, and it's not. Someone who wants the latest John Ringo novel on the day of release will not likely find the latest Jodi Picoult book a satisfactory replacement, or vice versa; likewise, someone who wants an eBook now may be perfectly happy to pay $14.99 to get it now, in which case the publisher and author should be able to charge what the market will bear, and adjust the prices down (or up! But most likely down) as demand moves about.
Bear in mind it's entirely possible that Amazon does sell 1.74 times as many books at $9.99 as at $14.99.
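For concreteness, here is the arithmetic behind the multiplier claim; the prices and the 1.74x figure are Amazon's, while the unit count is hypothetical:

```python
# Amazon's claim: a book selling N copies at $14.99 sells 1.74 * N at $9.99.
units_high = 100_000                 # hypothetical unit sales at the higher price
rev_high = 14.99 * units_high        # gross revenue at $14.99
rev_low = 9.99 * 1.74 * units_high   # gross revenue at $9.99, if the claim holds
print(round(rev_low / rev_high, 3))  # 1.16 -> about 16% more gross revenue
```

The roughly 16% gain evaporates, of course, for any title whose actual multiplier is lower than 1.74, which is exactly the objection above.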
3. I’ve said this before and I’ll say it again: I think it’s very likely that if $9.99 becomes the upper bound for pricing on eBooks, then you are going to find $9.99 becomes the standard price for eBooks, period, because publishers who lose money up at the top of the pricing scale will need to recoup that money somewhere else, and the bottom of the pricing scale is a fine place to do it. Yes, the mass of self-published authors out there will create a tier of value-priced books (this has already been done), and I’m sure in a couple of years Amazon will release another spate of numbers that will show how much more profitable $6.99 eBooks are as compared to $9.99 eBooks, and so on. But at the end of the day there will be authors and publishers who can charge $9.99, forever, and they will. If you destroy the top end of the market, the chances you destroy the bottom end go up, fast.
4. I think Amazon taking a moment to opine that authors should get 35% of revenues for their eBooks is a nice bit of trying to rally authors to their point of view by drawing their attention away from Amazon’s attempt to standardize all eBook pricing at a price point that benefits Amazon’s business goals first and authors secondarily, if at all. The translation here is “Look, if only your publisher would do this thing that we have absolutely no control over, then your own income wouldn’t suffer in the slightest!” Which again, is not necessarily true in the long run.
To be clear, I think authors should get more of the revenue of each electronic sale, although I'm not necessarily sanguine about letting Amazon also attempt to set what that percentage should be. Increasing authors' percentages of revenue on electronic sales is an exciting new frontier in contract negotiations, he said, having walked to that frontier himself several times now.
5. While this is not going to happen, because this is not the way PR works.
Look: As Walter Jon Williams recently pointed out, if Amazon is on the side of authors, why does their Kindle Direct boilerplate have language in it that says that Amazon may unilaterally change the parameters of their agreement with authors? I don’t consider my publishers “on my side” any more than I consider Amazon “on my side” — they’re both entities I do business with — but at least my publisher cannot change my deal without my consent. Which is to say that between my publisher and Amazon, one of them gets to utter the immortal Darth Vader line “I am altering the deal. Pray I do not alter it further” to authors doing business with it and one does not.
(I notice in the WJW comment thread someone opines along the lines of “Oh, that’s like EULA boilerplate and it would probably not be enforceable in court,” which I think is a really charming example of naivete, not in the least because, as I suspected, the boilerplate also specifies (in section 10.1) that disputes between Kindle Direct users and Amazon will be settled through arbitration rather than the courts.)
Authors: Amazon is not your friend. Neither is any other publisher or retailer. They are all business entities with their own goals, only some of which may benefit you. When any of them starts invoking your own interest, while promoting their own, look to your wallet.
Update, 8/9/14: Amazon tries a new tactic, addressing readers (and authors who use Kindle Direct Publishing). I comment on it. Spoiler: Still not especially impressed with the logic; Amazon still not your friend.
Disclosure: I deal with Amazon’s subsidiary Audible for several of my audiobooks, including the upcoming Lock In. I am delighted to say that Audible has always treated me wonderfully and I am thrilled to be working with them (I will also note that I have contracts with them that they cannot alter without my consent).
This is your reminder that it is possible to do business with a company, happily, and yet be critical of other aspects of their business. Amazon does things I like, and things I don’t like — just like any other publisher of mine, or business I work with.
I’ve come to regard Amazon as the internet Wal-Mart: a bloated, vicious corporate entity that has gone from using scale as a means to deliver product well, to using it to damage everything that isn’t itself. I will no longer willingly buy anything there if I can get it somewhere else.
Why are those binding arbitration clauses EVEN LEGAL?
It’s a contract, drafted within the context of the law, that states that any disputes as to its terms must be settled WITHOUT access to a court of law?
(Shakes head.)
As for break-shrinkwrap-to-accept-terms licenses, IIRC an English court (hint: jurisdiction uncomfortably close to home for me) ruled in the early 00s that because they’d been around since the 1970s they were “customary” and hence had become binding by default. Despite them being strictly anything but, prior to their sneaking in through the back door …
I find it remarkable that a corporation would so blatantly advertise something reeking so substantially of anti-trust. Sure, on its face, it’s not price-fixing that hurts the consumer directly (i.e., fixed *higher* prices), but it’s still price-fixing. Presumably, as you note, John, it’s price-fixing that would help them push out competition.
Hopefully everyone will (metaphorically) stare at Amazon calmly for a minute, mentally say, “Enh, to hell with you,” and go about their business.
Reblogged this on Sheryl Nantus and commented:
More sane words on the Amazon/Hachette situation…
I think everyone will agree, that the market will undergo a massive transformation in the future.
Amazon wants the change to happen fast. Due to their agile structure, they would profit from fast changes where it’s competitors are lagging behind.
For exactly the same reasons, the traditional book market wishes the change to happen as slowly as possible. They want time to adapt.
The authors and consumers are the battleground (consumers more, authors less) on which the fight occurs. Any approach towards them is tactically motivated. I agree completely with John here.
I am primarily a consumer and leaning heavily towards Amazon. But that is mostly due to my experiences with the German SF&F book market, which can only improve by being torched and rebuilt ;-).
@Charlie, I suspect it’s an effort to limit all the class-action-happiness going around these days. Stopgap thing until some rational tort reform happens.
I worked with Amazon on something non-book related in my former day job. They are very, very smart and on top of revenue, and I would trust their numbers more than I would trust those from other corporations. I get the sense that publishing houses have no idea how price affects sales.
That said, I think general book buyers are much more price sensitive than many of the people posting here. I have a good disposable income and I spend a lot on books, but I’m willing to bet I’m an outlier, especially in this economy. I am price-sensitive with indie books where I don’t know the author, though. I just don’t want to pay $10 for an ebook that might be a total flop. If I read someone’s first book and like it, I’d pay $10 for the next ones, though.
Love the post. Plan on tweeting the crap out of it.
One thought on Production costs (which include the entire supply chain costs). Not sure what the cost is to print & ship a hardback. Let’s say $7.50. There is likely a slight mark-up to account for anticipated unsold books (a cost that is averaged out over all books sold.) Let’s say that is $2.00. So $9.50 is the extra cost for a hardback. It would be less for a paperback as they cost less to make ship & their unsold bit would be less.
The point would be to subtract that from the average hardback sale price. That is usually $25-30 or more. That leaves us around $15. All in all, not that bad. The author’s share should go up though as the value from the words and ideas in each book sold does not change.
And of course, what the market bears matters. Classic economic theory.
Keep writing good sir.
@piewords
@onibabamama, this is one reason why I think Scalzi is so smart for having arranged with Tor (by whose ever initiative or idea) to publish the first five chapters of Locked In for free: the risk of buying a flop goes way down if you can get your teeth into a solid sample of the goods.
Thank you forever and ever, John Scalzi. This is the best rebuttal ever.
@Laurence: My wife and I have talked about the curious thing that is the used book market. A hardback that’s $30 new can be had for $1-$15 used, depending on demand. The real costs of making the physical book object are borne by the first-purchasers; the value of the contents is then to some extent seen in the resale price.
Onibabamama:
There is no doubt many readers are price sensitive, for various reasons, and publishers know it — this is why books come in multiple formats, to address various levels of the market.
One of the great things about eBooks is that it can allow a greater amount of price fluidity to attract readers and to serve various market segments — a book of mine was dropped to $2.99 just yesterday for a day to attract readers, for example. Which is why I do worry about the higher end price cap — it will greatly diminish the desirability of flexible pricing for publishers by eliminating one tier of possible income.
In Holland any bits in a contract you sign that are against the law are invalid. I’m not even remotely trying to suggest our laws are superior on the whole (we have many stupid laws) but this one seems sensible. Not that it would help any Dutch author signing a contract with Amazon but as a matter of civilised principle it seems logical that the law of the land should trump anything an individual or company tries to force on those dependent on their custom or favour.
I know we don’t live in a logical or even sane world but one can dream…
It surprises me still that content providers do not learn napster lesson. If you price things beyond what your consumers find acceptable, they will find a way to steal it from you. They will wait for the library to get the book, a friend to be done with it, or just download it from one of a million sites offering it free.
Publishers have every right to charge what they want for something. But it’s a different world now, people see corporations swindle the average joe everyday with impunity, so they have little empathy for “lost revenues” when they copy a song, movie, or book. You can fight that reality with little success (RIAA) or you can provide something the consumers will buy at a price they consider reasonable.
Personally I won’t buy any book priced over $10, and I buy ALOT of books. I am old enough to know that supporting my favorite authors (usually) keeps them writing the fiction I want to read. Buying used books or waiting for it to show up at the library does not support my favorite author. So as noble as folks want to sound for taking that route, it’s not much different from the publishers perspective then downloading it.
I’m also a big fan of Amazon and their Kindle App. I know they are the big bad corporation everyone loves to hate, but they bring me what I want at a good price with good service. The Kindle app is simply the best reading app I’ve used on my iPad.
Above, you say that Amazon is suggesting “35% of net” when you probably meant to say 35% of gross. You can also say 50% of net, which is double what most publishers pay. There’s quite a difference.
On a lighter note, I got a shiny, real-world copy of “The Last Colony” last week, and discovered that I had failed to read an ENTIRE CHAPTER the last time I read it. Just a really, really important chapter at the end. Since I discovered what a doofus I am, I thought I should share it, and take the opportunity to thank the author one more time. Thank you, Mr Scalzi, it’s a great series of novels.
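On the 35%-of-gross vs. 50%-of-net point above: under the standard retail split the two framings describe the same dollar amount, which a quick back-of-the-envelope check makes clear. This is purely illustrative — the $9.99 list price and the 30% retailer cut are assumed round numbers, not figures from the thread:

```python
# Back-of-the-envelope royalty comparison (illustrative numbers only).
# Assumptions: $9.99 list price; retailer keeps 30%, so "net" = 70% of list.
list_price = 9.99
net = 0.70 * list_price                 # what the retailer passes through

amazon_proposal = 0.35 * list_price     # "35% of gross" to the author
same_as_net_share = 0.50 * net          # identical: "50% of net"

typical_ebook_royalty = 0.25 * net      # the oft-cited 25%-of-net standard

print(f"35% of gross: ${amazon_proposal:.2f}")       # $3.50
print(f"50% of net:   ${same_as_net_share:.2f}")     # $3.50 (same figure)
print(f"25% of net:   ${typical_ebook_royalty:.2f}") # $1.75 (half as much)
```

Since 0.35 = 0.50 × 0.70, "35% of gross" and "50% of net" are two names for the same payout, and the "double what most publishers pay" comparison follows because the common 25%-of-net royalty is exactly half of 50% of net.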
John, you have been forthcoming of financial details in the past; what are *your* experiences with ebook pricing vis a vis sales, earnings, and the effect on print sales?
I am, as a consumer, heavily influenced by ebook prices. I never buy hard covers, but am willing to dish out $12.99 for a new book I really want to read, $14.99 is too much. If I’m not so keen on the content, I’ll wait until the price goes down to $9.99 or less. I have actually purchased way more new books as ebooks than I ever did for print.
But, that’s just me.
-Matt
Hugh:
Fixed! Note the argument before and after the fixing is essentially the same.
Jeff:
There was more to the music industries’ problems at the turn of the century than simply price.
Dang. Posting on FACEBOOK is not searchable In The Cloud. I’m scrolling and searching for what I posted just 2 weeks ago, #4 of the below:
My 5,000 Facebook friends have now seen these world-in-a nutshell from me, in chronological order:
(1) FIRST THING TO KNOW, in Mathematical view of Reality… WHAT IS A DYNAMICAL SYSTEM? [18 hours ago]
(2) FIRST THING TO KNOW, in Literary view of Reality… WHAT IS A NOVEL GOOD FOR? [15 hours ago]
(3) FIRST THING TO KNOW, in BIOLOGICAL view of Reality… HOW DO DIFFERENT SPECIES COMPETE IN A NICHE? [15 hours ago]
(4) FIRST THING TO KNOW, in ECONOMICS view of Reality… HOW DO CORPORATIONS INTERACT WHEN COMPETING FOR A MARKET? [14 hours ago]
(5) FIRST THING TO KNOW, in SOCIOLOGICAL view of Reality… When is a cognitive agent justified or warranted in accepting the statements and opinions of others? [14 minutes ago]
There will be more of these, in subjects where I am peer-review published.
The Napster lesson is that there are a lot of people who feel entitled, for free, to things other people create. First the excuse was that songs were only sold on expensive CDs when you only wanted a couple. Then it was that the songs were laden with DRM. Now that songs are sold individually without DRM, it’s “record labels abuse their artists, so I’m not going to help fund them.” Each of these excuses does have some validity, but fixing them doesn’t seem to stop people from pirating; they just move on to the next excuse. If record labels suddenly became socialistic saints and gave the big bulk of all income to the artists, the excuse would probably be that the artists are too rich.
@ Jeff
The Napster lesson, indeed the lesson that piracy sites keep demonstrating over and over again, wasn’t that record companies were charging too much, it was that if you make something people like and make it limitless and free no one’s going to give a shit about how much money the industry loses as a side effect.
The lesson isn’t “publishers should mark down their prices.” The lesson is “people are freeloading jerks.” Fears about piracy shouldn’t affect publishers’ decisions. If people want to steal stuff off the internet, they’ll do it. It seems (to me, at any rate) that content providers just need to factor it in as a part of the cost of doing business in the digital age.
This has been a fantastic read for me, because this is one of those (rare) topics where I haven’t made up my mind and I’m still collecting a lot of info.
There are a lot of points being made on both sides of Amazon v Hachette, and I’ve had resonance with some from both. Amazon is probably right in a lot of ways to stand up to Hachette on some issues – I leaned toward their position early, when the talk was purely about the agency model and the collusion that set book prices in a frankly non-competitive way. I was disappointed that there wasn’t a bigger legal and settlement fallout from some of the revelations about price fixing.
On the other hand, I’m still greatly bothered by current eBook models and ecosystems, Amazon’s in particular.
Mr. Scalzi says:
. ”
In my opinion, there SHOULD be a massive gulf between eBook and physical book pricing, not because of anything to do with the “costs” of the two media types, but because of the VALUE. Most popular eBook ecosystems offer what I consider a poor value proposition when you consider that you’re really just licensing content. The fracases in recent years with Amazon revoking users’ Kindle copies of various works where copyright or ownership was in dispute put a pretty harsh light on what you’re really getting in most eBook transactions: a license to view, provided that the ecosystem survives. More directly, without the right to lend or transfer it to another party, you don’t really own it, even if the “software” is still available. If I don’t own it, it should be a damn sight cheaper for me to “rent” it than to actually own it, so even $9.99 feels like too much for an eBook. For this reason alone, I’ll stick to physical books for the time being. I’m sure this won’t break hearts, but I think it would be a mistake for authors, publishers, or sellers to gloss over this aspect of the industry and chalk it up to a quiet victory for the “old way” instead of understanding the weakness in the new model.
I’m going to disagree somewhat with the group here but let me say that I am open to hearing evidence to back up the supposition (even if it’s well-reasoned) that makes up this post.
The thing is that authors should absolutely look out for themselves first, as you say, and the evidence that has been presented (as in numbers and figures, which admittedly could be suspect but it’s what we have) suggests that Amazon looking out for itself is better for most authors than Hachette looking out for itself. Unless of course you are one of a handful of mega bestselling authors. Also, the John Scalzis of the world might be able to negotiate things like posting free chapters of his books (or better contracts in general), but the vast majority of authors can’t get those kinds of concessions, even though at this point it seems almost unarguable that they help sales and name recognition.
You claim publishers would make up for ebook price caps at the expense of the midlisters or new authors, but I’ve personally already seen plenty of unknown authors with ebooks priced quite high, which I would argue hurt both their immediate sales and their career trajectory. I know I’ve read blog posts making precisely that assertion from some of them.
If nothing else, it seems like higher prices disproportionately reward the most popular authors at the expense of everyone else. This is a bit simplistic, but I think it’s fair to say your average person has some kind of limit on how much they want to spend on books in a given month or year. If they are huge fans of Patterson and Baldacci but don’t mind gobbling down a few Russell Blakes as well (granted, he is self published), they may shut Blake out if they have to spend their budget on $15+ ebooks from their very favorites. Of course, now I’m supposing too.
When I read the latest info on the whole Amazon/Hachette dealy, I was struck very powerfully by the fact that what Amazon seems to be trying to negotiate is something that’s not actually within their purview. They come across as all nice and kind and concerned by saying that they think authors should get x-percentage, but to me, that seemed a lot like a retail store telling Kellogg’s that unless they pay their factory workers more, the stores will no longer stock Corn Flakes. And only their factory workers, because of course it’s not like anyone else is involved along the product chain (editors and artists don’t exist, of course, and thus don’t need to be paid…) It’s not the distributor’s job to tell one of their clients how much said client should be paying their own clients!
While I accept your viewpoint, I think there is a fundamental flaw that print-to-electronic people (books & comics) have that really is a blindspot.
When you get an e-version of a book or comic book, all you get is the story. You don’t get a hard object that can be sold, given, cherished, etc. So by its very definition it is a LESSER PRODUCT.
So is it not fair to say that reading a book or comic on an electronic medium is a lesser experience? Convenient, yes. But e-readers still seem lacking to most. That having an actual hard copy in your hand is the better experience? A hard copy has value, where an electronic copy is just bits in the ether?
Then why should an electronic copy be the same price? Especially in comics, where resale value is actually a thing (not like it was, but it’s there). After an e-version is read, it has ZERO value.
That’s why it is seen as needing to cost less.
And while you already dismissed the “Electronic versions cost nothing because it isn’t trees-shipped-to-bookstores” theory, and I defer to your better knowledge of the medium on why that doesn’t hold water… you can’t argue that at face value the perception is there and rampant.
Plus, you can’t argue that in the comic industry, it is nothing but price fixing to protect comic stores. Could there be a similar thinking in book publishing to help sustain brick-and-mortar bookstores to a certain degree?
Yes, it is clear that Amazon is using its muscle to price fix… but isn’t there a bit of wiggle room here? More of a reasonable cap on e-book pricing, and to have e-books discounted over a real copy?
Mike H, that’s a really good point. I think along those same lines when I contemplate ‘buying’ something on Amazon Streaming Video.
Re piracy, the main challenge of content producers is pricing it such that the cost represented by the effort of pirating it (for the majority of customers) is greater than the cost paid in money. Identifying a good price point and developing a good delivery mechanism are two big challenges of the business sector.
Also re piracy: no DRM will *EVER* work. For every DRM developer slaving away for content producers the world ’round, there are hundreds or thousands of DRM hackers who love the challenge of breaking new DRM as soon as it’s released. Hopeless. Endeavor.
I don’t want any one corporate entity setting the top limit on a book, period – because no two books have the same amount of time and effort put into them, or the same amount of talent or audience appeal. Unfortunately Amazon makes lock-in easy, so a lot of people don’t avail themselves of other options, even other eBook options which are easy if you use a tablet or phone to read on – that’s why I purchase both Nook and Kindle books, and a lot of my SF directly from Baen eBooks where I can sideload them to my iPad.
Is this the wrong place to grouse about eBook DRM? Because one thing I love about Baen’s eBooks is that there’s no copy protection, so if I decide to switch eReaders I can do so easily….
I’m not sure about $9.99 becoming the standard price and cheaper ebooks falling away (point 3).
Of course, if prices are pushed down, for any reason, publishers and authors will want to maximise their profits. But surely that’s true anyway?
I’ve always assumed that cheap ebooks are priced that way because publishers and authors figure that’s how they’ll make most money. They hope they’ll sell enough extra books to make more profit on volume. Or they’re trying to indirectly sell more expensive books (e.g. selling an author’s older work cheaply so we go on to buy their newer — and more expensive — titles).
I’m not sure why pressure at the higher end of the market would change that.
“This is where many people decide to opine that the cost of eBooks should reflect the cost of production in some way that allows them to say that whatever price point they prefer is the naturally correct one.”
Sure, agreed. I will still argue that ebooks should cost less than physical books *because you get less.* I can’t lend my ebook to a friend. I can’t sell it, or donate it to the church bazaar. I can’t even transfer it from my kindle to my nook, so buying a new reader can orphan my complete collection.
Rethinking eBook DRM could help with some of these problems, but in the meantime, these limitations are why I feel like I’m being gouged when a retailer inflates the price of an eBook.
Mike H. also makes a great point about the value of an ebook vis a vis a physical book. They are pretty obviously worth a lot less as a product for the reasons he states. As for the Napster argument, it has been shown in many cases that while the record labels lost a lot of money, most musicians are doing just fine, ESPECIALLY the midlisters and unknowns, who from what I can tell are doing better than ever. Multiple studies have indicated that piracy can even have a net positive effect on the sales of music or movies. Even some heavy hitters (Trent Reznor comes to mind) have self-published albums and made tons of money off of them. I don’t hate the big publishers, but white knighting them is at least as silly as blind adulation of Amazon.
As Scalzi pointed out, there are different kinds of price sensitivity. I’ll pay more to immediately read a book I’ve been eagerly awaiting than I will to try an unknown author. If an author and publisher have worked together through prior writing, marketing and publicity to get me really excited about a particular book then more power to them if I’m willing to pay more for the privilege of reading that book than I would for a book by a less-successful author-publisher team.
That being said, I’m less concerned about “OMG Amazon’s going to ruin books” because I remember being scolded decades ago for briefly working for and thereby enabling Barnes & Noble, which was at that point the heartless colossus bestriding the publishing industry. If history teaches us anything, it’s that arrogant “We own this market and can dictate terms” companies will have their lunch eaten while they’re not looking. (Remember being worried that Microsoft had an unassailable market share for Internet Explorer?) Amazon’s powerful and all, but there’s a strong whiff of Lannister to their reign.
Both sides use economic theory to try to predict the various benefits, and frankly, I tend to look at all economic predictions with a very skeptical eye, because they are almost always blinded by political or self-interest, or argue “pure” economics that looks real good on paper but doesn’t take into account that humans rarely act completely in their actual best interest, since emotions, ignorance, etc. all get in the way.
That being said, as a reader, I know I can’t trust the publishers to act in my best interest: they did collude with Apple to successfully raise eBook prices and were slapped down for it.
However, one can’t really trust Amazon to continue to act in the reader’s best interest either: when they’re the only player left in the game, the game is going to become rigged against both the authors and the readers – someone is going to get squeezed. Here’s hoping the government would not allow such a monopoly, and that just as the publishers got slapped, so would Amazon should it abuse its market share.
I do find the arguments that somewhat cheaper books will hurt creative output of said books to be shaky – if that held up, music output would have shrunk as album sales declined. The opposite has happened – we have more music today, from a variety of sources, than we ever did before. It’s also cheaper and much easier to access. It’s not a one-to-one comparison (as John has argued); album sales aren’t the only income source for musicians. I don’t know if anyone has done a study on whether the overall income of musicians has decreased/increased over time. I think that’d be an interesting one to see, and much more relevant than the current focus on album sales.
I’m of the opinion that taking sides in the matter is pretty difficult to do for a reader: we can’t honestly predict the outcome long-term for us. We’re going to have to wait for hindsight to really tell us if we won or lost in this battle.
All that can really be said is: I’d rather have these titans fighting it out than colluding. Consumers always lose on collusion.
I actually wanted to contribute to this discussion what I worked on for an hour, paring, polishing, double-checking citations: “FIRST THING TO KNOW, in ECONOMICS view of Reality… HOW DO CORPORATIONS INTERACT WHEN COMPETING FOR A MARKET?” — but I can’t help you with Amazon because Facebook seems to have made my old content inaccessible. I probably DID back these up, but my motherboard friend earlier this month. So I’m using my son’s PC, while waiting for him to be free to swap my terabyte drive (which had most of my documents) into the carcass of my PC. Why do the chips on PC motherboards last such a short time? Why do Amazon and Facebook not care how many hours of work they are intentionally undervaluing by professional content creators? Whatever happened to “Content is King”?
@Austin
I used to be one of those “freeloading jerks” until things became more reasonably priced. There will always be people who make copies over buying the original; it’s been that way since recordable tape and xerox machines were invented.
For me it was about either price or accessibility. I wanted to listen to my music digitally, so if it wasn’t released as an mp3/4 I would just find a download. When eBook readers started hitting the scene (Palm Pilot, anyone?) I got in on the craze early. There were very few publishers on board with eBooks, and so I again resorted to the internet to find content. A lot of people don’t know that eBooks were heavily pirated pre-Kindle. People would actually scan and OCR the books; there was no DRM to break.
Anyway, nowadays prices are more reasonable. I pay for a Pandora subscription, a Netflix subscription, a Google Music subscription, and Kindle Unlimited (just switched from a Scribd subscription). I still buy quite a few books from Amazon, when they go on sale or when the mood strikes me. I don’t pirate anymore; it’s inconvenient for me.
My opinion has always been this: if Amazon wants to charge $9.99 for ebooks they should be able to do that, even if it means they lose money, but they have no right to expect publishers to subsidize that decision. Granted, this may force other retailers to do the same, even if they can’t financially afford it, but that’s how the free market works. It’s always been survival of the fittest.
typo: ” my motherboard friend earlier” should be ” my motherboard fried earlier” Odd, did I type “friend” because of the misleading term “Facebook friend?” Back to lurking now…
I stumbled upon this blog, I don’t agree with what is written here. Let me start with #1 and go from there.
1a. Amazon makes up far more than 30% of the market. Whether it be digital or paper, Amazon is the dominant bookseller. Independent booksellers make up less than 10% of the market. When it comes to publishing and distributing, Amazon is always the elephant in the room, please don’t suggest otherwise.
1b. I think that is the most noble defense of agency pricing that I’ve read. But big publishers don’t care at all about independent booksellers. Agency pricing is about making money not running a charity. This is not a moral story, please don’t frame it that way.
2. I used to think as you do, but the writing is on the wall. Books are a fungible commodity. The proof is in the rapid shift of bestseller content in the eBook domain. Self-published and mid-list authors who price their books competitively are starting to dominate the eBook landscape, while Big 5 books, despite their professional editing, typesetting, design and marketing, are losing ground. There are a few authors like Stephen King who can buck that trend, but the majority cannot.
3a. It already is fixed at $9.99, because $9.99 is the price of the premium mass market paperback. Since the general consumer consensus is that ebooks are not worth as much as paperbacks, people are unwilling to spend more on an ebook than they would in paper form. This is precisely why, on average, more revenue is generated at $9.99 than above it.
3b. The supply demand curve does not work the way that you think it does. Arbitrarily dropping prices will not increase demand. Amazon will not keep pushing down the price of eBooks because the demand will not increase, and thus revenue would drop. They’ve recently been encouraging their KDP authors to RAISE their prices.
4. Isn’t this all wishful thinking? You don’t get to personally decide on the pricing model for your books or how much royalties you’ll take.
Saying that Amazon, publishers and retailers are on no one’s side but their own is perhaps the only reasonable thing written in this blog entry. Yet I don’t know when it became so evil to be out to make money in a capitalist society. This is yet another strangely perverse and hypocritical “Amazon is evil for wanting to make more money, even though I’m only writing this because I want to make more money as well.”
I’m frankly tired of well known authors attacking Amazon for threatening their fortunes. Get over yourselves, you’ll make a ton of money no matter what. And don’t pretend that you’re sticking up for the little guy, the authors that haven’t made it yet. You’re not.
It’d really be nice if Hachette would go to the trouble of making an official statement on the subject.
@Jimmy I don’t think they expect publishers to subsidize anything. Since the production cost on an ebook is next to nothing, it’s not like the publishers lose money when they are priced lower, they just make less of it per unit.
From everything I’ve read, it appears that publishers make the majority of their profits from the “big name” authors. Since high prices disproportionately reward said authors, it’s no wonder that’s the direction they would prefer to go.
@n1quigley The variable cost, the cost of selling the 10,001st ebook after having sold 10,000 ebooks is next to nothing. But there are production costs in the editing, layout, illustrations, and so forth.
David Whitbeck:
“Get over yourselves, you’ll make a ton of money no matter what.”
Thus proving that Mr. Whitbeck has not the slightest idea of what he’s talking about, but is more than willing to tell authors what to do, nevertheless.
You’ve stumbled upon this blog, Mr. Whitbeck. You might want to stumble back out of it.
@ n1quigley – I’ve known plenty of underground and midlist musicians from before and after the Napster wave, and while it wasn’t too notable for most of them at first, over the years it does seem like it’s gotten more difficult to earn a living as a musician.
This is all anecdotal, of course. I’ve seen numbers that argue both ways. Make of it what one will. :shrug:
@ Jeff – I was a freeloading jerk, too. I still don’t do right by many of the artists I like by actually putting down the money for their albums, which is still the most viable way for musicians to earn a living. But my mind did start to change regarding all this stuff when I started seeing how it was affecting my musician friends.
Unfortunately, it doesn’t seem like most people give much other thought to the price of content than, well, the price. Piracy persists even when music, games, books, etc. are literally given away for pennies. Despite the many advantages physical books (and physical stores) present for consumers, tangible products continue to lose ground to cheaper (or free!) digital products.
Ultimately, this is why I feel more or less fatalistic when it comes to Amazon. No matter what, Amazon will always be cheaper, and no one producing a quality product (or paying content creators equitably) will ever be able to compete with that on a serious level.
Of course, when Amazon finally does complete its monopoly/monopsony, maybe I won’t feel so bad about pirating again.
At this point, I exclusively read ebooks. I still buy dead tree for my daughter. If I want the ebook on the day it is released in any format, and for many authors I do, I am willing to pay up to the cost of the new hardback. Yes, this often means I am paying for two hardcover books. But I find that it is well worth the cost to be able to support my favorite authors and to be able to more widely share books I love. Occasionally, I even pay extra to get a book before it is officially released (Thanks, Baen). I feel this is absolutely fair. Amazon telling publishers that they should not be able to charge me what I am willing to pay is not fair. Period.
I skimmed the comments and may have missed a prior mention.
But didn’t Apple just pay a big fine for colluding with the Big 5 publishers to fix ebook prices? Wasn’t one of the primary objectives of their collusion to prop up ebook prices above what the market would bear?
(Just the first link I found – not uniquely informative.)
I am all for a free market. I even agree that Amazon is working in their own interest to increase market share and to increase the size of the market by pressuring for lower ebook prices. Thus far, Amazon has been working to make my reading habit less expensive, so their interests and mine coincide for a while.
Given that the Big 5 have pursued a course that undermines the functioning of that market, perhaps we need to be a bit more skeptical of their arguments against letting book sellers respond to the market when setting prices.
To flip the analogy, Kelloggs shouldn’t be able to tell every grocery store what their profit margin should be in the same way that the grocery store shouldn’t be able to tell Kelloggs how much to pay their employees.
Regards,
Dann
Can someone explain exactly what Amazon’s ongoing costs are that require them to take nearly the same cut of revenue as the author and the publisher? Given the whole post-scarcity thing? Is it just data centres and whatnot?
I believe that ruling has been overtaken by events: I think that EU law makes the position untenable.
It’s a general principle in law that you cannot contract for an illegal act: extreme example, if you hire someone to kill your wife, you can’t sue for your money back if he misses!
But what’s against the law varies, and I imagine that all Amazon’s contracts are governed by US law.
It’s also noteworthy that in many jurisdictions, business-to-business contracts allow for more leeway than do business-to-consumer ones, as there is a presumption that a business will take appropriate advice before signing.
Dann:
However, the discussion here is Amazon’s most recent communication. I agree that skepticism all around is a fine plan – I say so in the entry. But let’s stay on topic, please.
@gadgetdon Yes, I don’t mean to discount the production costs of a book in general (editing, formatting, marketing, etc.), but printing books adds a nice chunk of change on top of that, is all I was saying. This becomes increasingly relevant over the long haul, especially since many authors never get their rights back if ebook sales limp along at a certain level, making it hard for them to personally profit from the backlist.
@Austin, I think that’s fair. The main thing that probably has changed for the small guys is that since they may have even more trouble than before accessing the marketing power of the labels (although like with authors I’m sure this has always disproportionately gone towards the big names), they have to act more like small business owners. I don’t think piracy is hurting their sales directly, but if they are unable/unwilling to market themselves, get on social media, make a cool website, disseminate sample tracks for free, etc., etc., they may have trouble making it. Of course, in general, “indie” artists of any kind will always struggle to make it. I would still argue the power of the internet makes this less true now than ever before (but still true, of course)
Amazon’s math of “you will sell 1.74 times as many books at $9.99 as at $14.99” is also suspect.
It is even more suspect than you suspect, I suspect. $9.99 is a popular price for many goods for a simple reason: it is a “psychological price point” that makes people think something is a bargain even when it isn’t. There have been studies showing that you can sell more units of things priced at $99.99 than you can if you price the same thing at $89.99 (weird but true). Thus, that 1.74 advantage is probably extremely narrow; it probably drops off fairly quickly if you change the selling price to $8.99 or $10.99 or even $9.50.
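For what it’s worth, the arithmetic behind Amazon’s claim is easy to check. A quick sketch, using the two prices from the discussion and Amazon’s claimed 1.74 unit-sales multiplier (whether that multiplier actually holds is exactly what’s in doubt above):

```python
# Revenue comparison implied by Amazon's "1.74x as many copies" claim.
high_price, low_price = 14.99, 9.99
multiplier = 1.74  # Amazon's claimed unit-sales ratio at the lower price

rev_high = 1.0 * high_price        # revenue per copy sold at $14.99
rev_low = multiplier * low_price   # revenue from 1.74 copies at $9.99

print(f"${rev_low:.2f} vs ${rev_high:.2f}")           # $17.38 vs $14.99
print(f"revenue gain: {rev_low / rev_high - 1:.0%}")  # 16%
```

So 74% more copies translates into only about 16% more revenue per title, which is why the whole argument is so sensitive to whether that 1.74 figure survives at other price points.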
An addendum to the second-to-last paragraph in my last post, one that Dann made me consider:
Another reason I feel fairly certain that Amazon won’t be stopped is the fact that Apple got busted for collusion for offering a pricing model that charged more for eBooks, while Amazon’s practise of selling at a loss in order to drive out competitors won’t be called out for dumping.
Yes, I don’t mean to discount the production costs of a book in general (editing, formatting, marketing, etc.), but printing books adds a nice chunk of change on top of that, is all I was saying.
That may not be a correct thing to say, however. In past discussions, people within the industry have pointed out that this “nice chunk of change” is not nearly as big as people outside the industry think it is.
Quoted: “Please stop making the cost of production argument for books and apparently nothing else in your daily consumer life. I think less of you when you do.”
Why do you assume people make this assumption only for books? I see people make this argument all the time for a wide variety of things.
And to say that just because someone has paid more for something at one time they give up that argument is a very narrow point of view.
I see people make this argument all the time for a wide variety of things.
It’s still not a very good argument.
While this is not going to happen because this is not the way PR works, I really really really wish Amazon would stop pretending that anything it does it does for the benefit of authors.
When Hachette does it, I will start criticizing Amazon for not following suit. Hachette never fails to mention its suffering authors when complaining about the situation, and its suffering authors – at least when they dare to speak in public under their real names – never fail to blame Amazon first, last and entirely for their suffering. Until then Amazon, even assuming they are acting in complete bad faith, is only fighting fire with fire.
Rather than speculate, let’s look at an example of author numbers where Amazon is not the sole seller:
@gwangung no doubt that’s true, but my point stands, especially about backlisted titles. Plus, don’t forget that printing is not the only cost associated with selling a physical book. You also have shipping, warehousing, and return processing expenses.
Austin H. Williams@12:30 …Apple offering a pricing model that charges higher for eBooks will be busted for collusion, but Amazon’s practise of selling at a loss in order to drive out competitors won’t be called out for dumping. Too right.
Marc Cabot:
Two stupids don’t make a smart.
no doubt that’s true, but my point stands, especially about backlisted titles. Plus, don’t forget that printing is not the only cost associated with selling a physical book. You also have shipping, warehousing, and return processing expenses.
This may or may not stand, depending on what actually goes on in the real world. I recall that these costs actually included distribution—but I could be wrong. Details matter; if the associated costs make up a tiny fraction of costs, then I think the point is irrelevant.
Another thing to consider about Amazon’s ability to unilaterally change the terms of KDP vs the “locked in” rates of a publishing contract: authors can opt out of KDP at any time. They can’t opt out of their contracts with a big publisher, even if they realize that some of the terms are not so hot or if their backlist is stuck in limbo forever because it’s limping along above some horribly low bar of ebook sales.
While I absolutely agree that books are not commodities and will be priced differently based on all sorts of factors — I think the one place you are wrong is that Amazon will try to press prices downward from 9.99. There’s a history here.
We already see that market forces have brought most prices for most traditionally published books down to paperback prices. Not as a discount, just as list. (Then the books go on discount for 3.99 or even 1.99.) Amazon already knows that 9.99 is the upper limit of the optimal range — not a magic price point that all books should be listed at. The range they identified a long time ago is 2.99 to 9.99. They put as much pressure on those pricing lower than that range as they do on those pricing above.
Amazon is not really focused on the books, so much as the audience. (Pause for disclaimer, I am an investor in AMZN — and some of their competitors.) When you look at their history and financials, you see that they are not actually in a race for the bottom on pricing. They are data geeks — they are all about optimizing the price, and yes, having different prices for different items. They don’t even see their company as a retailer, but rather as a search company.
The thing to keep in mind about press releases such as this is that Amazon hates to reveal any of their data. In this case, they released a tiny factoid that is good PR — but that’s not what they are basing their pricing positions on. They have a LOT more data than that. They not only have data as to how different prices work for different books, they have data on what different audiences will pay, and how and when they pay it. They have data on whether customers bother to read books they got free or cheap. (This is, obviously, for ebooks.) They know when readers stop reading a book, and whether they go back and finish it later.
They are not at all confused about what those numbers they release mean. They don’t see it as an optimal price point which all books should adhere to. It’s just a psychological upper limit. They’re not going to set a lower “upper limit” because that line really is where the data shows a huge jump in purchasing. While other price points between 2.99 and 9.99 may offer even better sales, the difference is not enough for it to matter. It would be leaving money on the table for them to cut off any part of that range.
Camille
I love when people who don’t know what the hell they’re talking about try to tell a successful author about the publishing world.
BTW, John: Usually you or a reader has caught the occasional typo fairly quickly, but this instance seems to be flying under the radar. The first time you typed “boilerplate” you switched the “i” and “o.” Unless a “biolerplate” is something with which I am just unfamiliar. Sorry. I’m powerless against my obsessions.
@gwangung The point is that there are higher margins in ebooks sales, especially over the long-haul, with a much lower cost of entry. I mean, it’s rather self-evident. Just look at the explosion of successful self-published authors. Some of them even pay editors and cover artists and formatters!
Shayde —
When a new Playstation game comes out, I can buy the physical copy for $60, or I can get the digital download for $60. Hell, with store discounts, I might even get that physical copy for $55. I wouldn’t suggest that the digital download of the game is a “lesser” product just because it lacks physical media.
“I really really really wish Amazon would stop pretending that anything it does it does for the benefit of authors.” Looks like you got your wish John.
From the Amazon website: With a mission “to be Earth’s most customer-centric company, where customers can find and discover anything they might want to buy online, and endeavors to offer its customers the lowest possible prices,”
Amazon is concerned with one thing: their customers. Because their customers are how they make money. When companies like Hachette collude to keep prices high (see the price-fixing lawsuit that Hachette lost), it is the consumer that is hurt.
Besides, this whole story ignores the fact that if Amazon is such a bad company, why don’t Hachette and the other big old-fashioned publishers sell their books somewhere else and pull all their products from Amazon? Oh, that’s right, because Amazon has the largest online customer base that they pay to advertise to.
Amazon may not be the perfect company, probably because one does not exist, but they are on the side of the reader and authors and publishers will have to either agree to play by Amazon’s rules or go somewhere else. They can no longer bully the readers and retailers like they used to, and that is a good thing for readers.
Thank you, John, for providing an intelligent discussion about this topic. Your insight, as always, is appreciated.
@ PhilRM – On the plus side, maybe the market is finally beginning to call them on it?
Regarding Amazon’s math of “you will sell 1.74 times as many books at $9.99 as at $14.99” –
I read this as “John Scalzi will sell 1.74 times as many copies of Lock In at $9.99 as at $14.99.”
Which sounds like the sort of thing that the biggest book retailer on the planet would *know*, after a couple decades of slinging books left and right.
(If this were not true, I’d be surprised that Amazon would say it. The Forbes article seems to support it. Anyone else have other math?)
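For what it’s worth, the two numbers in Amazon’s release are at least internally consistent with each other, which is easy to check using only the figures quoted above (this is just arithmetic on their claims, not independent data):

```python
# Check that Amazon's two claims line up: a $9.99 ebook selling 1.74x
# as many copies as the same book at $14.99 yields ~16% more revenue.
high_price = 14.99
low_price = 9.99
units_multiplier = 1.74

revenue_high = high_price * 1.0             # baseline: 1 copy at $14.99
revenue_low = low_price * units_multiplier  # 1.74 copies at $9.99

uplift = revenue_low / revenue_high - 1.0
print(f"revenue uplift at $9.99: {uplift:.0%}")  # -> 16%
```

So the 16% revenue figure follows directly from the 1.74 sales multiplier; neither number tells you anything about how that uplift is distributed across individual titles.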
Concerning books as ‘commodities’ – I agree that I – me – I read with definite preferences for some books over others. I do not shop just on price, and I think hardly anyone does – if that were so, the ‘free book’ stacks at the library would never have anything in them.
That doesn’t mean I don’t practice price discrimination, and won’t delay buying a new book until the price falls into the right range. In this, one might draw an analogy to ‘eating out’ vs ‘home cooking’ – just because you’re not going to cook doesn’t mean that you’re going to eat at Ruth’s Chris. Chinese takeout might well be the default choice.
People who price their goods – clothes, cars, books, whatever – like Ruth’s Chris is the default for their customers are not selling to me.
I should also point out that Amazon has never pretended to be on the side of authors. They have always said — in word and deed — that they are on the side of their customers. In this particular press release, the bit about the authors is a snarky aside in reference to the authors who have come out in support of them.
I am reading as many of the comments as I can while still doing some stuff in the breathing world, meaning I had to skip some, so forgive me if someone else already mentioned this. Also, it is very much an aside, but one reason that I am not prepared to pay (almost) the same amount for an ebook as for a paper book is that with paper I am buying the actual book, while with digital I am buying only the right to look at it.
That is not meant as an excuse for piracy. I will just buy the paper version or do without.
Downloading shit for free against the wishes of those who have the legal right to sell it is stealing. Same as a person stealing your bicycle or a crooked money man stealing your gran’s life savings. Stealing is stealing and folks who dress it up in Robin Hood clothes are either fooling themselves or morally & intellectually dishonest.
I don’t buy the argument that a physical book has any more inherent value than an ebook. If you prefer physical, great, enjoy; I do too. Just don’t discount the value of convenience, instant delivery, and not taking up space in your house. In general, used books are more valuable as pulp than anything else. I wonder sometimes how often the people who talk about the value of lending books to other people actually do it, or whether they sell their used books or donate them to charity/friends/the recycling bin.
How much does a night at the movies cost? How much does dinner out cost? How much entertainment value do those things have over a good long book? Can you lend either of those things out? Does that make them less valuable?
Regardless of medium, you are getting the same exact artwork. It is the art that makes it valuable, not the medium. Both have their upsides and their downsides, and readers make their choice based on those.
This is where many people decide to opine that the cost of eBooks should reflect the cost of production in some way that allows them to say that whatever price point they prefer is the naturally correct one […]
Another analogy: I don’t buy a painting based on the cost of the paint, just like I don’t buy books based on the cost of the paper or the bits.
Books are unusual in that there is a definite ‘time cost’ to them as well, beyond most other entertainment content – the main hurdle for me is not the price of the book, but if I think I will enjoy it. Sinking many hours into a novel I don’t enjoy has a negative cost that outweighs the material cost.
Furthermore, pricing may also have a signaling component – my concern about quality grows when I see an unknown author with a low-cost book. And part of that is exactly why the publishers need to get paid – they provide a valuable service by curating, and that will naturally be part of the book price. (This isn’t to say there are not good self-published authors like Hugh Howey; just that the signal-to-noise ratio is lower, which makes it harder to find those authors.)
However, for what it’s worth, I would argue that ebook prices from big publishers are too high not from the perspective of production costs, but from the perspective of value. While convenient, I don’t actually own any of my ebooks – I just have a license to view them, and no right of first sale. At times I wonder: once I am dead and gone, will my children even bother to browse through my digital collection of books? Could they, if they wanted to? Also, DRM limits the usefulness of ebooks – I would love to buy ebooks from the publisher and load them on my reader of choice. But that won’t happen until the publishers sell books DRM-free. So in practice, I wrestle with purchasing an ebook at the same or higher price than the paper book (which I will get to actually own).
For those referencing the Apple lawsuit – the issue was that Amazon was buying ebooks from the publisher at 70% of their listed price but discounting heavily. The agency pricing was to stop that, and was deemed illegal.
Part of the evidence was an email from Steve Jobs to James Murdoch that included (as one of their options):
“Keep going with Amazon at $9.99. You will make a bit more money in the short term, but in the medium term Amazon will tell you they will be paying you 70% of $9.99. They have shareholders too.”
The medium term has arrived. Whatever you think of the legality, it’s hard to argue with the foresight.
@John Scalzi: Two stupids don’t make a smart.
Couldn’t agree more.
However, this is about perception. Assuming what people think makes any difference – I’ve seen estimates that this thing may have cost Amazon something like 5-10% of book sales among certain demographics who don’t like what Amazon is doing, which is not nothing, if not exactly crippling – if Hachette is standing on the “we care about authors” moral high ground, Amazon is pretty much duty bound to point out why they are the ones who should be viewed as the Author’s Friend. Even though for both of them it’s a load of horse hockey.
“Killing off Amazon’s competitors is good for Amazon; there’s rather less of an argument that it’s good for anyone else.”
Competition is meant to encourage innovation and lower prices, neither of which some traditional publishers seem keen on. It’s not really “competition” if we’re propping up entities that can’t “compete,” so of course the real deal isn’t good for them.
At least in this small, singular case, you are preaching to the choir, John. Preaching to the choir.
Skimming, so perhaps this was addressed but…
When you get an e-version of a book or comic book, all you get is the story. You don’t get a hard object that can be sold, given, cherished, etc. So by its very definition it is a LESSER PRODUCT.
NO. You get a DIFFERENT product. Not lesser. I have probably a thousand physical books. I love books. Mom worked in libraries as I grew up. But I mostly buy ebooks because I like some of the advantages they give (availability of a library on one device, searchability, ability to highlight a word and Google or Wikipedia it). Some people like the physicality of paper books and don’t value these as much as I. That’s fine… but let’s not conflate tradition or personal preferences with better and worse.
On topic more… I’m confused by people who rail against spending $12.99 for a new release ebook but feel $9.99 is a good deal. $3 isn’t that much money once you’ve decided to spend $10. If it is, wait for the price drop when the book comes out in paper and the ebook price drops. That’s what I did most of my life – I don’t like the size and weight of hardcover books and generally didn’t want to spend that much money, so I waited and bought paperbacks. Much of the kvetching about ebooks and price seems to come down to people saying “I want it, I want it now and I want it on my terms!!!” That’s fine, but the world doesn’t often give us everything we want when we want it and how we want it. If it did, I’d be typing this from my Tuscan villa…
Here is a simple infographic on why publishers and authors have zero leverage against Amazon. Yes, they couldn’t care less that you are whining.
Amazon may not be trying to set a hard ceiling of $9.99. They may just be asking for a higher percentage on prices above $9.99, just like they do for self-published books on KDP. In that case, the publisher would still be able to experiment with higher prices, but they would have to pay more to Amazon in those higher price ranges.
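To make the KDP comparison concrete: KDP’s published royalty structure already works this way, with a 70% author share for list prices from $2.99 to $9.99 and 35% outside that band. Here is a minimal sketch of that tier logic (delivery fees and other contractual adjustments are deliberately ignored, so treat the figures as approximations):

```python
# Sketch of KDP's tiered royalty split: 70% of list price for books
# priced $2.99-$9.99, 35% outside that band. Delivery fees and other
# adjustments are ignored, so this is an approximation only.
def author_royalty(list_price: float) -> float:
    rate = 0.70 if 2.99 <= list_price <= 9.99 else 0.35
    return round(list_price * rate, 2)

# The tier boundary itself pushes prices toward $9.99: a $12.99 book
# pays its author less per copy than a $9.99 one.
print(author_royalty(9.99))   # 6.99
print(author_royalty(12.99))  # 4.55
```

Under a structure like this, a publisher could still price above $9.99, but the worse split above the boundary makes higher prices pay off only if they barely dent sales.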
I am playing devil’s advocate on this… so, John, don’t go crazy.
First point is aimed at responses and not at John’s post.
Technically, Amazon is helping the consumer by driving down prices. I believe that under the anti-trust acts, monopolies break the law when they hurt consumers by driving up prices and limiting new technology/competition. At 30% market share, I don’t think there is a monopoly case against them anyway, even if their competitors are smaller.
This is actually good for consumers because it lowers prices. I don’t think there is any case against amazon even if it drives B&N out of business which is what I think they want to do.
My 2nd non-troll devil’s advocate point… uh… the devices you use like the Nook, your laptop, your TV, etc., and even most of your clothes, are built overseas in sweatshops. This lowers the price. It also cost a lot of blue-collar people their jobs. Why should we as consumers care more about your income than we do about the blue-collar factory workers who lost their jobs? Books may not be ‘interchangeable’, but your stake in the game is business. You are concerned about your income. As customers, I’m not sure we should be. I work in IT. I have had my job offshored and onshored (they bring someone in on a visa at 1/3 of what they pay me)… I have to look out for myself.
End devil’s advocate position.
Non-devil’s-advocate discussion… I found one point really interesting.
John argued that if $9.99 becomes the new price point, it will be the price point for all books, even older books, etc., since publishers need to raise prices to make up for their loss on new releases. I’m not so sure this will work real well. Libraries now let you check out ebooks. Overdrive is not as good an app as iBooks (though I’m not real impressed with that either) or the Nook, etc., but you’re not going to have a library. It’s also pretty easy to get older, less popular books. You generally don’t have to wait. Now you don’t even have to go to the library. It’s as much effort as buying from Amazon. To renew, you just click a button. Even after your 9 weeks are up, you just check out again with a button click.
The value in ebook purchases is getting the book now, when there is a long wait at the library for new releases (and when there is a wait, you can’t renew). This could increase your sales to libraries, but since many people can read a copy you sell only once, I don’t think this will be a net positive.
My last comment… OK, so what are you going to do about it other than complain on your blog? Do you have any options? It sounds like you feel helpless and don’t think there is much you can do. Not sure it’s practical to drop Amazon as a vendor in future contracts.
@Phillip McCollum Competition is meant to encourage innovation and lower prices, neither of which some traditional publishers seem keen on. It’s not really “competition” if we’re propping up entities that can’t “compete,” so of course the real deal isn’t good for them.
I think you’re blurring the difference between competition between entities providing similar things and hustling for competitive advantage between entities that depend on each other to sell stuff. Amazon aren’t really competing with publishers, they’re competing with other (e)booksellers.
@jtc –
Given that Hachette often cites print/storage/distribution costs to justify a larger share of p-book revenues in their negotiations with retailers, the minimal cost of producing and distributing e-books is a legitimate point to be considered.
Regardless of their motives, Amazon is perfectly right when they say authors should get a bigger share of e-book sales as the publisher contributes far less of the book’s overall value.
Sorry John, but this post is full of fail. Let’s go point by point:
1. You suggest that the math works for Amazon but doesn’t work if you look at other distributors. Why would the buying behaviors of consumers be different through other distributors? Do you think customers who shop at Barnes and Noble like to pay more? Hidden in this point of yours is the truth of what the publishers are trying to do – forestall the adoption of ebooks – thus protecting their relevancy which is entirely dependent on print distribution. So yes publishers are trying to protect the print industry – but why should authors care if they are making more money with the transition to digital (at least those smart enough not to get stuck in onerous traditional contracts)?
2. Your ground assumption has nothing to do with Amazon’s data. They are talking about the same book at two different price points. You say that Amazon selling 1.74 times as many books is good for Amazon, but not for authors – again how is it bad that authors are making more money? You say as the focus tightens, the general rules stop being applicable. I thought math was pretty absolute. Amazon said $9.99 generates 16% more revenue. Why would this same 16% increase not also apply to how much the author earns (unless there is some special discount clause in the traditional publishing contract that lowers their royalty – they wouldn’t dare do that – would they)?
3. What money do publishers have to recoup if they are making 16% more? How does making more money destroy the top end of the market? How does this have any effect on the bottom of the market?
4. You mean Amazon is trying to detract from the fact that they are trying to save customers money. Yeah, they wouldn’t want that to get out. Unlike Hachette, who isn’t doing anything to obfuscate their true purpose of keeping prices high, like launching a massive PR campaign to say that Amazon is hurting authors and culture. And again you make a statement that just boggles the mind. How would a higher royalty rate for authors not be better in the long run for authors?
5. Try replacing Amazon with Hachette in this point, and it reads a lot more truthfully. You are right that Amazon is trying to do what is best for them. And they seem to have learned an interesting lesson – that by treating authors fairly (70% royalties) they can generate more money for themselves. In other words, they discovered it is actually in their interest to help the interest of authors. Too bad traditional publishing hasn’t learned that lesson.
People seem to be confusing the free market and Amazon.
Amazon, according to this blog, only sells 30% of books. They aren’t telling anyone what they can and can’t sell and at what price.
I can’t march into my neighborhood 7/11 and insist that they sell my hand-drawn Yetis for $20 because of the free market.
Amazon can put any restrictions they want on what they sell.
If I self-publish a book through Amazon, sure I accept some fine print that says they can change the deal. That’s not such a big deal because I am free to leave any time I want.
I have a question to ebook fans in general. I have an ipad. I can run a nook app and a kindle app on the ipad and use books purchased from those vendors on the ipad. Can you use books purchased from other vendors on the Kindle?
If so, bigger authors could drop Amazon, and customers who purchased a Kindle could still read their books on it. I know most authors couldn’t afford to do that. However, really big fish can. That being said, really big fish make enough money that I am not sure they have to care.
Bezos gave some good advice not long ago, suggesting that if customers aren’t happy with things like the games Amazon is trying to play with Hachette, perhaps they should avail themselves of other sellers (my words, not his). So I did just that. I needed a new tablet, so I bought a Samsung Tab 4 and immediately installed the Nook and Kindle readers. I plan to buy at least some of my books from Barnes and Noble from now on, particularly if I have the least bit of difficulty buying them from Amazon. Thanks, Jeff! As Mitt Romney’s much wiser father George once said, there’s nothing as vulnerable as entrenched success.
Does anyone know why Amazon decided to do this with Hachette? It makes sense that Amazon will only go after one publisher at a time. Is there some contract that publishers have with Amazon that may be up and is being renegotiated?
Amazon “decided to do this” to Hachette because their contract lapsed and, according to the only information we have, Hachette refused to negotiate a new contract, or even talk to Amazon, until they killed the pre-order button and stopped keeping stock on hand.
Adam Lipkin-
A digital version is a lesser product because of what you can do with it after consumption. Your videogame example actually makes my point, because you can sell that physical copy at GameStop for a buck or two. A download can’t be sold.
Only if they don’t tell Amazon you’re gone. I don’t know if there’s a limit to how long Amazon will keep an account open with no purchases (and I suspect the same goes for other vendors).
But once your account is gone, so is the library, unless….
Amazon’s DRM is actually trivial to defeat – you can back them up using Calibre, and de-DRM them at the same time: this means they can then be read on multiple devices.
Personally, I don’t feel any regret about this. If they want me to pay the same sort of money for an ebook as for an actual, paper book, then I damn well do expect to be able to lend it to my partner to read, without lending her my Kindle.
And I confess, I am very price-sensitive with ebooks, because it’s a rental, albeit a long-term one (I hope!). I will wait for the price to drop. $9.99 is just under £6, which is about as much as I will pay for an ebook. And the Amazon maths is right: I spend probably 5 times as much on ebooks under £3 as on those under £6, and 50 times more than on those over that point. I’ve never spent over £10 on an ebook (other than technical manuals for work, where the paperbacks can be £30+) and I suspect I never will.
For some authors, there is a limited reach for their books, and dropping the price won’t automatically make them more money. But for some, I suspect they could have serious multiples of revenue if they dropped to the right price point.
As a consumer only, I will never pay over $9.99 on an ebook. Ever. I really wouldn’t care if it was the last book in a series that I’ve been bent on finishing for years, I still wouldn’t buy it. Not on an e-reader or in a physical format. Hachette is shooting themselves and their authors in the feet and kneecaps by not settling with Amazon. Even though I may love an author and all the work they’ve come out with, I’m not going to pay for one e-book what I could pay for one paperback, which I still wouldn’t do, because I can get 3 other e-books that I’m looking forward to reading just as much. If the battle were between Penguin and Nook, I’d feel the same way.
Well. There are a number of questions and considerations, certainly. Who sets book prices? The retailer or the publisher? In most areas, there’s a set cost (more or less), then the retailer decides what they will sell it for in order to accommodate their profit margins. I find it interesting that the publishers, rather than offering a “suggested pricing” for ebooks are apparently trying to control that rather than the retailer. Rather like when Borders & B&N started dramatically discounting hardcover prices on bestsellers. Independent bookstores were not thrilled, but had very little say in the matter. I remember discussing the matter with the owner of a now-gone indie bookstore when she commented that she could buy the hardcover of a bestseller in the same stripmall she was in at Kmart for less than she could order to sell at her store.
On a more this-is-how-it’s-always-been-done arena, publishers are having a tricky time coming up with a business strategy to accommodate releasing ebooks simultaneously with hardcovers. Formerly, they released the hardcover, then about a year later released some sort of softcover version at a reduced price. But with a significant chunk of readers shifting to e-readers, the question becomes: What do we do? Do we release ebooks at the same time as the hardcover, but discount the ebook? In reality, many of the people who have switched to ebooks (such as myself in about 99.9% of my reading) would not be terribly interested in paying $25.95 for an ebook, nor would it force them to run to the bookstore (or Amazon) to thus buy their favorite authors in hardcover. Sorry. When they priced Stephen King’s “Under the Dome” at $25.95+, I opted not to buy it at all and wait for a reduced price. I actually got it for 99 cents (and haven’t read the damned thing). In our heads, at least, readers tend to view ebooks as a different entity with a different price point than a physical book, especially hardcover. An argument that books are all about content (on either side of the argument) doesn’t seem to hold up in terms of reader perception.
Reblogged this on Relentlessly Reading… And Writing About It and commented:
Wonderful explanation of the Amazon/Hachette battle from one of my favorite authors…
Yes, you can natively read .mobi files on the Kindle (AZW is just a version of the Mobipocket format with some custom Amazon tweaks, including some not-very-effective DRM).
You can buy .mobi books from e.g. Baen, and just load them up. You can also convert DRM-free books from other formats (e.g. ePub) using Calibre.
There are many other vendors on the web, and not a few free ebooks too.
Baen seems to have a system where perhaps one book in a series of three or four is paid-for, typically about $6. Which means two things:
1) Average cost of about $2
2) You can try the first book and see if you want to follow the series.
Daniel Knight:
“You say that Amazon selling 1.74 times as many books is good for Amazon, but not for authors – again how is it bad that authors are making more money?”
You’re making the assumption that because Amazon asserts it sells 1.74 times as many books in aggregate that I or any other particular author will sell that many more books in aggregate, which may or may not be the case. I might sell as many books at $14.99 as I would at $9.99, because I might have motivated buyers, in which case Amazon would be forcing me to leave money on the table. Or I might not! Rather than letting Amazon decide for me, I would rather have (through my publisher, who knows rather more about my sales patterns) my own choice in the matter.
Likewise, you seem to be making the assumption that Amazon’s purported increase in book sales will be equally distributed across all book publishers, which again is an assertion not in evidence. A publisher might run their own numbers and make the decision that it’s better for their business to price eBooks at another price point.
Again: What’s good for Amazon is not necessarily what’s good for publishers or for authors — or, for that matter, consumers, since if Amazon (or any other retailer) gets to a monopoly or near-monopoly position, they have no economic reason to price their products in a consumer-friendly manner.
“I thought math was pretty absolute.”
I’m not convinced you are an expert on this matter, however.
@Jantar: The same is basically true here in the States… almost all contracts have a boilerplate clause that says something along the lines of “if a paragraph is deemed unenforceable, the rest of the contract stays in effect.”
However, the American legal system can be very easily gamed to the advantage of those who are patient and willing to spend more money than the other side. You can be clearly in the right in a dispute, but you’ll have to wait 3-5 years and spend over $100,000 USD to be proven right in a court of law.
Just a couple of comments, without being too contradictory and certainly without getting into the swirling vortex of what Amazon “might” do someday (not arguing whether they will or not, but any party in the greater world of publishing “might” abuse any additional power it gets).
First, the Amazon figures are for all ebooks, which means that, mathematically, the universe of authors earns more at $9.99 than at $14.99. If you feel Amazon is lying, that is one issue, but if you believe the figure is correct (and no one has a store of priceless data like Amazon), I can’t see any argument that, on ebooks at least, authors overall make more money at $9.99.
I won’t speak to preserving hardcover sales, except to note that markets don’t take well to being told to pay prices they feel are egregious for products, even when they want them badly. I can’t think of anything better for the piracy industry than $14.99 and $19.99 ebooks. Almost everyone in the marketplace thinks that is ludicrous, and they’re not going to change their minds because some authors want to shore up hardcover sales for a few more years. Standing in the way of the market is foolish in virtually all cases.
Aside from the hardcover issue (and assuming no “Amazon is lying about the numbers” conspiracy theories), more money goes to authors overall at $9.99. Yes, there are authors in those conglomerated numbers who might make less at $9.99, just as there are many who would make more. But more money overall goes to authors, at least on Amazon.
As far as BN and other ebook vendors go, lacking data from them (which could certainly vary from Amazon’s), the best possible estimate is that their results are similar. Markets are markets. Maybe BN has a less price-sensitive audience, but lacking any data pointing to that, you would expect similar results from different retailers in the same space.
Let me be clear. Hachette has the right to sell their product at whatever price they want, just as Amazon has the right to stock what they choose. While I consider the notion of stunting the fastest growing platform (ebooks) to justify high prices for hardcovers to be a poorly conceived strategy, I am not Hachette’s CEO.
As an author, I can say I would be very upset if my publisher priced my ebook over $9.99. I don’t think it pays in the long run. Others may disagree, but I don’t see the data to support that argument (again, aside from propping up hardcovers).
And regarding the comment on one of the responses about Amazon’s demand matching that of the price-fixing scandal, it is not remotely the same thing. The publishers didn’t get in trouble for saying, “we want our books priced this way.” They got in trouble because they all got together and agreed to do exactly the same thing (price-fixing).
re: amazon’s math. There are three kinds of lies: lies, damn lies and statistics.
Cherrypicking your data points is trivially easy now.
@Mord Fiddle:
Given that Hachette often cites print/storage/distribution costs to justify a larger share of p-book revenues in their negotiations with retailers, the minimal costs for producing and distributing e-books is a legitimate point to be considered.
I tend to disagree with this – as a consumer, I am looking at it from the perspective of value, not raw material markup or production costs.
I agree that authors likely deserve more, but that is between the publisher and the author, per their contract. Not something to be dictated by a 3rd party. And again, there is value in what the publishers do – and if there isn’t, the market will take care of that; authors won’t sign with publishers, and readers won’t buy books from those publishers.
Regardless of their motives, Amazon is perfectly right when they say authors should get a bigger share of e-book sales as the publisher contributes far less of the book’s overall value.
*Shrug*. Sure. I’m just not sure I want Amazon being the one that dictates author / publisher contracts. That said, I am frustrated by the DRM that (most) publishers insist on for ebooks, and I think their use of DRM gives them less leverage with Amazon.
I find that most people just flat out don’t have any idea how much it costs to manufacture things. Like, not even a little.
I work in comics, where the prevailing idea is that digital comics should be $0.99 because print comics are $2.99 to $3.99, on the assumption that the bulk of a comic’s price is in the printing of the physical object.

It isn’t. One of my trade paperbacks costs about $1.50 to print. The single issues cost about a quarter. And for a bigger publisher, it’s going to be even cheaper. The printing and shipping of the physical objects accounts for at most 10% of the cover price for any of the books published by the large publishers.
I would guess that the same sort of scale is true for prose books. I do know, for sure, that what you’re paying for when you buy a hardcover at full price isn’t the object. You’re paying a premium to get it FIRST.
“I feel perfectly justified in considering your cost of production position vis a vis publishing as entirely hypocritical.”
Not hypocritical in my case, just ignorant. Thanks for this explanation. I hadn’t considered e-book pricing this way. (To be fair, big publishers don’t generally *explain* it this way. They claim that ebooks don’t actually cost less to produce, ship, store, etc., which is a specious claim.)
Completely off-topic, but I felt it was curious given your recent visit to Houston:
On the subject of DRM, this is where I note that none of my fiction from Tor or Subterranean Press has DRM on it, because I think DRM does nothing useful, and my publishers agree with me (the latter is not true with all my publishers worldwide, although I wish it were). I do believe that if you’ve bought an eBook with my name on it, you own it.
Jay:
“First, the Amazon figures are for all ebooks, which means mathematically, the universe of authors earns more at $9.99 rather than $14.99.”
That’s nice for the “universe of authors,” but yet again it does not mean that individual authors cannot do as well at $14.99 (or any other price point above $9.99) as they would at that $9.99 price point, nor should that assertion mean that publishers should be required to sell their wares at that price point if they choose not to. I don’t mind Amazon making the assertion, but it does not follow that, because the assertion is made, the price point should become a requirement.
While we are on the subject, let’s also note that we are also buying into Amazon’s framing here, i.e., that there’s somehow a $14.99 price point and a $9.99 price point, with nary a price point in between. Well, right now on Amazon, my book Lock In is preselling for $10.67, which is neither of those two price points. If memory serves, when Redshirts first came out, its price point was either $11.99 or $12.99; it sold better than the hardcover, which sold well enough to land me on the New York Times Best Seller list.
Which is to say that we need to ask the question of why Amazon decided to use the $14.99 eBook price point — which to my knowledge none of my eBooks has ever been priced at — to contrast with the $9.99 price point. Rather than, say, $11.99 or $10.67. The answer might be instructive.
@joeiriarte… it’s a specious claim because it’s not true. BUT the costs to print/ship/etc aren’t as expensive as we all speculate, so the profit margin we all expect to be lower for print isn’t really a big factor. They’re getting killed on charges for managing e-books because they got rooked early on by smart people who told them it would be expensive.
So the cost is really similar due to bad infrastructure decisions. At least that’s how it has been explained to me.
So anything negotiating the overall cost of the book directly takes dollar-for-dollar out of profit pretty evenly for both E and Print.
I don’t want to spam up John’s blog with all the math, and besides I’m really, really tired of having to lay it all out again and again. So I’m just gonna put it this way:
I am a book production manager. This is my career of TWENTY YEARS (almost exactly–I started in the first week of August, 1994). I have worked on everything from mass market paperbacks to fancy coffee table books. There’s very, very little I don’t know about what it costs to print, bind, and ship a book. (My knowledge gap: kid’s picture books.)
When it comes to novels, the average mass market paperback costs between $1 and $2 to print and ship. The average trade paperback, between $3 and $4. The average hardcover, $4 to $5.
(If you’re printing POD, it will be higher. Publishers save by printing in bulk on offset presses.)
So if you want the publisher to give all that savings back to the reader (instead of, say, the writers), the best savings you should be demanding is that the ebook price be $2 less than the mass-market list price, or $4 less than the trade list price, or $5 less than the hardcover list price.
Go do some comparisons of list price (not discounted bookseller price) for print vs e- editions. See how close these numbers come. I think you will find many, many cases where the publishers are discounting the ebook editions more than this, particularly on hardcovers.
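The comparison the commenter suggests can be sketched as simple arithmetic. A quick, hypothetical version (the per-format costs are the production manager’s figures above, using the high end of each range; the example list prices are made up):

```python
# Print/ship savings passed entirely through to the ebook list price,
# per the production manager's cost figures above (high end of each range).
print_ship_cost = {
    "mass_market": 2.00,
    "trade_paperback": 4.00,
    "hardcover": 5.00,
}

def cost_justified_ebook_price(print_list_price: float, fmt: str) -> float:
    """Ebook list price if the entire print/ship saving went to the reader."""
    return round(print_list_price - print_ship_cost[fmt], 2)

# Hypothetical list prices: the cost argument alone justifies only
# a modest discount from the print edition.
print(cost_justified_ebook_price(27.99, "hardcover"))    # $22.99
print(cost_justified_ebook_price(7.99, "mass_market"))   # $5.99
```

By this logic, an ebook selling several dollars below these levels is already discounted past what the production-cost argument alone would demand.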
———–
(slight topic change)
Now, you may personally feel that an ebook isn’t worth the price because you can’t lend it, etc etc. The simple answer to that is: Don’t buy it. I decline to purchase all sorts of things because I think they don’t deliver enough value for my needs. Or I purchase an alternative (store brand over-the-counter ibuprofen instead of Motrin brand, for example).
Remarkably, publishers still supply that value-added thing called a “print book.” If you would prefer to have a book you can lend to friends or resell or use to balance a wobbly chair, you can buy a print edition.
Oh, what’s that you say? You like the convenience of having multiple books in a dimensionless space in your pocket or backpack?
Hello, you have just discovered the value add of an ebook. You give up tradeability and resell for portability. Surely that’s worth something to you?
@znepj:
I’m aware of Calibre and de-DRM; however, it is problematic for many reasons, and I would prefer my books to be sold without DRM. And the ease of Calibre and de-DRM makes its presence all the more ludicrous; those that will pirate books can currently do so easily.
And it hurts the publishers, because _any_ frictional barrier like that will make sure that most of your consumers won’t buy an equivalent product directly from you (e.g. something they can read from either a kindle paperwhite, nook, etc.). When buying an ebook, I shouldn’t have to worry about what my reader is at all, aside from technical limitations (color, etc..).
But I think you and I are essentially in agreement; lack of ‘ownership’ and presence of DRM lowers the value of ebooks (for us, at least). For me though, that is partially made up by the increased convenience of ebooks.
@John Scalzi
Thank you!
As a consumer my interests are in me. Cheaper is better. I have always thought ebooks should be <$10 and hated Apple for trying to change that. Printing and paper is expensive and not so eco friendly for those tree huggers around.
John, I’ve been following this whole thing since I wrote about how iTunes was going to affect the music industry in 2008. I’ve been computing the incremental cost of delivering a copy of an ebook or iTunes song all that time. We know that Whispernet costs 15¢ a megabyte, so even with Whispernet, the incremental cost of a book as big as _Atlas Shrugged_ is 45¢. The other cost, compute platform and server, is on the order of 10^-6 cents.
So, with a $9.95 ebook, you have a margin someplace in the neighborhood of $9.00.
If Hachette can’t figure out how to make a profit on 90 percent margin and still pay the author a decent royalty, it’s because they’re fools. If they want more than 90 percent margin (remember that’s a fixed cost) before they’ll pay a decent royalty, it’s not Amazon that’s screwing you.
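The commenter’s back-of-the-envelope math can be checked directly. A minimal sketch, assuming the figures stated above (15¢/MB delivery, a roughly 3 MB text file for a very long novel, negligible server cost):

```python
# Incremental cost of delivering one ebook copy, per the figures above.
whispernet_per_mb = 0.15   # stated Whispernet rate, $/MB
book_size_mb = 3.0         # assumed size of an Atlas Shrugged-scale text

delivery_cost = whispernet_per_mb * book_size_mb   # ~ $0.45
price = 9.95
margin = price - delivery_cost                     # server cost is negligible

print(f"delivery ~ ${delivery_cost:.2f}, margin ~ ${margin:.2f}")
```

Note this counts only the *incremental delivery* cost; as the reply below points out, it ignores editing, design, marketing, and the author’s advance, which is exactly the point of contention.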
John Scalzi said:
“You’re making the assumption that because Amazon asserts it sells 1.74 times as many books in aggregate that I or any other particular author will sell that many more books in aggregate, which may or may not be the case.”
No, I’m assuming that the 1.74 is an average, which if we assume a bell-curve distribution means that most authors will sell more books at this lower price point. It is true that some particular author might sell more books at the higher price, but they will be in a minority.
And yes I’m assuming that the increase in book sales will be fairly evenly distributed because that is how natural bell-curves and averages work. This is also supported by evidence gathered by the self-publishing community.
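The aggregate claim being argued over can be reduced to a two-line calculation. A sketch with a hypothetical unit count (the 1.74 multiplier is Amazon’s figure as quoted earlier in the thread; whether any *individual* author sees this lift is the contested question):

```python
# Amazon's aggregate claim as arithmetic: 1.74x unit sales at $9.99 vs $14.99.
units_at_high = 100_000                       # hypothetical units at $14.99
units_at_low = round(units_at_high * 1.74)    # Amazon's claimed multiplier

revenue_at_high = units_at_high * 14.99
revenue_at_low = units_at_low * 9.99

lift = revenue_at_low / revenue_at_high       # ~1.16, i.e. ~16% more revenue
print(f"aggregate revenue lift: {lift:.2%}")
```

The lift holds for any unit count, since it is just (1.74 × 9.99) / 14.99; the distributional question (who inside the aggregate gains or loses) is not answerable from the ratio alone.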
Your Amazon monopoly thesis is another strawman argument. Amazon is nowhere near a monopoly and hasn’t shown any evidence of practicing the behaviors you think will happen in the future. If they did they would become subject to the laws restricting such behaviors.
Finally your assessment of my mathematical skills as stated by you “I’m not convinced you are an expert on this matter, however,” is quite irrelevant. As a high schooler I competed in mathematics competitions on the local, state, and national level, winning first place awards at all levels (in subject matters all the way up through what colleges call Calculus 2). I attended undergraduate school on a National Merit Scholarship (full ride) and graduated with a bachelor’s degree in mechanical engineering. My final GPA was a perfect 4.0, a feat that according to the Dean of Engineering had not been completed in the previous 15 years that he had been Dean (in any engineering discipline). I got my Master’s degree in mechanical engineering from Carnegie Mellon University on a full-ride from a National Science Foundation Fellowship (after turning down a full-ride offer from MIT). It’s true I didn’t get a perfect 4.0 in graduate school – I actually got one A-. I’ve since worked on projects including the space shuttle main engines, maglev haptic systems, warehouse automation systems, inspection robotics, and electron microscope equipment. I have an IQ that puts me in the top 0.1% of all people on Earth. That being said, I’m still wrong from time to time – just not this time.
I have always thought ebooks should be <$10
Of course, that’s kinda irrelevant if publishers can’t make money at that point.
I keep on pointing out that printing and distribution costs are a far lower part of the price than people assume (and other posters have been pointing that out as well). I feel that gas should be less than a buck a gallon, but I’m not going to find any outlets that’ll sell at that price.
Charlie Martin:
Because everything else associated with book production — the editing, design, marketing and PR — comes for free? Not to mention the cost of the writer’s own work (usually quantified by the advance)? I’m not 100% behind your police work there, Mr. Martin. Certainly the longer a book sells (and sells successfully), the marginal cost of each unit goes down. But that’s a long haul, and most books don’t earn out for their authors (which is different than earning out for publishers, but many of them don’t do that, either).
Mind you, this is once again trying to bring the conversation around to the idea that there is some particular price books should be, based on production costs, and once again I say bah to that. If publishers can sell an eBook at a price point above $9.99 to willing purchasers, why should they not be allowed to do that? It doesn’t matter whether the profit margin is 9% or 99% — what matters is whether there are buyers.
I’m genuinely flummoxed why so many people seem to be having difficulty with the idea of a free market for eBooks, with the prices set by the manufacturers and demand (or not) provided by consumers.
Daniel Knight:
“It is true that some particular author might sell more books at the higher price, but they will be in a minority.”
And so fuck them, there’s not enough of them to count? I’m not 100% behind that police work, either.
“Finally your assessment of my mathematical skills as stated by you ‘I’m not convinced you are an expert on this matter, however,’ is quite irrelevant.”
Well, no, it’s not. Be aware, Mr. Knight, that I could claim to you that I am a super mega math wizard of the 63rd level, plus also a Prince of Andorra. Anyone can claim anything in a comment thread. However, just because you assert something doesn’t mean I am obliged to lend it credence. Likewise, I know I have two decades of experience as a professional writer, and that what you’re suggesting is largely beside the point to the way things work in the real world. So, no. I’m still not convinced that you are an expert in this matter.
IQ scores and GPAs, Mr. Knight?
Heh heh.
You do know that you’re arguing with someone with extensive experience in both self publishing and traditional publishing?
Heh heh heh.
If production costs for books really are as low as several of you claim, it’s quite amazing how little of the remaining pie goes to authors. Multiple big name self-published authors have laid out their budgets for professional editing, formatting, cover design, etc., and it tends to amount to a few thousand dollars, sometimes less. In their case, it seems very obvious that Amazon’s position is more advantageous to their bottom line and thus it is rational for them to support Amazon, even if some of the criticisms leveled by individuals like John are true. Unless I’m missing something crucial, that seems more or less undeniable.
I’m sure it does get muddier when it comes to traditionally published authors, although it’s disingenuous to claim or even to imply that midlisters or new authors are impacted the same as bestselling authors by these sorts of fights. Again, it seems fairly obvious that they are not. Note, I am not saying that this proves Amazon’s position is advantageous to them, only that they should be considered separately.
That said, while I don’t know that there is data to back this up, my own gut instinct here is that higher prices push unknowns out of the to-read list for book buyers on a budget because a) they can’t afford to buy Patterson AND Joe no-name and/or b) they don’t want to risk a $10+ investment on an author they don’t know. Thus my instinct is that only the most popular authors have a self-interest in siding with the big publishers (even if they support a better % on ebooks for all authors as John does here).
For the record, I am not criticizing anyone for looking out for themselves. Also, this equation is sure to change in the future no matter who “wins” this time.
I just took a quick glance at my kindle purchases and the most I’ve spent on an eBook was $12.99 but I was averaging around $6-8 for novels which is basically why I think capping at $9.99 is silly. There are people who will buy above that and people who don’t want to can wait for the price to drop just like with physical books.
For the most part I’m very behind in my reading so I don’t normally buy books right away but for some authors I have no problem paying that “just released” price, especially if it’s a book I’ve been eagerly awaiting. I actually find I’m buying more at that “paperback” price because with physical books it’s a lot easier to impulse buy a book I know I’m going to read eventually whereas with digital I just buy when I’m ready to read (unless it’s on sale) so I don’t buy it when it’s released.
@joeiriarte – A big concern on the part of publishers is retail-based print on demand (POD). If they give ground on e-book pricing due to lower input costs, they’ll lose a like share of p-book revenues when retail POD becomes prevalent.
Gwangung said:
IQ scores and GPAs, Mr. Knight?
Heh heh.
You do know that you’re arguing with someone with extensive experience in both self publishing and traditional publishing?
Heh heh heh.
And that gives him expertise in and the ability to evaluate the expertise of others in mathematics, statistics, and logic – exactly how?
I’m genuinely flummoxed why so many people seem to be having difficulty with the idea of a free market for eBooks, with the prices set by the manufacturers and demand (or not) provided by consumers.
But isn’t it true that Hachette is trying to manipulate the market just as much as Amazon, if not more (and I’m not referring to the earlier price-fixing)? Aren’t retailers traditionally free to be the ones to set the final price on their stock? Direct me to a source if I’m wrong, but it’s not like Amazon is selling at a loss here. If they want to set a price at a level that reduces their own profits per unit, there is no reason that is acting in contrary to a free market. If Hachette wants to sell their books for more, they are welcome to use other retailers (as they have). If the market thinks their product is worth it, their strategy will be rewarded. If not, not.
I’m flummoxed why so many people seem to think otherwise. Sure Amazon has a large market share, but there are plenty of other options out there. At worst, they are the online equivalent of Walmart, but even Walmart doesn’t sell all the things to all the people and put all the other retailers out of business. I’m no Amazon apologist (although I’m sure you probably think so by now), but I think you have chosen a side more categorically than you claim.
@n1quigley – Multiple big name self-published authors have laid out their budgets for professional editing, formatting, cover design, etc., and it tends to amount to a few thousand dollars, sometimes less.
There’s been more than a few indie films that have made millions in revenue with a starting budget of a few grand. That’s evidence that it can be done, not evidence that every movie can be made for $2,184.
You need to include the set of people who spent that self-pub average and didn’t become big names, then compare with the midlist success or otherwise of trad published authors before you can come to some kind of rule – and even then it will just be an average.
John,
No argument that there are many price points that can be compared, and Amazon (just like Hachette or anyone else) is going to offer one that supports their point of view. It is possible, for example, that $11.37 is the optimal price for maximizing aggregate ebook revenue. Or maybe $8.74. However, in the absence of alternate data, I don’t think much is served by presuming that non-existent information will contradict the overall point. Anecdotally, it’s pretty obvious that most readers/buyers don’t think ebooks should be priced above $10. That doesn’t mean they all do, or that those that feel this way won’t make exceptions and buy something they really want. But again, what else can we look at besides the whole market when deciding what is best for “authors,” as opposed to author A or author B individually?
You can’t know now what your Redshirts sales would have been with a $9.99 price point (or any one other than what you had). Perhaps you would have sold more, maybe significantly more. Or not. Or maybe your hardcovers would have fallen. Or not. It’s idle speculation at this point. Because a book was a success doesn’t mean it couldn’t have been more successful if it had been handled differently (or less so). That doesn’t mean you’re not happy with how it worked out, but I assume you’d have been even happier with greater sales.
Whether you think YOU would do better at $9.99 ebook pricing is something for you to decide in planning your career (to the extent you can control pricing). But assuming Amazon’s numbers are correct, more authors would gain (and more total sales in aggregate) at $9.99 than at $14.99. It would be great to have other comparisons at different data points (and Hachette has all these figures too, at least for its own books…I am always suspicious of silence and hypothetical arguments against actual data when one side doesn’t release any). But lacking that expanded information, a reasonable application of statistics in crafting an estimate would strongly suggest that price points between $9.99 and $14.99 would gradually become closer (e.g. more revenue at $13.99 than $14.99, but less difference than the $14.99-to-$9.99 comparison). Certainly there could be statistical wiggles in there, but it’s a pretty good bet it would be close.
n1quigley:
“Direct me to a source if I’m wrong, but it’s not like Amazon is selling at a loss here.”
Mind you, occasionally selling things at a loss to grow market share or awareness is not a bad thing in itself — “loss leaders” are common in every retail segment. You get people in the store with cheap things you lose a little money on, and you get them to buy other things you make money on.
With regard to books, in Amazon’s new subscription business, there are apparently some books that Amazon offers on subscription basis where each download is automatically credited as a sale; these are effectively loss leaders for Amazon as it attempts to build that market and its market share.
Does Hachette (or any other publisher or retailer) want to control its market as much as possible? Obviously they do — again, in my opinion this is neither good nor bad in itself. Where it becomes problematic (or at least interesting) is in the impact it has on other players’ ability to do their own thing — in this case, with authors and publishers, to set their own prices.
Jay:
“However, in the absence of alternate data, I don’t think much is served by presuming that non-existent information will contradict the overall point.”
I have absolutely no doubt Amazon would agree with you. However, when I don’t see data here, my initial assumption is not that they are not relevant, or indeed “non-existent.” My initial assumption is that the reason the data are not there is that they are not useful to Amazon’s argument, and that if Amazon has data on these two sales points, it has them for every other sales point as well. Given the wholesale amount of spin I see in Amazon’s press release, it would be foolish of me to assume that this price point construction is not primarily designed for spin purposes as well.
Bear in mind also that from my point of view I may in fact sell more units at $9.99 than $14.99 (or any other price point) — but that it’s possible I might sell even more by selling at $14.99 first, then at $9.99, and then at $6.99. Amazon’s contention does not appear to offer any insight into this; Amazon does not, it appears, seem to want to consider the idea that offering work at multiple price points above and below $9.99 might be useful to me or to my publisher, because it has decided that $9.99 serves its own purposes sufficiently well.
Terms are being bandied about. Pirate sites charge. I have seen my books for more than I charge. They just keep it all. Bit torrent sites had all 3 Shades of Gray books free within days, if not hours, of their release.
This resource was available on reputable sites. And if you “buy” an ebook, you don’t own it. You acquire the licensed privilege to access it.
I am aware that that has been Amazon’s SOP in the past, but we’re only talking about ebooks here. I believe Amazon’s subscription service credits a sale only when 10% (some speculate that might increase further) of the book has been read and the majority of available titles are self-published, so that’s a different animal, at least for now. Authors don’t get to set their own prices if they are with a publisher anyway, so if you are self-published, Amazon is far more pro-market. If you are traditionally published you have no control either way, although you would certainly want to know which price would be more advantageous to your pocketbook, which is why this discussion is interesting. Certainly, Amazon is more market-friendly from the perspective of self-published authors.
Nevertheless and again, Amazon is a retailer, and retailers have always been able to set their own prices, which is largely where I come down. Of course, I acknowledge that it’s trickier here because ebook profits are split on a % basis, so there is no instance of the publisher making the same with each sale regardless of price. But then why should we give Hachette more say in what to price their product than Amazon, when both have an equal desire to make the most money possible? Maybe the model should change so that the publishers charge a fixed amount to Amazon for each title sold (although I suspect Amazon would object to that). One thing is for certain: authors should definitely have more clout here, but they don’t, alas.
John,
I’m always interested in your perspective on this, but you’ve gotten some legal questions here wrong, and in a way that I think kind of misleads your readers.
Amazon doesn’t have the right to unilaterally alter Kindle Direct contracts. It’s not naive to say that such a provision would be unenforceable, with or without an arbitration clause — it actually seems to be the position of Amazon’s lawyers as well, as they’ve gone to great lengths to get as close as they can to that state without crossing over into unenforceable contract land.
It might seem as though I’m arguing how many angels can dance on the head of a pin. After all, Amazon’s contract essentially requires authors to watch its terms like a hawk, and then take down their book within thirty days if there’s a change they dislike and don’t want to be bound by. But these little distinctions are, legally speaking, important.
Contracts aren’t magical. If someone is being screwed by a publisher — be it Amazon or a traditional publisher — they should at the very least consult a lawyer. They should never, ever assume that just because the other party wrote the contract that they have no recourse. Maybe, under the terms of the contract, they do. Or maybe the contract’s unenforceable. Maybe the arbitration clause is unenforceable. Maybe the arbitration clause IS enforceable, but the arbitrator finds fault with the publisher.
The reason people balk at paying the same – or more! – for e-books as physical books is because there’s an apparent choice to get one or the other, and the e-book comes with slightly less overhead and significantly fewer rights.
Your soda analogy doesn’t work because there’s no option to get that soda either with or without the cup and associated costs of production. In fact, it’s a bad analogy because most places offer the product for free once the production costs are handled.
This is a perception issue in that there is one product available in two formats and one costs less to produce, so people expect it to be cheaper.
Correct me if I am wrong — but the “cost” of a book isn’t fixed in time: it is highest at publication date and generally lower the further from that date. In the “old” days (IIRC) this had a lot to do with how royalties and contracts to authors were structured. Authors may have had an advance that publishers had to “make back” in hardcover sales within “x” number of days from the release date. Books would stay in print for a certain number of weeks or months, and unsold copies would be remaindered back to publishers (who then liquidated them — they ended up in the “former bestsellers” or “bargain bin” section at your favorite bookstore). If a book was particularly popular, there might be multiple printings and reissues in multiple formats (paperbacks, etc.). Now with e-books, publishers STILL need to make back advances paid to authors, and they still need to pay the royalties — but the “old” way of doing things (the initial printing of hardcovers covering the cost of acquiring the art/content) may be gone. IMHO, what publishers want is to still be able to do things the “old way” when it meets their business needs (big-name authors who have advances to make up). What Amazon wants is to make the transaction price less dependent on time from initial release (by setting a cap of $9.99 — one would presume $9.99 on the first day of release and still $9.99 30 or 45 days out).
I am all for Amazon selling whatever for whatever $ amount they want — but they are not involved in how much the content COST the publisher (how much the author is being paid). J.K. Rowling’s latest or the newest in the Da Vinci Code series may cost a publisher considerably MORE $$ than average (in advances, per-copy royalties, or % royalties), and copyediting a 1,000-page book costs more than a 400-page one. Amazon isn’t just saying that they want to sell ebooks for no more than $9.99; they are saying they want to pay no more than (whatever %) of $9.99 per copy sold. Amazon would be in a position of making their margin on every ebook sold, and publishers would be stuck with the lion’s share of risk and investment, not breaking even until the “x”th book is sold. I can totally see where publishers are coming from — and how Amazon is a disruptive force in the publishing industry. Do not forget, Amazon wants to make money on Kindle hardware, content, and merchandise from a to z. They want their customers to come to them for EVERYTHING. Driving down the cost of ebooks may be good for customers — but it is part of an overall strategy to get their customers to spend more at amazon.com (not just on ebooks). Anyway, that’s my $.02.
@John – When Amazon sells at a loss it is Amazon that takes the loss – not the publisher or author. They get paid at the contracted per-unit rate.
Interesting side note… I used to work at Borders HQ in the mid to late 90’s. I distinctly remember at several all staff meetings, the CEO gladly boasting that Amazon had yet another quarter with no profits, their venture capital would soon run out and show that online retail couldn’t turn a profit, and that brick and mortar stores were still the future.
Yeah, I think that was the guy they brought back from retirement because they liked him so much. That sure turned out well for Borders.
Mord Fiddle: yes – but that is what Amazon is trying to negotiate away for ebooks, Amazon wants to pay no more than a % of $9.99, ever
@jonjason That is largely true, but also why many argue that Hachette is looking out more for the interests of the big names, potentially at the expense of the majority of working authors. Of course, those big names have just as much right to look out for themselves as the small guys do. Only thing is there are far more small guys in the picture. If someone can provide compelling evidence to me that the midlist (and below) authors will likely make more financially under Hachette’s model (and I’m talking long-haul, not just in a given year), I’ll be more inclined to support them. Ultimately, as a fan of the arts, I’m in favor of whatever model gets the most money into the pockets of the most authors (not just the bestsellers). I am willing to sacrifice the profits of the top 1% of authors if it means more is left for the little guys. I guess that means I’m a socialist :)
This also leads us into discussions on the value of diversity and experimentation in the book industry, both of which are (in my opinion) unduly punished by the traditional publishing model.
n1quigley:
“Amazon is a retailer and retailers have always been able to set their own prices”
Well, no. Retailers frequently have contractual obligations on any number of points, sales price among them; likewise, distributors have contractual obligations to retailers (for example, not being able to offer another retailer a lower price for an item being one of them). You should assume that between two companies, and especially between two large corporations, everything that can be legally sorted out in their business relationship is, including prices. This is not new; it’s how business is done. Indeed, that negotiation is what’s going on right now between Amazon and Hachette.
KG:
“Amazon doesn’t have the right to unilaterally alter Kindle Direct contracts.”
However, Amazon quite evidently disagrees with you, since the Kindle Direct Publishing agreement says, rather unambiguously, “We reserve the right to change the terms of this Agreement at any time in our sole discretion.” I’m not sure that we can assume Amazon does not in fact mean what it is clearly and unambiguously stating in a legal document it requires everyone participating in its Kindle Direct Publishing program to agree to, although you could ask Amazon’s lawyers about that.
That Amazon can change its agreement, and the only recourse for authors is to take down their work (or not), means that, in fact, Amazon can unilaterally change the agreement. It’s nice that Amazon lets the authors have a window of time to decide whether to continue after that point, but of course Amazon could change that, too, unilaterally, if it wanted. So. Yeah.
(Note: I edited the above to desnark it a bit, since I was originally way more snarky than KG merited. Sorry about that, KG.)
Mord Fiddle:
“When Amazon sells at a loss it is Amazon that takes the loss – not the publisher or author.”
I’m not sure where I suggested that Amazon has done otherwise. However, even in those cases the publishers (or authors) may have reasons why they prefer not to have that done: if, for example, they believe Amazon is trying to crater the brick-and-mortar bookselling industry with deep discounts, eventually leaving publishers no choice in whom they deal with in terms of sales, or if they simply don’t want consumers to get used to the $9.99 price point for initial release, or whatever. At which point, again, it becomes part of negotiations.
My perspective (and stake) in this whole thing is somewhat different.
I live in Kenya. What that generally means is that if Scalzi releases a new book and I want a physical copy, I’ll have to order it through my favorite book store. It’ll take forever to get here, probably not until there’s a mass market paperback or a lot of people asking for it. Importing would be an option, but the shipping costs and taxes can bring the book’s price up to 50-70 dollars, so… no way.
To get around all this, many Kenyan bookworms are buying Kindles. There are three in this household alone. It’s a really elegant solution, even if it means fewer physical books around.
Here’s the problem.
I don’t necessarily think that ebooks should have a set price cap, but there really is something disturbing about paying more for an ebook than I would for a hardcover. Call it hypocritical, but I don’t think that’s particularly fair. It’s all made worse for me because I can’t even get the hardcover in the first place. So if someone has to get an ebook for whatever reason, they’re paying more, or the same, for less.
I could be wrong, but I think a large part of the draw of ebooks was portability and assumed savings in the long run. I’m not pro-Amazon’s plan, but I’m not sure I entirely agree with how things are now either.
Note: a hardcover being the same price as or cheaper than the ebook is not something I’ve ever seen with a John Scalzi book. And it is occurring a lot less across the board.
‘Likewise distributors have contractual obligations to retailers (for example, not being able to offer another retailer a lower price for an item being one of them).’
I’m aware that ‘correct me if I’m wrong, but…’ is most often shorthand for ‘I’m wrong, but…’, but… correct me if I’m wrong but didn’t the whole price-fixing settlement preclude MFN clauses for the near future?* And I do know, so don’t correct me, that the EU at least is looking at these again.
Of all the things that distress me in this whole saga, these no-compete, MFN things… aren’t they the very antithesis of free market competition? Especially when people smugly ask why publishers don’t just sell ebooks themselves cheaper without the retailer’s cut.
*Answer: Correct me if I’m reading this wrong but yes(?) – it was easier to find than I thought: ‘For at least five years, they may not enter into an agreement with an e-book retailer that includes a Price MFN. ‘
I think your idea of buying bottled water at the store counters your point. When you buy water at the store, you are paying for the brick, mortar and manpower of the store. You are paying for the transportation of the water through inefficient distribution systems, as well as the packaging of the water. It is much cheaper when distributed from a common, automated infrastructure at your tap. The same should be true of e-books.
This is sort of a tangential question, but given the changes in the industry, etc, lately, do you know if there are updated metrics on what makes a person buy a book? In 2002 at Clarion PNH said the leading factor was word of mouth, and I’ve been wondering if that’s still the case. (I mean, I don’t know what I think it could have changed to, but I’m in a curious mood.)
And for the person asking about reading non-Amazon books on a Kindle, it’s even possible to email books to a Kindle or the Kindle app; there’s an email address under the ‘manage my devices’ section on Amazon. I check ebooks out of the library for my mom and convert and email them to her with no trouble.
Kevgach
I’m having problems following your maths. You don’t buy hardcovers because it would cost you a great deal of money to get them shipped to your home, yet you say that the ebook costs you more than the hardcover does. This seems unlikely, to say the least.
Equally, there are plenty of ebook sellers who don’t require you to buy a Kindle; Amazon probably regards selling 3 kindles to your household as elegant but it’s completely unnecessary if all you want to do is to read ebooks. You seem to be propagating the myth that reading ebooks requires you to buy a kindle from Amazon, but it remains a myth, albeit one which Amazon does its best to foster…
No, agency pricing was not deemed illegal. What was deemed illegal was collusion between Apple and the publishers. Agency pricing has always been legal.
It’s called domain knowledge.
Also, too: This whole ‘lower prices is good for the consumer’ thing is just fine when it’s milk. Not necessarily so much when it’s something like books and you begin to damage the ecosystem that supplies good books.
I find it increasingly difficult to find good books on Amazon unless I already know what author I’m looking for, because I’m buried in a tsunami of crap and Amazon makes no attempt to distinguish any of it in any way, other than its recommendations (which are largely gamed, broken, or paid for anyway). The Amazon review system is horribly broken. I’ve encountered ‘books’ with 5-star ratings that would fail eighth-grade English class.
@John Fair enough. My understanding of that aspect of things is obviously simplistic. I admit to limited retail experience and to parroting the words of others in this instance. Of course, what you are saying seems less free-market than what I was saying, so… Additionally, even granting what you say, retailers still have final say on pricing in that if they don’t want to price something at a level demanded by a distributor, they can simply refuse to sell it. So again, why are Amazon’s actions any more anti-market than Hachette’s? At worst they are both just doing business as business is done. Not sure there is much to criticize, then.
In any case, that was far from my most compelling point on the issue. Nor is it a particularly important one, since personally I care only about which deal will most benefit all authors (bestsellers, midlisters, and the self-published alike). Anything that stifles creative competition is just as objectionable to me as something that stifles economic competition.
While I respect your stance, my main problem is that your arguments are almost entirely based on suppositions of what Amazon might do rather than what they have done or any actual numbers or evidence. Those agreeing with you (and you to some extent) also seem overly reluctant to criticize any actions by Hachette (or other big publishers). As you yourself have said, there are no sides here other than self-interest. I dunno. I’m not trying to pick a fight with you, I just want to get a more nuanced view of the issue and it feels like everyone is just talking past each other.
“Which is to say that we need to ask the question of why Amazon decided to use the $14.99 eBook price point.”
Because when the big publishers colluded with Apple to raise prices under the Agency model, $12.99 and $14.99 were the price points for ebooks. As Hachette wants to return to Agency pricing levels, Amazon is using them as an example for comparison.
John – ~$14.99 was the price the big five wished to set for all their e-books in the price-fixing scheme with Apple:
“Apple and the five publishers.”
From:
Rod Rubert:
“The same should be true of e-books.”
Why? When you are buying an ebook, you’re paying for the retailer as well, unless you think Amazon’s offices, people and servers manifested out of thin air. You’re also paying for the publisher (and, ahem, the author). Which makes the bottled water analogy rather on point, actually, at least as regards your formulation.
Although again, it’s neither here nor there with regard to how eBooks should be priced, which should be: at however much they will sell at.
Mord Fiddle:
“$14.99 was the price the big five wished to set as the price of all their e-books in the price fixing scheme with Apple”
Okay, that makes sense to me. Thanks. It still leaves open the question of whether it’s a useful metric for comparison, if many books are sold below that price point (but above $9.99).
@stevie
I mean, I can check on Amazon and see that the price of the hardcover is more than the ebook, which doesn’t make much sense to me. Or alternatively wait for a hardcover to be available and pay less. The general point is that the pricing system doesn’t make much sense to me. If the two are priced similarly, I’d rather get the hardcover, but since that’s not an option…

Kindles are just the easiest ereader to acquire here. And I’d rather not read on something with a backlight.
Two funny things on this. Number one is that the average cost of e-books in the U.S. is around $6-7. The majority of e-books are below $9.99 in price point, and certainly the majority of SFFH titles, which are mainly put out first in mass market paperback, are below that price point, with a popular price being $6.59 or lower, which is cheaper than average mass market paperbacks.
The e-books that are above $9.99 in price are a much smaller group of first run initial bestsellers that are coming out in print first as a hardcover and long running bestsellers which continue to sell well in hardcover. This is pretty much what is done with any highly desired, creative product — music albums, designer fashion, movies, latest editions of games, tech gadgets, etc — when they first come out, it’s a high price point. If you want them first, you pay the premium for it. And then the price goes down over time and you can get it cheaper if you wait — usually way cheaper than $9.99 since the publishers usually sync it with print paper edition prices.
But that initial premium for getting it fast helps cover the costs of e-book production — because there are actual staff and tech and resources costs to putting out thousands of e-books at once in multiple formats — and the costs of initially marketing and publicity/advertising of the e-book. And it doesn’t just fund the costs for the bestselling title — it funds it for the many more titles the publisher puts out that don’t have hardcover pub and have low e-book prices thereby and aren’t bestsellers. Because that’s how publishing works — they float the money they’ve got in to finance the whole list, which allows them to take chances on books and authors that aren’t bestsellers and may be entirely new. And the author gets more in the initial burst, after which the price comes down and you hope to continue to do well through volume at the lower price point.
So Amazon isn’t talking about most of Hachette’s international book list. They are talking about a tiny percentage of it: the newest or most successful and fairly recent of their bestselling authors, the ones where the price is highest (although still much less than the hardcover edition) for the premium of getting the e-book right away. But they’re pretending that it’s the entire book list, and that e-books should all be priced at $9.99 “or lower.” Which means, again, as Scalzi pointed out, that this would end up raising e-book prices overall, and the average would probably move upward. Amazon gets to have its wholesale price for first-run bestselling hardcovers (even though e-books aren’t a wholesale market; that’s print mass market paperback), raise prices for others in the market, and control the price of e-books in the market, as it does with self-published authors.
Second, Amazon natters on about how really they don’t want to increase their share, even though that’s been the news up till this moment about the negotiation. But they interestingly don’t mention anything about co-op advertising fees. Special placement, list placements, displays, etc., are things that Amazon usually gives good-selling self-published authors for free, because it suits their purposes, but they make the publishers pay for them. And the publishers have to pay the co-op costs on their bigger and mid-list books to get the orders at all; Amazon doesn’t let them opt out. So Amazon increases its cut through the fees, increasing the cost of e-book marketing and advertising, but without necessarily the premium in the initial price to cover that increase. That increases its revenues and also its control of the market.
This has, again, been exactly what Amazon has been negotiating for the last ten years, before there even was an e-book market and a Kindle. They’ve been squeezing publishers to pay more fees, as well as for price discounts and better return terms on print books, giving Amazon a bigger share of sales. Small presses, who may have up to 30% of their sales with Amazon, have been faced with having to sell on Amazon at a loss per sale or not at all. When the Independent Publishers Group, the distribution company for a host of small presses, was negotiating with Amazon on this, Amazon marked all the e-books from IPG as unavailable for sale, the same thing it’s done with Hachette and other companies.
So Amazon’s newest spiel is full of fake math. It might actually end up helping authors in their contract negotiations with their publishers over the e-book split, because they and their agents can use Amazon as a whine wedge effectively. (It will not, however, help self-pub authors with Amazon.) But Amazon’s wish list of price control and extra fees are not going to help authors and improve either the wholesale print or e-book markets.
Because here’s the thing: books don’t necessarily sell more at a lower price. If that were true, then every author put out in mass market paperback would outsell even bestselling hardcover authors, and they don’t. They can sell a high volume, but the wholesale market did collapse in the 1990s and has not fully recovered, so it’s a lot less guaranteed. That’s why the average author makes fewer sales than in the past; the grocery store market isn’t there much anymore. And bestsellers will sell a lot of e-books at the premium price, and then at the lower price later in conjunction with their paperback editions. Bestsellers already have volume. Amazon’s insistence that they forsake the premium on e-books doesn’t necessarily guarantee them more volume. And Amazon knows this. So again, fake math.
It’s a rather risky strategy, since Hachette can divulge a lot about what Amazon demanded. Be interesting to see what happens next.
I think Amazon should either settle with Hachette for a short-term contract, perhaps to the end of Hachette’s next fiscal year, allowing Hachette whatever it wants in terms of pricing. The result, I think, would certainly convince Hachette that Amazon was right about the optimal pricing of ebooks.
Or, since Hachette’s not happy with Amazon’s business model, perhaps Amazon should delist them.
Either way, I think the result would be about the same. Amazon would be doing about the same business. B&N would be doing much better. Hachette, though, and Hachette’s authors, I think they’d be either willing to give Amazon terms, or they’d be sold.
I’m a serious reader, and influence other serious readers, and I have paid more than $9.99 for only three ebooks this year. As for the others I might like to read, but which are now priced at $12.99 to $14.99, I will, as my much-missed former father-in-law put it, “let them age.”
I am not an expert on anything, but am just offering my point-of-view as one of your customers.
“Although again, it’s neither here nor there with regard to how eBook should be priced, which should be: at however much they will sell at.”
Isn’t that the crux of the argument though?
Hachette wants the book to sell at $14.99.
Amazon says sure. You’ll sell 100,000 copies at $14.99. But if you price your book at $9.99, you’ll sell 174,000 copies. So why wouldn’t you price it at $9.99?
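For what it’s worth, Amazon’s claim is just a revenue comparison, and the arithmetic is easy to check using Amazon’s own illustrative figures (100,000 copies at $14.99 versus 174,000 at $9.99; these are Amazon’s asserted numbers, not independent data):

```python
# Amazon's illustrative press-release figures: the same title selling
# 100,000 copies at $14.99 versus 174,000 copies at $9.99.
def revenue(price: float, copies: int) -> float:
    """Gross revenue for a single title at one price point."""
    return price * copies

high = revenue(14.99, 100_000)  # about $1,499,000
low = revenue(9.99, 174_000)    # about $1,738,260

print(f"$14.99 total: ${high:,.2f}")
print(f"$9.99 total:  ${low:,.2f}")
print(f"lift: {low / high - 1:.1%}")  # roughly 16% more gross revenue
```

Of course, as others in the thread point out, the 1.74x unit lift itself is asserted rather than demonstrated, and there is no reason to think it holds for every title or every price point in between.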
Amazon won’t be the major player forever.
“It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest.”
Adam Smith
TL;DR
I’m sure I’ve heard all your arguments from other sources. $9.99 vs. $14.99? Who wouldn’t take the $9.99 price?
It’s called domain knowledge.
So you are equating working in the publishing industry to expertise in mathematics, statistics, and logic? Clearly, both you and John are lacking in the logic domain.
Truthfully, I think John is smart enough that he knows his arguments go against the evidence and facts. John, and other authors like Preston, Turow, and Patterson aren’t letting real facts get in the way of their emotional pleas. They fear the reduction in power and relevancy of a system (namely traditional publishing) that has helped them reach their elite status.
John,
As I said, it’s up to you to decide what’s best for your career. It is always possible to throw out new scenarios ($16.35 pricing for six weeks, followed by $11.95, followed by $6.99, followed by $5.99, etc.). Amazon put out a press release, which is a form of communication not typically accompanied by 10 pages of charts and statistics. They picked a data point that seems to support their target pricing. Hachette is free to respond with data as well to counter this. It will be interesting to see if they are willing or able to do so.
Also, what Amazon is doing is called a monopsony. Courts aren’t as harsh with a monopsony as they are with a monopoly.
Andrew:
Yet again, because what Amazon asserts to be a general truth may not be true in a particular case, and because there are more price points between those two to consider.
Folks, we’re clearly at the point where people are coming in not having read the rest of the comments in the thread. Please do, so we can avoid needless repetition of points.
Daniel Knight:
“I think John is smart enough that he knows his arguments go against the evidence and facts.”
Run along now, please, Daniel.
Jay:
“Amazon doesn’t have the right to tell you or your publisher how to price your book. Neither you nor your publisher have the right to tell Amazon what they are willing to stock.”
However, I for one have never made the latter argument. Amazon appears to be making the former argument.
Andrew: Because Hachette may owe the author $X per copy (in the form of recouping an advance or as terms of a contract) for the first 100,000 copies, and selling at $9.99 is an unfavorable profit margin (maybe negative). The publisher may be OK with “letting it age” because they may owe a lesser percentage in royalties after the first 60 days or whatever. The point is that the cost to the publisher is not fixed (the per-unit cost in royalties to the author, the creation costs of the ebook, etc.), but Amazon wants to pay a fixed price.
Exactly what I’ve been trying to say, and much better stated.
John – The $14.99 price point is useful for comparison in that it represents the pub industry’s preferred price point and represents the anchor point for Hachette’s negotiating position.
@jonjason Well, perhaps if this becomes the new norm, enormous advances will become less common and there will be more money to go around for the less-known authors, because the publishers, even more afraid of a flop, will start hedging their bets. Plus, readers would then not be spending their entire budget just to buy this month’s Patterson. That assertion doesn’t seem any less likely than the hand-wringing doomsday scenarios being posited by some. Even the claim that publishers will try to make up the money on the low end is suspect, because it’s not like there aren’t already tons of new authors whose debut title is priced around $10 anyway. Of course, millionaire authors might lose out a bit in this scenario, and gross profits might even go down for the publishers, and they both have a right to complain about that, but I’m not going to feel bad for them.
Or maybe I’m wrong. It will be interesting indeed to see if Hachette produces its own data.
What’s the story on discounting? I used to hear that publishers were angry about Amazon selling at a discount and ‘devaluing’ titles in the process.
Now it seems like stopping the discounting policy is a thuggish practice that, ummmm…. anti-devalues the titles?
Mord Fiddle:
“The $14.99 price point is useful for comparison in that it represents the pub industry’s preferred price point”
Does it? As I’ve never been priced at that point, and I am just the sort of author who is a candidate for it — one with a committed and reasonably large fan base — I am unconvinced, at least based on my anecdotal experience. I certainly grant that at one time it may have been considered an ideal price point, but it does not follow that what was true in 2010 is true in 2014. I certainly can see how using that particular price point works for Amazon’s rhetorical purposes, however.
Q:
“Well, perhaps if this becomes the new norm, enormous advances will become less common and there will be more money to go around for the less known authors”
The first of these does not lead directly to the second, I’m afraid. What is rather more likely is that fewer authors will get very large advances, while the amount given to newer authors remains what it is.
Amazon is not telling Hachette how to price its books, it’s simply refusing to sell them at that price (or rather, requiring a bigger cut if they do). Hachette is free to go elsewhere with its business. Hell, maybe the best case scenario here is that they do leave Amazon and stoke some competition on other online retailers. I would be thrilled at that scenario, actually. More competition is never a bad thing.
Q:
“Amazon is not telling Hachette how to price its books, it’s simply refusing to sell them at that price”
Which is to say, it’s telling them how to price their books, and relying on its own market share to enforce the demand.
Please note I am not saying Amazon is not within its rights to do so; Amazon gets what Amazon negotiates for. But let’s not be coy about what’s going on, either.
John,
No worries, I only saw the desnarked version anyway. (I should be clear that I’d never, ever suggest signing a contract on the grounds that you think a provision you don’t like is unenforceable, even if the entire Supreme Court, Scalia and Thomas included, told you before hand that yeah, that would never hold up in court. I’m thinking of people already stuck in bad contracts.)
I realize in practical terms “submit to the new terms or take your book down” (or, even worse, “not even notice the new terms and be stuck with them after thirty days,”) might not seem any different from “Amazon has the power to unilaterally alter the contract,” but in legal terms there’s a real distinction. Legally, Amazon is defining a process to amend the contract, and stating what the author needs to do to accept or reject those changes. I know that probably sounds like my overactive lawyer brain attempting to justify my student debt, but it could easily flip the outcome of a case.
Yes, Amazon does state that they “reserve the right to change the terms of this Agreement at any time in our sole discretion.” And if that were in any way enforceable, Amazon would stop there. Instead, the contract proceeds to lay out a whole range of conditions and procedures for alterations to the contract that approach that level as closely as possible.
Why does Amazon include categorical language that sounds like they’re claiming an absolute right to modify the contract as they please when their actual power under the contract is less than that? Well, one, they want the author to believe it, and think they have no recourse if things go pear shaped, and two, it’s something to point to if authors ever complain that the more arcane provisions of the contract granting Amazon near omnipotence are misleading. (“How can you complain about the near omnipotence clause? Shoot, we claim total omnipotence in plain English in the third paragraph. There’s even a picture of a skull laughing!”)
I’m not trying to defend Amazon here — it’s a nasty set of provisions, and a naked power grab. I just hate to see people dismiss legal recourse out of hand as naive in these types of situations when sometimes it’s the least bad option. If I were to boil all of this down to a sentence, it would be “no matter the contract, never assume you’re screwed, always consult a lawyer.”
KG:
” If I were to boil all of this down to a sentence, it would be ‘no matter the contract, never assume you’re screwed, always consult a lawyer.’”
You and I have absolutely no disagreement on this point.
Several months ago I came across a study (of admittedly much smaller sample size) that a website selling RPG systems/books did of their sale prices, which factored in a much wider range of price points than just those two in an effort to determine, as Amazon seems to be doing, the pricing sweet spot (for their own best interests in both cases). Their results, if I recall correctly, identified two or three optimal price points depending on a number of factors (length of text, notability of title, intended age range, and so forth). I suspect, as Scalzi has suggested multiple times now, that the reality would be similar for non-RPG books as well, meaning it would be less than ideal to cap or to skew the average toward a single number via policy and contracts instead of allowing a more market-driven range, since authors could see tangible benefits at a number of price points.
Of course, I can’t actually be helpful and provide a link to said study, because for the life of me I can’t locate it again. But it is out there, and I suspect it could be informative in this instance, should someone be able to locate it (and share it, as my inability to find it is driving me a bit mad).
I think Amazon has a right to set the maximum price for the books it sells only if the publishers get to say what the maximum salary for Amazon’s CEO and board members should be.
John: I am personally rather uncharitable in my feelings about publishers and you may be right about the potential distribution of advance money. My point is mainly that both of us are just prognosticating. There is nothing particularly convincing about a guess. I really do hope we get more data soon.
@Arphaxad and @daringnovelist – No. Amazon does not care about its customers; it cares about its customers’ wallets, the contents thereof, and how it can convince its customers to transfer those contents to it. It’s a business.
Amazon pretending to care about anything but its own bottom line is disingenuous, and that is as true for what its PR department says in its mission statement as it is for any public statement about authors.
Personally, as a consumer, I don’t want any corporation dictating anything this broadly for an entire industry. I also sure as shit don’t want any corporation – especially one that deals with something I care about as much as I do literature and books – succeeding with anything that smells so strongly of an attempt at a monopoly. Among other things, it’s worth noting that a monopoly isn’t at all good for the consumer.
@ERose Of course the publishers tried the same thing except instead of lower prices they tried to artificially raise them. Point being, both sides are pretty scummy here, which is why I prefer to try and figure out which one will most benefit the largest number of authors, despite the fact that neither side really cares about them.
@MooGiGon – the best thing I’ve read about pricing of goods that cannot otherwise be distinguished between premium and normal and cheap – that is, information goods – is this (oh my god it’s so old now :( ) paper by Hal Varian – later of Google – (um, PDF warning?)
Key quote for me is right there on page 1 – ‘As we will see below, the point of versioning is to get the consumers to sort themselves into different groups according to their willingness to pay.’
Amazon’s approach (and, to be fair, they’re only copying Apple) seems the opposite of this, and indeed the opposite of more interesting e-retailers like pay-what-you-want Humble Bundle or find-a-reason-to-pay-$150 Kickstarter.
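Varian’s versioning point can be made concrete with a toy model (all numbers below are invented purely for illustration; nothing here comes from the paper or from actual book-sales data):

```python
# Hypothetical reader segments for one title: (willingness to pay, readers).
segments = {
    "buy-at-release": (25.00, 1_000),
    "typical":        (12.00, 5_000),
    "bargain":        (7.00, 10_000),
}

def single_price_revenue(price: float) -> float:
    """Everyone whose willingness to pay meets the price buys one copy."""
    return sum(price * n for wtp, n in segments.values() if wtp >= price)

# The best any one flat price can do across the candidate price points:
best_single = max(single_price_revenue(p) for p in (25.00, 12.00, 7.00))

# Two versions (premium edition at release, cheaper edition later) let the
# premium segment self-select into paying $25 while everyone else pays $7.
versioned = 25.00 * 1_000 + 7.00 * (5_000 + 10_000)

print(best_single)  # 112000.0 (charging $7 to all three segments)
print(versioned)    # 130000.0
```

Versioning beats any flat price precisely because it sorts readers by willingness to pay, which is the same mechanism the premium-priced release-window ebook relies on.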
“Does Hachette (or any other publisher or retailer) want to control its market as much as possible? Obviously they do — again, this in itself is in my opinion neither good nor bad in itself.”
Fair enough, though it’s worth mentioning that recent experience shows that while most people would read an implied “within the confines of the law” into that “as much as possible”, Hachette has demonstrated that they are not among them. Warnings from Hachette that Amazon is trying to gain monopoly pricing power are the equivalent of having someone with a rape conviction on their record say “Hey, baby, the guy you’re thinking about is skeevy-looking, why don’t we spend time together instead?”
Not entirely sure that would have been the comparison I would have gone for, there, Avatar.
@Avatar – this is not the first time Amazon has pulled exactly this move on Hachette, the last time being in 2008 (I think), before it lost its moral advantage through, um, price rape? I’m not sure how ‘well, if the US DOJ says it’s wrong…’ helps the analogy, which requires a bit more moral absolutism. Probably something involving Micro$oft would be better.
I think it’s useful to realize that domain knowledge helps against the GIGO problem.
Just sayin’….
To be clear, none of you attacking Amazon actually think Hachette has any kind of moral authority, right? At best, this is just a turd polishing competition.
Q:
It’s turd polishing all the way down!
Which, mind you, is what I’ve been telling people since, like, forever.
No. There’s a very old expression in computer science that applies here. Garbage in, garbage out. If you lack sufficient domain knowledge and all your math is based on faulty premises, you get garbage out.
I think you are obviously correct to view this as a PR statement rather than as an attempt to achieve something constructive.
From an economics point of view, I disagree with a couple of your points.
First, my suspicion is that publishers set ebook prices above the profit-maximizing price even when other media are taken into account. The rationale would be to sacrifice (or “invest”) some profits to protect their position in the industry. While I agree that publishers offer a lot of valuable services to authors and customers beyond printing, storage, returns, etc., of physical books, I also think they are less vulnerable to competition in a world where physical books are still central. Just my opinion, which, like other things, I have one of.
I also put more stock than you in Amazon’s assertion that reducing the price will boost sales and therefore profits. Not to the same degree for every book, but to some degree for the majority of books. I also disagree that, if ebook prices were limited to $9.99, publishers could recoup their losses by making $9.99 the standard and only price. In most cases, that would be cutting off their nose to spite their face. If an ebook costs $9.99, the people who are willing to pay $9.99 or more are going to buy it, and those that are not willing to do so won’t buy it. There may be some books where simply abandoning the “less than $9.99” customers might lead to higher overall profits, but given that publishers typically sell paperbacks for less than that I’m skeptical that that would be true for most books.
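[Ed.: The tradeoff argued in this comment reduces to simple break-even arithmetic. A minimal sketch, using the 100,000-at-$14.99 vs. 174,000-at-$9.99 unit figures from Amazon’s own public claim (cited elsewhere in this thread); whether that elasticity holds for any given book is exactly what is in dispute.]

```python
# Toy break-even check for the "$9.99 vs. $14.99" argument.
# The unit figures mirror Amazon's public claim and are illustrative,
# not real sales data for any actual book.

def breakeven_unit_multiplier(old_price: float, new_price: float) -> float:
    """How many times more units must sell at new_price for gross
    revenue to merely match what it was at old_price."""
    return old_price / new_price

def gross_revenue(price: float, units: int) -> float:
    """Gross revenue before any retailer/publisher/author split."""
    return price * units

# Break-even multiplier for a $14.99 -> $9.99 cut is about 1.50;
# Amazon's claimed 1.74x unit sales would clear that bar -- IF the
# claimed elasticity is accurate for the book in question.
needed = breakeven_unit_multiplier(14.99, 9.99)   # ~1.50
high = gross_revenue(14.99, 100_000)              # ~$1,499,000
low = gross_revenue(9.99, 174_000)                # ~$1,738,260
```

Note that this only compares gross revenue; it says nothing about how the pie is split, which is the other half of the dispute.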
On the other hand, I think you hit the nail on the head with probably the most important economic point: that it makes no sense whatsoever to put a universal price ceiling of $9.99 on all ebooks. That would be great for readers, but from a business point of view it is silly. There are probably many books for which the profit-maximizing price is indeed $9.99, or even less, but I would guess that, for best-sellers at the very least, it is higher than that.
What I would like to see is more creative ebook pricing – which Amazon actually leans against with its one-price-fits-all scheme. The standard model seems to be basically two price points (mimicking hardcover and paperback prices?) with occasional specials or discounts. Given the advantages inherent in electronic media, I would think there’s a way to improve on that in a way that both authors and readers would be happy with. Actually, I suspect that development of more appropriate pricing models might be a way for an upstart publisher to actually take on the big five.
Anyway, thanks for posting on this topic – you have the best commentary on these issues.
Hachette and Amazon seem like the couple that can’t agree on anything but for whom a divorce would be too expensive. They need each other too much. Each is free to leave the other, and yet they won’t, because each really needs the other financially for its business. This couple is two multinational corporations fighting for money – nothing more.
First, I’m not an author so whether or not Hachette’s position is better for authors, I am not qualified to judge.
Second, I see this as a battle between two giants. If I want a Hachette book, there are other sellers besides Amazon and as long as that remains true, it doesn’t matter to me that Amazon has made it more difficult to get a Hachette book from them.
Third, although I worry about Amazon’s monopoly power and am less than happy about many individual product decisions the company has made, as a customer I could not be happier. I have never, ever experienced anywhere near the positive level of service Amazon has given me from any other company in any field.
But speaking solely as a reader, I’m inclined to side with Amazon because I think e-book pricing is absurd.
I have a huge personal library of paperbacks (like, in 4 digits-worth). About 90% of those were bought in used-book stores, at half-price. Another 5% probably cost a dollar or less (via $1 bins, library sales, free curbside books). The authors got nothing from those sales. E-books, OTOH, unfortunately (for us buyers) can’t be loaned or re-sold. Indeed, we can’t even legally bequeath them to our heirs.
Now, I have no idea what the turnover rate on used books is. But I suspect that for some used-book buyers, the stores function as a sort-of paid lending library: one buys the book at half-price, sells it back, buys another at half-price, sells it back, etc. Who knows how many hands a hardback or paperback may pass through with only the first-sale copy generating revenue for the author?
I also don’t know at what price-point the typical used-book buyer would opt for an e-book if the used-book option were not available. Nor do I know what percent of “real” books end up in used-book stores or get recycled through them. But I sincerely doubt that frequenters of used-book stores are likely to buy a non-loanable, non-resaleable e-book for $14.99.
In short, I would think that authors would prefer e-book sales to real-book sales simply because each sale of the former puts money in their pockets whereas only the first sale of the latter does.
Lastly, and I don’t know how typical or atypical I may be: I was firmly a “no e-book ever” person until I got my first Fire. Then I discovered public domain e-books. Since my “real” books had long since flowed off shelves onto floors and tables and every other available surface, physical space for expansion had neared the breaking point. It made sense to me to stop buying any book in the public domain. Then I discovered the Kindle Daily Deals and other price cuts. My e-book library now numbers in the hundreds: lots of public domain, e-book versions (on sale) of real books I already own (because it is a lot easier to travel with a tablet than a library), and new purchases.
I am very price sensitive. I download samples even for books selling for .99. I have a constantly updated list of books I’d like to get when the price drops or they turn up in a used-book store. I bought your “Old Man’s War” (my first Scalzi book) as an on-sale e-book because it was actually less expensive than the paperback in my neighborhood used-book store. You no doubt made very little money on the sale – but you did get more than if I had bought the used paperback.
Thanks John, it’s been a while since I’ve checked up on the site, but I came here specifically because I just read an article about this announcement. I’m not a professional writer, and wasn’t sure what to make of the article. I knew you’d have something valuable to say.
“Yet again, because what Amazon asserts to be a general truth may not be true in a particular case, and because there are more price points between those two to consider.”
Right, all 500 of them. None of which apparently are going to lead to a higher sales ratio than by selling at $9.99.
Amazon’s been selling books and crunching sales data for 20 years. Obviously they must be telling tales out of school with these latest numbers. After all, it’s not in their best interest to tell their suppliers they could make more money if they priced a little less, is it?
Because after all, none of the major publishing houses have been in business longer than 20 years!
On the subject of books being interchangeable: They aren’t fully interchangeable (like perhaps soap), but they are FAR more interchangeable than you give credit for. Your most loyal fans will certainly buy on day one, and damn the price, but for a lot of people, if I’ve got money to spend, then I have lots of good choices – Scalzi’s latest at full rate? Or perhaps an older Stross that I didn’t get to yet at significantly less.
The point I think I hear you making is that you and your publisher should be deciding these things, not Amazon, and on that we agree, but I don’t think you can dismiss the theory that books are interchangeable at all – there’s LOTS of good writers, and VERY few readers who only read the books of one writer.
I must admit that there’s a sort of naive part of me that has the knee-jerk reaction that lower prices are good for the consumer. But while there’s some truth to that, it is far from the complete truth. Imagine if Amazon managed to get all the books in the world available for a buck. If that were the case, would Scalzi be able to keep writing books for a living? I’m guessing the answer is no. Would the authors of technical manuals with an audience in the hundreds be able to keep producing updated versions if they sold for $20 instead of $200? Again, probably not.
The market isn’t magically rational. What seems like a good idea in the short term, can cause harm in the long run. Consider the music industry – there are far fewer new bands being carefully groomed for success, and/or allowed to go their own way, because the industry can no longer support it. The labels can’t afford to let a dozen bands have a couple of albums to do their own thing, hoping to get another Pink Floyd or Led Zeppelin. Instead they have to turn artists into income quickly.
So I’m wary about any argument that turns on lower prices being good for the consumer. The goose laying its golden eggs ever more quickly sounds like a great idea, until it dies of exhaustion.
You can’t go far wrong, if you think that they are all out to get you :)
That said, I prefer SmashWords. Amazon? It’s there. I don’t buy from them though, and I know a lot of people who won’t touch them either.
Wayne
In retrospect that analogy is much creepier than I intended. Mea culpa.
I don’t think it’s a moral issue either way. Certainly as a consumer I’m nominally in favor of things which let me purchase things I want for less money.
In addition, I can also see how an established author can definitely be against this idea. For one, even assuming that the publishing houses were happy to increase their royalty percentages, that money’s going to come from somewhere; the obvious first candidate is the advances they pay. (Indeed, in the self-publishing model the compensation is all royalty and no advance; in effect a negative advance, since you incur the production costs before you start selling at all.) If a significant portion of your income is coming from advances, you’re going to be very leery about plans which would result in a reduction in advances. It’s a paradigm that shifts risk away from the publishing houses and toward authors; fine if you are a new author who can’t command a fat advance, but absolutely the suck if you are an established author who can.
(Not claiming that you’re just taking this position just because you like big advance checks! Nothing wrong with liking big advance checks, either. Just sayin’, as the argument is that both Amazon and Hachette are pushing their preferred narrative for their own economic benefit, I don’t think anyone in this conversation is taking a position where they end up worse off in their own estimation, myself included.)
Also, I mean… I’m not a professional author, but I’ve heard many, many horror stories about the unreliability of royalty payments and the questionable accounting practices that are sometimes involved. So even if you were convinced that your personal pie could get bigger in the abstract, I don’t know that “but you’ll make it up in royalties!” is a convincing argument to people who have doubtless heard a lot more about it than I. I can buy that an advance check you can cash today is better than a royalty you might never actually collect…
“Because after all, none of the major publishing houses have been in business longer than 20 years!”
From an industry that can only tell its contractors how much they’ve earned twice a year, and even then they get the math wrong sometimes, that’s not exactly a ringing endorsement.
Thanks for the clear thinking about Amazon. More good information came out of the NYPL panel discussion on July 1st, Amazon, Business as Usual. Also lots of up-to-date news bits in Shelf Awareness. My local bookstore is my shop of choice. There ought to be room for both Amazon and independents. Amazon looks at books and their production and marketing as nothing more than nuts and bolts. But even nuts and bolts are not the same, some are better than others. I don’t want just one stop shopping for books, not the ones I buy, or the ones we review.
Andrew:
No, Hachette does not. Hachette has e-books at all different prices. Most of Hachette’s e-books are priced BELOW $9.99. Some are priced in the $11 range and the A slot lead bestsellers coming out in hardcover are priced at $14.99, some ten bucks below the hardcover price, and then the prices decrease over time. Amazon’s plans would actually end up RAISING the prices of a lot of e-books that are cheap now, which would be not great for lesser known authors. Having one price for e-books is a disaster for the market. And bestsellers, including in e-book form, have higher costs for marketing and publicity that Amazon doesn’t have to pay — publishers do.
Also, again, Amazon can’t guarantee that you would sell 174,000 copies at $9.99. They are making up figures. For e-book buyers who are price conscious, $9.99 is too high. They want $2.99. Or at least $6.49, the price a lot of e-books are sold at. So there is no potential explosion in sales for that $9.99 price point. And other readers are not price conscious. They are willing to pay for the book in the formats they want — hardcover, e-book, audio. And they are willing to pay more to not have to wait to have it. If you’re willing to wait a bit, your local library will let you read a bestseller for free, including now often in e-book form. So this isn’t just about price per unit, nor is it a simplified $14.99 vs. $9.99 issue. We’re talking about thousands of e-books priced at $6.49 — and Amazon wants more money in co-op fees for those too.
Q —
I’m not actually attacking Amazon. I’m just pointing out the actual realities of the case, versus Amazon’s spin on it. You’ve got a lot of people who know squat about actual production costs and the book publishing industry making up scenarios and trying to pretend that it’s a victims-and-villains situation, and it’s not. It’s a business negotiation and there are a lot of factors involved. Nor is it simply two mean corporations trying to screw everyone out of a dollar, either. The melodrama around this very prosaic and average dispute is just off the charts.
Michael —
Again, we already HAVE creative e-book pricing. We have lots of different prices for e-books and e-book prices for bestsellers drop over time. That actually does not help Amazon out. They want to have more control of e-book prices for the entire market.
I love these assertions that believe that a drop to $9.99 for bestsellers will somehow be a magic selling tool, not only for the bestsellers, but for the entire industry, most of which are not selling anywhere near that price point. Publishers have been selling bestsellers at a range of prices in different editions and mediums for over a century. And as such they know that price is not the be all and end all, especially now when the wholesale market is smaller. Books are a luxury item bought by a small portion of the population on the more affluent side of the scale. (That’s one of the reasons that Amazon picked books as its gateway product — to collect data on educated, affluent shoppers to whom they could then sell more expensive products.)
Books are also unusual in that we have publicly funded libraries that loan books out, especially bestsellers. If price is your be all and end all in whether you will try a book? Free books online and free books from the library is what you use. You never have to buy a book at all. And yet, people do. That’s a particular type of market. Which is why Amazon can’t actually guarantee squat. A lot of people like discounts, sure, particularly on bestsellers — which is why the fact that an e-book is less than a hardcover does well. But drop the price only to $9.99 and the increase in volume will be very uneven, depending on the bestseller. And e-books still make up less of the market than print.
Michael Kohne:
Yes, the core fans will buy the premium price of a hardcover to have a permanent copy right away. (Some will even buy leather bound special limited editions with special artwork for much more.) Other fans go, no, hardcover is too expensive, but I want it soon, so I’ll buy an audio or e-book version at the lower price. Other fans go, too expensive, I don’t need it now, I’ll wait for a paperback edition or the e-book price to drop. So you might buy the older Stross instead of the newest Stross, which is more expensive, and wait for Scalzi’s newest to become older and buy it then when it’s cheaper and no longer his newest. The publisher’s job is to get everybody at the different price points they are willing to pay. Everybody is not just you. It’s about what the consumer values the book at — not all consumers value creative products at the same price.
Wayne:
I do and I don’t. A decent company, but they also lied in their own PR spin about the publishing industry to self-publishing authors, so I don’t particularly trust them either. But I might buy books from them anyway.
Drew:
1) Authors aren’t contractors of publishers. They do not work for publishers. Publishers are the contractors of authors who issue the publishers a license as business partners. 2) You clearly know nothing about how vendors like Amazon get to pay publishers monies and in particular the returns system that applies to over 70% of the market, including Amazon’s print sales. 3) Amazon not only gets math wrong sometimes, but they’re so secretive, there’s considerable concern over whether they are actually reporting all e-book sales and whether they are inflating and exaggerating e-book sales that then disappear when revenues are sent. There’s a lot of concern over the whole accounting system. But that’s also a normal part of business too.
The reality is that Amazon doesn’t give a crap about the health of the publishing market. If it collapsed, they’d just dump books — including the self-published ones, and go on selling the stuff that actually makes them a lot of money. And that’s fine — they are a department store. But publishers are actually focused on the health of the publishing market so that there continue to be books. That doesn’t make Amazon the villain, but it certainly doesn’t make Amazon a hero for authors or book buyers. It makes them expedient. And what’s expedient for Amazon is not necessarily helpful to authors or publishers, the growth of the book industry, or even customer service.
Amazon deserves to get paid well, given its large role in the marketplace. The question is whether now it wants to be paid more than that particular marketplace can afford, even given its role. If you can’t deal with that issue rationally, at least stop throwing out clueless statements about how the industry operates.
You’ve got a lot of people who know squat about actual production costs and the book publishing industry making up scenarios…
And they’ll keep on doing that, even after people inside the industry are telling them they got the facts wrong.
Hm. Wonder why they’d do that….
One thing my spouse noted was that the contract negotiations are probably under an NDA. So Amazon can say all it wants about what it thinks ebook rates should be — and Hachette actually can’t say, “Well, actually, what you’re trying to get is…” (E.g., out of my imagination, 50% of ebook prices and 60% of paper books.)
Amazon isn’t saying this is what they’re fighting for, are they? They’re saying “This is what we think is right.”
(My biases? I’m mostly self-published. I think Hachette is a buffer-state between the Zon Empire and my small barony. In this case… I think it’s in my best interests for Amazon to blink first. Besides, if big publishers want to price their books at whatever, then I can do the “unknown, but an inexpensive risk” pricing on mine, and with my lower overhead, I do well with this tactic. I also think it’s in my best interests for at least some people to buy my books at places other than Amazon, so that if Amazon’s terms ever change in a way I find unacceptable, I can walk without dropping my book income to $200 a year from $200 (or more, depending) a month. Competition keeps Amazon more useful to me.)
I had something else I was going to say, but I need to get my kid to sleep.
Amazon almost certainly knows way more than the publishing houses about markets and price elasticity. A market is a market is a market, and it doesn’t matter if it is widgets, books, or TVs; they all work the same way, and the main key into them is lots of data, and math.
Data Amazon has both access to and world-renowned expertise with. While the publishing houses mostly still keep their data on punch cards and are frankly lucky to manage to pay people on time. This coming from a guy who worked with Simon and Schuster on their analytic systems (shudder) in addition to some hands-on experience with Amazon and Facebook’s internal analytics. It’s space shuttles vs. biplanes there.
That doesn’t necessarily mean Amazon isn’t lying or misrepresenting, of course; it just means the publishers are throwing rocks at tanks when they try to compete with Amazon, and my money is on the tanks.
Also, I really doubt John’s domain knowledge, of which he doubtless has a ton, comes into play much here. I would be surprised if he was generally involved in pricing his books or dealing with the market mechanics; that is not his job, any more than an airplane pilot is an expert on maintenance of jet engines.
With regards to the relative production price of an ebook vs. a physical book, it is not just the physical manufacture and shipping that play in; it is also the relative cost of the retail vs. e-retail system. Stocking and selling ebooks is way, way cheaper than having to pay for a physical store. The cost of stocking and selling a single ebook is so close to zero as to be effectively zero.
Also, this whole authors-against-Amazon thing reminds me of Metallica vs. Napster.
Heh. And they do it again.
John, thanks for sharing your thoughts with the world, I’ve read Preston and Russo’s thoughts, as well as Eisler and Konrath, and it’s nice to see a point of view with more of a legacy slant that isn’t complete nonsense. It would definitely be nice to see more actual discussion on the topic… although it looks a little like you’ve already made up your mind, based on your replies here.
I agree with you that Amazon has Amazon’s best interests at heart, and it’s going to use data points that support its own argument best… but those data points are consistent with other peoples’ figures, where people have published their own findings – at the moment, there seems to be MUCH more money to be made at lower price points than higher.
Regarding your thoughts…
1. Eisler and Konrath have already discussed the “Amazon is killing the bricks-and-mortar bookstore” idea, so I’ll point you at them.
2. It’s not just Amazon saying this. Everyone who has released their figures seems to come to the same conclusion (if not the same figures), so unless you know and are willing to release any data to the contrary, “I’m not sure about these conclusions” just looks like being contrary.
3. This hasn’t happened in any market, and seems like a pure straw man. Amazon wants publishers to run with a $9.99 standard list price because that’s the price that will make Amazon the most money. Based on the same logic, it’s also the price that will make the publisher/author the most money, because it will bring in the highest gross. There’s absolutely no indication that this is a flat price suggestion, though; Amazon internally recommends prices throughout the $2.99-$9.99 range, so I can’t see why you think they’re forcing a single price?
Also, without visibility of the actual negotiations, it’s impossible to say what the concrete bargaining positions of either Amazon or Hachette are. Based on what we CAN see though, Amazon’s dealings with KDP authors seems mostly fair, and Hachette’s dealings with its own authors seems mostly unfair, so don’t mind me if I extrapolate from there.
4. I really think you’re reading too much into this one. This reads like a pure throwaway PR point. “Hey, we give KDP authors the full 70% which would cover the normal publisher’s costs too. Wouldn’t it be nice if publishers shared more of that pie with their authors?” Amazon explicitly states “We believe Hachette is sharing too small a portion with the author today, but ultimately that is not our call.”, so I don’t see where you’re getting “letting Amazon also attempt to set what that percentage should be” from. This is a war for our hearts, and Amazon is trying to win us over.
5. You could say that it was Hachette that set this tone, since they were the ones who accused Amazon of harming the poor authors in this whole hostile renegotiation. If Amazon feels like it needs to demonstrate that it cares about the authors, it’s because of a bunch of venues calling Amazon out for not thinking about the authors. Or to put it another way, with Hachette and the Authors Guild (no apostrophe – authors don’t own the guild, publishers do!) and the new Authors United all claiming that Amazon is harming authors with these negotiations, what would the reaction be if Amazon DIDN’T make counter-claims and demonstrate why they care about authors?
@gwangung “Heh. And they do it again.”
read: “Your opinions are wrong because reasons. So you’re wrong.”
After this next masterpiece (Locked In) do you have plans to finish the story started in The Human Division?
Yes, clearly Amazon knows more about book marketing than publishing houses, some of whom have been doing this for more than a century. How could they /possibly/ know anything about it? They aren’t the latest hotness.
The amount of willful /deliberate/ misunderstanding that goes on over this issue confounds me, and I continue to be amazed that Kat has the patience and the grace to explain it over and over and over.
I would argue that if brick and mortar bookstores’ or publishers’ business models are that dependent on the $25-30+ Hardcover, they’re already in deep doo-doo, because this genie is well and truly out of the bottle. Luckily for them both, the $8-10 Paperback is actually competitively priced vis-a-vis e-books with the value add of being a tangible, physical good, and with a little effort I suspect that the $12-20 TPB could easily supplant the Hardcover as the “lead” version of paper books.
Heck, if not for the desperate efforts of publishers, Hardcover would have an even smaller market share than it already does. Can you imagine if for the first year or two of a new video game or DVD release, the only version available was the deluxe-ultra-mega-collector’s edition that cost 250-400% what the standard sku will retail for…only without any of the extra features except for a fancier, sturdier, longer-lasting box?
Finally, while books are not -perfect- substitutes, they absolutely ARE substitutable in the modern market to some degree. 1,000 people going into a bookstore wanting John Ringo might not settle for J.D. Robb (well, aside from a few people I know), but I doubt even 300 of them are leaving the bookstore empty handed. They’re walking out with books from David Weber, Walter Jon Williams, Lois McMaster Bujold, H. Paul Honsinger, Jack Campbell/John G. Hemry, David Drake, James S.A. Corey, Elizabeth Moon, Rachel Bach, Neal Asher, Hannu Rajaniemi, Michael Z. Williamson, Kameron Hurley, or maybe even something by a Mr. Scalzi…
…and that’s JUST from authors whose newly published books I’ve picked off the shelf at a bookstore or from a website in the past few years, almost all of them having examples in a specific SUB-GENRE of what mainstream publishing still considers to be a lesser “genre”. When we start to look at older works by current authors or the greats of decades past, made easily available by digital sales and/or web stores, “spoiled for choice” doesn’t -begin- to cover it.
I live in a country where mass market paperbacks are routinely priced between $20 and $25. Ebooks were amazingly cheap to begin with, and then the market for readers took off and the publishers ramped up the prices. Our ebook prices are significantly higher than those in the US. The argument for higher prices has always been higher taxes, but books purchased through Amazon (with geolocation) are not paying any tax. Yet still the price charged is only marginally less than the physical version.
Amazon and the publishers are doing this. Not a fan of either. And Hachette? One of the worst offenders. What the publishers and amazon are looking for is the sweet spot of what consumers will pay. It has little to do with actual costs to produce the book.
And while readers for a particular book/series/author may be willing to pay higher prices, the overall reader community is very price sensitive. Unknown books are a fairly interchangeable quality. I am far more likely to take a chance on a new author if the cost is less than $5. And I will buy those alternate authors if the one I like is selling their newest book for $20 as an ebook.
I stopped buying Hachette books about three years ago, as a result of what I considered egregious overpricing of backlist by a favourite author and a rather frustrating refusal to publish an author I liked in ebook format in my region (they owned the rights to ebook publishing in the region and chose not to publish). I was sufficiently frustrated by this to find a list of all their imprints and not to buy from them. Yes, this is sometimes irritating when an author I like is published through them, but there are plenty of books out there.
“Q:
It’s turd polishing all the way down!
Which, mind you, is what I’ve been telling people since, like, forever.”
John – So help me understand why you’re defending the status quo so aggressively – a status quo in which publishers collude to fix author royalties at 15%, and in which e-books earn you ~$1.65 less per sale and them ~$2.50 more than p-books. All Hachette has to do is weep a few crocodile tears for authors and the literary culture, and authors who’ve had success in the system grab torches and pitchforks and march to sow Amazon’s corporate HQ with salt.
I’ll be honest: as Mr Troy points out above, your post comes off as contrariness and Amazon bashing. Sure, Amazon isn’t in it for the authors, and I don’t think they’re saying that. I hear them saying that their model is better for authors than Hachette’s. And the numbers I’ve seen from Amazon and elsewhere support that. I’ve yet to see a single market analysis from Hachette or anyone else that suggests Amazon’s wrong. It’s all crossed arms and knee-jerk suspicion because it’s Amazon.
As you’ve pointed out, Hachette is every bit as much in it for their own interests as Amazon and would burn authors in a heartbeat if it’s in the interest of their bottom line. So why Amazon gets you in such a lather is a bit of a mystery. Maybe it’s because, having gotten inside the gates of publishing’s Mingo City, you’re more invested in Emperor Ming than that upstart Emperor Palpatine (see Popham’s ‘Down and Out in Mingo City’). I honestly don’t know.
As you’ve previously advocated approaching this fracas as a matter of business, I’m puzzled as to why you’re suddenly picking sides in the holy war.
“Yes, clearly Amazon knows more about book marketing than publishing houses, some of whom have been doing this for more than a century. How could they /possibly/ know anything about it? They aren’t the latest hotness.”
If you think this is a fight about different views on who knows how to sell more books, you are sadly mistaken. For Amazon it is; they believe Amazon, publishers and authors can make more by hitting a sweet spot on the price-profit curve. Self interest, yes, but with a flow-on effect that benefits both publishers and authors.
But for Hachette this isn’t about profit; they want to control ebook prices to protect their position in print books. That’s why they colluded to fix ebooks at a higher price. It’s a rational decision for them, and here’s why:
Publishers can’t let ebooks supplant paper because almost all their promotion is based on paper and bookstores. That promotion is what pushes their ebook sales. If ebooks take away their paper sales, they also lose the promotion that pushes sales of ebooks. If that happens, they have a bunch of eBooks bobbing in the tsunami with all the rest, and they’re begging Amazon for promotional spots because they don’t know how to market directly to consumers.
Without paper, publishers do not have a competitive advantage. Their protection of paper is rational.
Publisher marketing doesn’t really drive sales in an online world where web-based co-op spending is far less effective than retailer algorithms tuned to showcase to each customer the books they are likely to actually want.
Bookstores: Publisher co-op determines what sells. Publisher marketing “makes” bestsellers. Reduced visibility (and enforced scarcity) of midlist/backlist titles keeps sales concentrated in the premium-priced frontlist.
Online Retailers: Customer preference determines what sells. Retailer algorithms “make” bestsellers. Limitless inventory ensures that the long tail continues to sell, even if that competes with the premium-priced frontlist.
For Hachette, this isn’t about profit. It’s never been about profit.
Hachette simply wants to maintain control. And they have no qualms about doing this even if it means less money for authors.
Both Amazon and Hachette are in this for their own self interest, but only one is screwing authors.
Mord Fiddle:
“I’m puzzled as to why you’re suddenly picking sides in the holy war.”
A negotiation between two very large corporations is somehow a holy war now? Well, see, that’s your problem right there.
And it is a problem, because it points to the stupid, short-sighted, rabble-rousing aspect of the public messaging around this negotiation. Please get it into your head: People who are thinking about this as sides in any other sense but Amazon and Hachette’s very specific negotiations, where the two of them are representing their own interests to each other, are, in my opinion, being more than a little dramatic, and need to go sit in a corner for a time out. That’s the most polite way I can put it.
Likewise, criticizing Amazon’s messaging does not a priori mean anything regarding my thoughts and opinions on Hachette in a general sense; it simply means I am criticizing Amazon’s messaging. I have said time and again that I have very positive relations with Amazon, and some negative ones; I have said time and again that I have very positive relations with publishers I work with, and some negative ones. I have criticized “traditional” publishers at least as harshly as I am criticizing Amazon now. Do I have a bias? I do: It’s against writers (particularly me) being fucked over by, or being manipulated by, companies that fundamentally do not give a shit about them except for their particular use. Amazon’s the one who’s been posturing publicly on that score, so guess what? That’s where my attention is at the moment.
That you (and it appears others) have assumed my criticism of Amazon’s messaging means I am “picking sides in a holy war,” says rather more about your, frankly, woefully simplistic and binary view of this event than it says about what I am saying. You are welcome to have it and it’s clear you do, but I’m not obliged to subscribe to such a view, and I don’t. This is not in fact a holy war. It’s two large corporations, one with a very small profit margin and one with currently no profits, trying to find a way to make money off of each other, both of them throwing off chaff in the form of public messaging — and that public messaging — all of it, from both of them — is to be viewed with suspicion, particularly as it regards authors’ interests.
I hope I am being sufficiently clear on what I think about “sides” and “holy wars” here.
Right, all 500 of them. None of which apparently are going to lead to a higher sales ratio than by selling at $9.99.
Minor pedantic correction: none of those leads to a higher increase in sales than does the change from $14.99 to $9.99. Moreover, sales might actually decrease over the long run for an ebook repriced from $12.99 to $9.99, since the lower price point could lead to further downward pressure on prices (i.e., lower long-term sales volume and lower profits). In addition, those sales ratios are extremely sensitive to the pricing points; “cherry picking” doesn’t begin to describe it.
To say that pricing is fiendishly complex would be understating the matter.
This is all very concerning and I appreciate your post and analysis. Your mathematical points are spot on and are something that anyone who has taken a statistics course should be aware of (but apparently is not).
I have fiction and nonfiction (mathematics in fact) published in the Kindle Direct market and whereas I am fine with my individual fiction titles being priced under $9.99 and individual sections of the nonfiction being priced low also, my 5 year plan includes compilations and $9.99 is just too low.
John –
You reflect my point precisely. This is all about technological change driving a business paradigm shift. It is not an either-or choice between a genteel literary culture and writers as sharecroppers.
If I and others have expressed dismay at your apparent choosing of sides, it’s because that is how your column comes off. Accusing me of being “woefully simplistic” and “binary” does not change the way your column is generally perceived. It’s rather like sitting in a writing workshop and telling the people providing feedback that it’s their fault for not getting from the story what you intended. Sort of ‘I wrote what I meant, and if you didn’t read it the way I meant it, it’s because you’re a bunch of stupid-heads.’
I respect you as a writer and have a high regard for the nuanced thinking you often bring to your blog. In this case, however, you come off as taking sides – I’m sorry if I offend you by saying so, but that’s how this post reads to me and I find it puzzling. Calling me names for expressing that opinion in a civil manner is unkind at the very least.
Like you, I am not on a side so much as acknowledging the publishing paradigm shift happening under traditional publishing’s feet, trying to figure out where it’s going and what the publishers of the future will look like. I truly wish the discussion in the author community generated less heat and more light.
I just want to make a quick rebuttal on point 2, which is not quite what price elasticity suggests. They’re not saying books are fungible, and that every book is equivalent to another at the same price point. What they are saying is, for the average book, if you sell 100 copies at $14.99, you would have sold 174 at $9.99. Now, granted, this is based on aggregated data, so it’s entirely possible that the sales ratio of _your_ book is not 1.74, and maybe one should conduct this analysis (which prompts the question: what was the increase in books sold for you, Mr. Scalzi, when you dropped the price to $2.99 for a day? I’m not being snarky; I just want to know if you have some data to share and can calculate your own price elasticity. If you can run the experiment, you probably should, and then you could figure out your optimal pricing strategy.). However, their point stands; for the average author, a reduction in price results in an increase in sales.
JohnD brings a relevant point though, in which we only have a single piece of data to go with, which is hard to apply to other books with different prices, and with no assurance that 9.99 is the optimal point, on average. I doubt the
Here’s the real question…
If Amazon can sell more at 9.99, why not reduce the price and take it out of THEIR part of the profit?
As long as the author doesn’t get less, no harm no foul.
… somehow clicked on post instead of preview.
My last point was that I doubt the average consumer cares about all of this, however, because if Amazon can bring the prices down, the consumer will jump on it, and frankly, I’m sure the average person is more interested in books from the big sellers (i.e., better-known authors who can charge $14.99) than unknown works at $6.99, which makes Amazon’s battle a win for them.
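For anyone who actually wants to run the pricing experiment suggested above, a back-of-envelope calculation is straightforward. The sketch below uses the standard midpoint (arc) elasticity formula and entirely hypothetical sales figures — no real Scalzi numbers are known here:

```python
def arc_elasticity(p1, q1, p2, q2):
    """Midpoint (arc) price elasticity of demand from two
    observed (price, units sold) points."""
    dq = (q2 - q1) / ((q1 + q2) / 2)  # relative change in quantity
    dp = (p2 - p1) / ((p1 + p2) / 2)  # relative change in price
    return dq / dp

# Entirely hypothetical numbers: 100 copies/day at the regular $9.99
# price, 300 copies on a one-day $2.99 sale.
e = arc_elasticity(9.99, 100, 2.99, 300)
print(round(e, 2))  # about -0.93: demand rises as price falls
```

An elasticity more negative than -1 would mean a price cut raises total revenue; between -1 and 0 (as in these made-up numbers) it lowers it. With a single sale day this is at best a rough estimate, which is precisely why the commenter above asks for real data.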
@Charlie Stross:
Why are those binding arbitration clauses EVEN LEGAL?
Because it’s two parties to a contract mutually agreeing to waive their right to deal with any dispute in the court system in favor of arbitration. There are all kinds of things that people can agree to give up via contract; in the US, this is one of them. The assumption is that unless there is some really good policy reason, or it violates the law, people can make whatever agreements they both want.
It’s not, as @Brian would like to believe, some reaction to class action lawsuits (which, contrary to his comment, are becoming more restricted these days). It is in fact the ‘rational tort reform’ he praises: mandatory arbitration shifts cases out of the public arena, where there are fixed procedures and protections and judges have no interest whatsoever in the outcome of a case, to a privileged (literally: private law) system with its own rules. Arbitration is generally more expensive for anyone who isn’t a large company, and even if the arbitrators are not openly biased towards one side, as for-profit companies they have a very obvious interest in finding in favor of repeat customers. Those would be the corporations writing arbitration clauses into their contracts, not the individual authors on the other side.
John Scalzi said:
So the term “average” means something different in “your” field? Are you applying the special snowflake argument that in publishing everything works differently (even math)?
Rather than address my actual points you have simply attacked my credentials. As others have pointed out your particular credentials of being a published author have no relevance in understanding price elasticity of books, and even less relevance in your ability to evaluate statistical data. How often do you get to set the price of your traditionally published works? For your self-published works, have you done extensive pricing experiments with the $14.99 and $9.99 price points? Even if you did, do you think your experience as a single author is valid for the entire mass of authors in the world?
I prefer to debate purely based on evidence and logic, but you decided to bring into question my credentials as the primary reason for questioning my conclusions and refutation of your points. Can I take it that if I provide you with proof of my credentials (since that seems to be the most important criterion to you in evaluating the validity of an argument), you will then make a new blog post stating that each of your points was wrong for the reasons I stated?
JS–
Killing off Amazon’s competitors is good for Amazon; there’s rather less of an argument that it’s good for anyone else.
If publishers sustain prices for ebooks closer to hardcover and other printed books, at say $14.99, to benefit retailers of books, it harms the consumers, who will be paying a price premium that they otherwise wouldn’t under the Amazon $9.99 model. A reader’s money buys 50% more titles at the lower price than at the higher one.
From the author/publisher standpoint, your position makes quite a bit of sense. But for the vast majority of people who are not authors or publishers, Amazon is doing a much better job arguing for lower prices than for higher prices.
In the end authors and publishers have the ultimate trump card – you can always opt out of Amazon completely. And then the theory of author/book non-fungibility can be put to the test. And also you can see how well retailers can hold up. If you went exclusively non-Amazon, it might start a huge trend that would be a big benefit to traditional booksellers.
We have half of one quarter of reduced Hachette shipments in, and by the way, Amazon still sold more units of both books and ebooks than in the pre-dispute quarter.
“I hope I am being sufficiently clear on what I think about ‘sides’ and ‘holy wars’ here.”
JS, do you have any theories on why both Amazon and Hachette seem to be treating this as a holy war and directing followers to “take sides”? I mean, for example, the SFWA recently “picked sides”.
You had a nice post a while ago about your many business arrangements, and how they entangled you into essentially having a stake on both the Amazon and Hachette sides. Is that common among bestselling authors or are you somewhat unique in that regard?
Hugh Howey, champion of the self-published, has had a completely different take on this recent salvo:
It would be interesting to see the two of you debate this subject as I think it may clarify things for those of us (me) who get confused and frightened by all this talk of eBooks and pricing.
Mord Fiddle:
Oh, good. Glad that’s settled.
Charles Pearson:
Hugh’s an excellent fellow and I definitely encourage people to read his thoughts.
dh:
It’s certainly in the interest of both Amazon and Hachette to motivate authors toward their own goals, so why wouldn’t they? I would prefer authors look at each of them critically.
Let’s also be clear: “Authors” are not a monolithic front. Some authors legitimately have interests aligned with Amazon more than Hachette, and vice versa. Acknowledging such is not an issue. I do think an issue is not acknowledging those alignments are usually temporary and contingent.
“If publishers sustain prices for ebooks closer to hardcover and other printed books, at say $14.99, to benefit retailers of books, it harms the consumers, who will be paying a price premium that they otherwise wouldn’t under the Amazon $9.99 model.”
And? If they don’t like the $14.99 price point, they won’t buy it at that price. Eventually the price point will drop and they’ll buy it then. Again, this is what already happens with books: They come out in hardcover to service people who will buy at that price, and then later they come out in paperback at a lower price. This doesn’t “harm” consumers, it’s a standard practice. Also, of course, no one is being forced or required to buy a book (exception: academic market, which is its own animal, so let’s not get into it here), so again the issue of “harm” is not a great one to apply here.
(This would be where people bring up collusion — let me agree that collusion to keep book prices high is bad and harmful. But we’re not talking collusion right now.)
Daniel Knight: “So the term ‘average’ means something different in ‘your’ field? Are you applying the special snowflake argument that in publishing everything works differently (even math)?”
Given that there are multiple definitions of “average” (arithmetical average {or mean}, geometrical average {or mean}, harmonic average {or mean}, weighted average, etc.), then yes, “average” could mean something different to two different people. And given that you misused the term in your second post (if we use the mean on a bell curve then it means that half of the authors would sell more and half of the authors would sell less {unless we assume bias}, not “most would sell more” as you asserted), then you probably shouldn’t pound too much on your authority in the topic.
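The point above that “average” is ambiguous is easy to demonstrate: on the same data, the common means can disagree substantially. A quick sketch with made-up per-author sales ratios (the numbers are illustrative only; note how one breakout title drags the arithmetic mean up):

```python
from statistics import mean, geometric_mean, harmonic_mean

# Made-up per-author sales ratios (units at $9.99 / units at $14.99):
# three modest results and one breakout title.
ratios = [0.8, 1.0, 1.2, 4.0]

print(round(mean(ratios), 2))            # arithmetic mean: 1.75
print(round(geometric_mean(ratios), 2))  # geometric mean: 1.4
print(round(harmonic_mean(ratios), 2))   # harmonic mean: 1.2
```

An aggregate figure like Amazon’s 1.74 could, in principle, be an arithmetic mean dominated by a few big winners while the typical author sees far less — which is exactly the skew objection raised elsewhere in this thread.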
I am not sure Amazon will end up having a single “price” for a book. They have shown the ability to selectively target pricing to individuals. There are all sorts of options to play with that can drive sales. Like I said, it’s space shuttles vs. biplanes.
For the people making the math argument that a 1.74 sales ratio at $9.99 versus $14.99 settles things: part of Scalzi’s argument is that the people who would buy at $9.99 will still be there later, so you can sell at $14.99 first and drop the price afterwards. So a simplistic model would be:
$14.99 x 100 = $1499
$9.99 x 174 = $1738.26
$14.99 x 100 + $9.99 x 74 = $2238.26
Artificially setting a cap on price would decrease overall profits.
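The three figures above can be reproduced in a few lines (same numbers as in the comment; the 100 → 174 ratio is Amazon’s claimed average):

```python
# Revenue under three pricing strategies, using Amazon's 100 -> 174 figures.
high_only = 14.99 * 100              # sell only at $14.99
low_only  = 9.99 * 174               # sell only at $9.99
windowed  = 14.99 * 100 + 9.99 * 74  # $14.99 first, then drop to $9.99
                                     # for the remaining 74 buyers

print(round(high_only, 2))  # 1499.0
print(round(low_only, 2))   # 1738.26
print(round(windowed, 2))   # 2238.26
```

The windowing arithmetic assumes every one of the 100 high-price buyers would still buy first at $14.99, which is the simplification the commenter flags by calling this a “simplistic model”.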
I have no side in this battle, but will take any press release from either side with a grain of salt. So the same week Amazon reported bigger losses than predicted, which caused a stock price drop and has shareholders openly wondering when they’ll make a profit, Amazon puts out a press release that wants lower prices (for the consumers!) and better royalties (for the authors!) with the one publisher they happen to be in contract negotiations with. The grain of salt here is boulder sized.
Honestly, there is a lot of condescension going on here that is really harming the discussion. Pro tip: belittling others will not get them to agree with you. Most people can change their minds if you care to try to convince them in a non-asshole way. Also, if you stop using condescension and supposition and start using evidence and facts, that will probably help as well.
Incidentally, I think it’s clear that many “pro-amazon” commentors actually do disagree, at least in part, with their actions, but still see possible merits that are worth exploring, as well as problems with the way Hachette does things. Or maybe that’s just me. Frankly, a lot of you decrying people for picking “sides” seem to be doing so yourselves, and much more strongly than I or others trying to broaden the discourse. Before you respond to slap me down, if you truly believe you are treating all angles fairly, I am probably not addressing you.
Also, those of you bickering about the actual production costs bla bla bla need to realize that you have been kicking a dead red herring for quite some time now. Yes, some people keep rising to your bait, but that doesn’t change what it is.
“But publishers are actually focused on the health of the publishing market so that there continue to be books.”
This is a rather obvious misconception I’m seeing repeatedly that even Scalzi has eschewed. Publishers are focused on their own health, just like the record labels were during the whole music piracy saga. Except this spat is far less dramatic. Let’s be honest here. Amazon is not going to kill the book industry, as some of you seem to think is possible. Publishers are not fighting for the book industry either, other than to maintain their place in it. They are fighting for their own bottom line and that will affect different authors in different ways. That, my friends is the crux of what we should be discussing (imo), instead of trying to decide which “side” is more of an asshole.
The industry is changing and will continue to change. Publishers need to change with it and stop trying to cling to the old ways, or Amazon will soon be the least of their problems. That might be beyond the scope of this thread, but it’s also a far more important discussion. Just my two cents.
Daniel Knight:
I’ve gone back and re-read your argument and see where we differ. You are assuming that it is the population of price sensitivities that follows a bell curve, not the population of authors (which is what my first reading got). Looked at from that perspective, you are closer to being right, but there are still some hidden assumptions.
If we assume a σ of 0.24 on the μ of 1.74 for price sensitivity, then roughly one in six authors (those whose sales ratio falls below the revenue break-even of 14.99/9.99 ≈ 1.5) will do worse with a price of $9.99 than they would with a price of $14.99. (We’ll ignore the associated questions of different price points for now.) OK, so we’ve got a clear majority of authors who are selling more books, so that’s good, right? Not necessarily, as it could mean that the total profit for the authors and for the companies is lower.
The reason is that the authors who do worse probably have less price elasticity and are more likely to be the authors who sell the most books. That is, people are willing to pay more for popular authors because they like the books more. As a result, the total number of books sold may go up, but the total profit for the universe of authors as a whole could go down, and the profit for best-selling authors (such as your host) will almost certainly decrease.
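The bell-curve arithmetic here can be checked numerically: with μ = 1.74 and σ = 0.24, the fraction of authors whose sales ratio falls below the revenue break-even of 14.99/9.99 ≈ 1.5 works out to roughly 16%, about one in six. A quick sketch (the normal distribution and its parameters are, of course, the commenter’s assumptions, not data):

```python
from statistics import NormalDist

mu, sigma = 1.74, 0.24       # assumed distribution of per-author sales ratios
break_even = 14.99 / 9.99    # ratio at which $9.99 revenue matches $14.99

frac_worse = NormalDist(mu, sigma).cdf(break_even)
print(round(frac_worse, 3))  # 0.159: about one in six authors does worse
```

Note this is the one-tailed fraction below the break-even point; the two-tailed share outside ±1σ would be about a third, which is easy to confuse with it.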
Andrew S: that is a wonderfully simple demonstration of the math, thank you.
“I prefer to debate purely based on evidence and logic,”
That’s what you say, but not what you’re doing. You have used the latter and have actively denigrated the former. THAT is why you are getting pushback.
As a former NY author and now a small press publisher dealing with Amazon as a wholesale vendor, Yesssss to everything John said, Yes to everything Kat Goodwin said.
Please, y’all, go read THE EVERYTHING STORE, which points out that Jeff Bezos picked books as his entry product because of their demographics, nothing else. Amazon routinely price-matches to beat other competitors, eating the losses, and eats the difference on all those yummy discounts you see and on promotions (when it does the picking). That’s how it maintains the low, low prices customers love, while driving booksellers out of business and framing publishers as greedy villains — even though books are the *only* product we sell, whereas books are just the gateway drug Amazon uses to lure customers through the door.
My press has made a lot of money thanks to Amazon, though the gold rush days are over. The dark side of the relationship is a landscape of double-talk, manipulation, and ever-increasing contract demands. (BTW, my authors get 40 percent net on ebooks, and our average ebook price is under $9.99.)
I know the dark side of the NY houses, too, and had both wonderful and bitter experiences there. “It’s just business,” is the best summary in all cases, I suppose.
I don’t trust a word from either side. I’m just doing business, too.

Also, I believe it assumes the increase in sales is constant across the universe of authors. I would expect best-selling authors to increase their sales at the $9.99 price point more than obscure authors, as they benefit from media and advertising.
For example, my sister bought Rowling’s ‘Silkworm’ in hardcover. She’s a fan. I bought it from Amazon as an e-book for $9.99. Despite Rowling’s reputation, I wouldn’t have bought it at over $10.00 (it’s a $14.99 e-book on B&N). Having read her, I might buy her next book despite never having bought any of her prior books. However, I wouldn’t have bought the book at all if it hadn’t been so heavily advertised and reviewed.
None of this matters. The game is over, and what we’re seeing now is the dying gasps of NY publishing’s old ways. Even if Hachette comes out on top in this particular battle and forces Amazon to charge high prices for digital books while windowing the titles to try and help Hardcover books survive for a few more years, it’s not going to change the end result.
Digital is here, the genie is out of the bottle, and at no time in history has the old model survived when faced with a new and more convenient technology. Eventually, Amazon will get what they want. There is no doubt about it.
A few points:

(1) Missing from the discussion thus far is the “barriers to entry” nature of the monopoly/monopsony problem, with regard to Amazon’s short-term vs. long-term strategy. If it is, in fact, trying to drive consumer prices down (and accept short-term losses) in order to be the only (or major) supplier of books to consumers and/or reseller of books from publishers, this can be viewed as predatory pricing – perhaps good for the consumer in the very short run, but less so in the long run, since there are significant fixed costs to establishing a similar e-book/bricks&mortar presence in the market, particularly in light of Amazon’s potential willingness to drop prices enough to make business untenable for a new entrant. Re @Austin’s point on dumping, the Clayton and Sherman Acts do address the issue of predatory pricing – the question is whether current or future governments will act on it.

(2) Another drawback of e-books: digital rights are country-specific, so you can buy an e-book in the US, but it may not be available when you take your e-book reader out of the country on vacation, business, etc.

(3) Consumer response to temporary price changes requires that consumers be aware of such changes. A Scalzi e-book may have been available for $2.99 for a day, but as I was not aware of that, I didn’t buy it. Absent some sort of widget that provides real-time price quotes on all of the books one might be willing to purchase from all of the suppliers that might be able to sell them, the cost of acquiring such price information means that consumers will be operating under conditions of imperfect information much of the time; thus, one cannot expect all of the benefits and results of the free market to obtain, even if you assume other market imperfections away.
What I really need is someone to design “library shelves of holding +8”.
Mord Fiddle: This is only true if we assume the author’s royalty cut remains at the 15% (for p-books; less, I believe, for e-books) maintained by the big 5 imprints
That is a fair point but secondary to the question of pricing; an author who gets 15% royalties when her books sell at $14.99 is not going to magically get 25% royalties when her books are priced at $9.99. Indeed, I would expect publishers to try to decrease author’s royalties in order to keep their profits high. As Scalzi has noted extensively, this isn’t personal, it is just business.
I would expect best-selling authors to increase their sales at the $9.99 price point more than obscure authors as they benefit from media and advertising.
The (scanty) evidence argues against that. Best-selling authors tend to have more demand which means that their readers are probably less price sensitive as a group than readers of less popular authors; indeed, there is some evidence that publicizing an author makes her readers less price sensitive.
@Charlie Stross et al. regarding the arbitration clauses: here’s a link discussing recent Supreme Court cases regarding class-action suits and “binding arbitration” –
Am I the only one who’s not worried about new releases in this scenario? I went to repurchase The Wheel of Time series. Paperback: $5.50; eBook: $14.99. What? You can’t deny the eBook has less value than a paperback and costs less to produce, so please explain this thinking to me — because I don’t get it, and as a result, most publishers can rot with their set-the-price methodology.
(NOTE: My pricing is from an incident that occurred before the collusion between Apple and others was adjudicated. Currently the book I was looking for is $4.99. Much better; who had to win the fight to get it there?)
JohnD – The referenced journal article is dated 2002 and relates only to hardcover/paperback pricing over time. A lot of technological water (and new data) under the bridge since then.
A few other articles of interest tracking numbers:
gwangung:
Books are sold just like widgets! And publishers will die soon! I don’t know what any of the terms involving book marketing and accounting mean, but I will use them anyway! If you point out, (as has now been done about 15 different ways by different people,) that Amazon’s math is wrong, I will give you fake math about how things should be!
Notice how they keep arguing as if there were only two e-book prices — $14.99 and $9.99. Most e-books are sold for less than $9.99. Most books never come out in a hardcover edition. They are essentially arguing about a small handful of books that are initially priced higher because they are new and popular — like any other creative product sold in the market — and ignoring the entire rest of the market.
You never hear anyone going, how dare they sell that designer handbag for $700 when it comes out with the spring collection for the first time, when there are handbags for twenty bucks sold on the street! Or how dare this new digital game be priced $50-200 on its initial release when there are older games that now only sell for $25!
But with books, apparently hardcovers, which are sturdier than paperbacks and more permanent than e-books or audio books and so some customers pay for those features because they want them, are evil. Never mind that more and more self-published authors are also putting out print books in trade paper or hardcover too for a wider distribution of customers. Never mind that publishers have been selling hardcovers just as long as they’ve been selling paperbacks at different prices and nobody freaked out about it ever. Never mind that e-books are cheaper than hardcovers by a lot to an enormous amount. Never mind that you don’t actually ever have to buy a hardcover and they actually have very little impact on e-book prices. Never mind that Amazon sells tons and tons of hardcovers happily. They’re evil! And the symbol of everything that is wrong with book publishing, but not apparently wrong with any other creative or tech industry.
Oh, and if Amazon says that the bestselling author will sell mega more e-books at $9.99, apparently that must mean that it is true. And Amazon’s only concern and demand in the negotiation is about e-book prices, which we already know is bullshit.
I don’t really get the worship of Amazon — I never have. It’s a good company, sure. It expanded markets, not invented them (the digital e-book market existed in the 1990s; online selling existed before Amazon did). Other newer companies now are probably more innovative. It’s been a company impressive for running for twenty years on losses and no profits but still getting investors in its “potential.” It is ruthless in its business practices and grip on market share and probably breaks the law on predatory pricing. It’s helped shore up the losses after the collapse of the wholesale market, but extracts a heavy price for it. It did a neat PR move with self-published authors so that it could then sell goods to self-published authors and their families, which has increased book distribution, so good. (But now self-published authors may be entering a new bind there.) It treats its warehouse workers like shit, and its customer service has gone downhill. It’s not the Death Star and it’s not the Rebellion, and neither is Hachette.
They are simply negotiating. Amazon dragged the authors into the middle of it, because they do love a good Death Star PR spin. And Amazon may have hit the boundary wall of what they can suck out of publishers with their market leverage and strongarm tactics. Which they will survive. So will Hachette.
“Notice how they keep arguing as if there were only two e-book prices — $14.99 and $9.99. Most e-books are sold for less than $9.99.”
This average includes independently published and small press published.
What is the average price of a major 5 publisher e-book?
Mord Fiddle –
Hey, I said that the data was scanty! 8-)
Seriously, though there has been a lot of technological change, I doubt that the buying habits of people have changed all that much as they are closely linked to our mental makeup. (Yes, I’m one of those who thinks that homo economicus is {mostly} a myth.) So though technology has changed quite a bit since 2002, our habits are still stuck in the 1980s.
“However, their point stands; for the average author, a reduction in price results in an increase in sales.”
This point would only stand with a “could” stuck in there. A point made earlier that needs to be reiterated here is: Amazon’s numbers are based on overall sales of a group of books sold at $9.99 versus a group of books sold at $14.99. At no point do they specify anywhere what books are in group A and what books are in group B. Even if they took an even sample of literary fiction, balancing the male and female authors in both groups, debut versus legacy authors, books adapted to film versus never adapted, etc., they’re still looking at two disparate populations of books. Even if they took books priced at one time at $14.99 and now priced $9.99 or vice versa, they’re looking at books at differing times in their lifespans.
Their numbers show that they sell more eBooks at one price point than they do at another, which suggests an author could see more sales of eBooks at the lower price point than the higher, not that they will.
We also have to assume here that increasing sales of eBooks doesn’t cut into sales of hardcover books. If pricing within Amazon’s preferred margin favors eBooks at the expense of hardcover sales, John could end up either seeing no monetary advantage, or possibly losing money in the exchange.
Advantaging eBooks over physical books has a huge upside for Amazon (still sells books, sells Kindles, spends less on shipping, storage, etc.), but it’s far less advantageous if your income derives from selling both.
@mrTroy quoth: “Hey, we give KDP authors the full 70% which would cover the normal publisher’s costs too. Wouldn’t it be nice if publishers shared more of that pie with their authors?”
It would be nice if Amazon actually gave self-publishing authors the full 70%. I get 35% on my 99-cent short stories. For the novels, I get 70% minus “transmission fees” that are about a dime each. And that’s in the “blessed” countries; in other countries, since I didn’t go exclusive, they offer only 35% no matter what price range I’m in.
Which is to say: Amazon apparently likes to nickel-and-dime people, and what they say sounds really good — and then they start assessing fees, or wanting exclusivity, etc., and it becomes… maybe still better than alternatives, but it’s not really what they’re touting. So when they make their pretty, shiny, idealistic comments about what they think the split should be? Grain. Of. Salt.
@mmug — I like your points. :)
@Mikes75 “We also have to assume here that increasing sales of eBooks doesn’t cut into sales of hardcover books”
Indications are that purchases of e-books don’t reduce p-book purchases and can actually increase them. People with e-readers read more books than they did before purchasing the e-reader. And a person who buys 20 p-books a year will likely still buy at least 20 p-books after they buy an e-reader. It’s not a zero-sum game. See the Pew research link above.
e-books have been very, very good to Hachette and company. They were one of the top e-book sellers last year.
No Name said:
Sure I can. For many of the books that I read, I value the size, weight, and portability of an ebook. Instead of carrying two or three paperbacks with me to a dentist appointment or on a trip, I can carry one small device. I also really like the fact that the light on my book reader is adjustable and adds no weight to the device, which means that I can read in bed at night without keeping my wife up (an improvement over either the lamp on the nightstand or a booklight).
I get that I am theoretically giving up the right to resell that book or that it could cease to exist at some point in the future if the retailer or publisher decides to yank my license (all the more reason to buy books without DRM, if that is a concern). However, the reality is that the kinds of books that I buy digitally are, to me, disposable entertainment, much like going to a live concert or movie theater. I read the book and am done with it. Paperback books just end up taking up space in my house, and the hassle of boxing them up and taking them to the local bookstore outweighs the benefit (to me) of whatever pittance I might get in reselling them.
So, while an ebook might have less value to you than a physical book, there are definitely specific classes of books (i.e. those that I read as disposable entertainment) for which an ebook has greater value to me.
@JohnD – Again, check the Pew survey data. It shows our buying habits have changed, and it’s consistent with my own informal surveys – people with e-readers tend to buy and read a lot more books. Seems books are habit forming. ;)
Mord Fiddle:
“Indications are that purchases of e-books don’t reduce p-book purchases and can actually increase them.”
My anecdotal notes: I don’t see eBook sales cannibalizing hardcover sales (they bite into mass market, but I kind of put those in the same bucket, so I don’t see it as a loss — really, it works out just fine). Also, audio sales don’t seem to have an impact on either hardcover or eBook sales. I suspect they address audiences that overlap but have substantially separate elements.
In the end, Amazon has already won this war because organizations like SFWA have proven themselves to be utterly worthless. It doesn’t matter what anyone says on this blog or any other, Amazon has already won.
The debate is over.
Who knew this was a war between Amazon and the SFWA? The things one learns, listening to trolls.
You know rochrist, I have to concede, you are absolutely right, there is no disagreement between SFWA and Amazon, yet another reason that Amazon will win.
DH —
As has already been explained, most of Hachette’s books aren’t hardcovers, only a percentage of Hachette’s books are going to be bestsellers, and like everybody else, Hachette drops the price of its e-books over time. In SFF, in which Hachette has several imprints in multiple countries, half the market is mass market paperback. Most new authors in SFFH are brought out in mass market paperback original only and don’t get a hardcover until they build up enough of an audience that would be willing to buy a hardcover edition first because they like the author’s series and want it.
So you can get nearly all the offerings of your favorite SFF authors in e-books already for under $9.99 (at least if you’re not in Kenya.) You just can’t get that price for the new one that just came out of the big sellers who get hc pubs because everybody wants it right off, so if you want it right off — which is a service you’re getting and has higher initial costs in not only production but marketing and publicity costs — you pay a premium for it. If you don’t like that price, you wait and buy it later when it’s lower. Or get it from the library.
Publishers do this, booksellers do this and self-publishing bestselling authors do this — they’ll put new offerings at a slightly higher price of a few bucks (even though Amazon then takes a bigger cut for the exact same service it’s giving,) and then drop it over time. It’s called selling for what the market will bear because different readers, again, value different books at different levels. One reader won’t pay over ten bucks even for an author he likes; another has no problem with it. Authors don’t make more money by selling only to the guy who won’t pay much and not to the other person who values the author more. He makes the maximum money by selling to BOTH. That’s why we have both hardcovers and paperbacks.
Elizabeth McCoy:
Amazon gave the self-pubs a lot of free passes in the beginning because it was building the marketplace. It doesn’t have to give as many freebies anymore, so it’s taking some of them away and it’s exerting control over the marketplace since it owns 90% of it. If you want to sell with Amazon, Amazon controls what price you can sell for in the whole e-book market (and the self-pub audio market too, sounds like). But it wasn’t giving freebies to the publishers, and it was demanding more fees in co-op money as well. Which is a big part of the negotiations with Hachette — e-book prices of bestsellers have very little to do with most of it.
Blackadder:
Dude, SFWA has got very little to do with price setting and booksellers. It’s an author organization that tries to help authors run their business. It helps authors in legal disputes and to get services like healthcare. It provides information and resources, keeps an eye out for scam operations, offers mentorship and networking, and helps spread awareness of the field and authors with promotional opportunities. It isn’t like the AUW. And it has nothing to do with Hachette and Amazon’s negotiations. SFWA may be effective at some things and not so much at others, but authors coming together to help each other is not worthless.
Mord Fiddle – Again, check the Pew survey data. It shows our buying habits have changed.
I don’t think that our two points are incompatible. People can buy more books and still be price sensitive about some authors (“Huh – a RHDS book for $0.99 – nah, still too much!”) and not so sensitive about other authors (“Oh, boy! A brand new edition of The Silmarillion with extra glitter! And only $99.99!”). So even though folks buy more books, decreasing the price on ebooks won’t necessarily translate into more sales or more profits for the authors; instead, the money that would have gone into buying the book may be used for other forms of entertainment, such as eating or having running water.
John’s post has made it to slashdot, where the idiocy abounds.
Rochrist:
Indeed, it was in reading a Slashdot comment that I was moved to remark on Twitter that I enjoyed reading about the fantasy versions of me that exist in people’s heads, which don’t actually seem much like me at all. But, eh.
@John —
Using the values you’ve given, the earnings are quite close. If you’re an author (and you get a 17.5% royalty), and I say you can sell 100K books at $12.99 and make $227,325, or you can sell 130K books at $9.99 and make $227,272, is that extra $53 worth 30K fewer readers (and their potential purchases of your other books) to you?
100K sold at $14.50 with a 17.5% royalty rate is $253,750. 145K sold at $9.99 with a 17.5% royalty rate is $253,496. You lose roughly $254 but gain 45K additional readers. Not a bad trade.
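The arithmetic in these two pairings can be sketched directly. To be clear, the 17.5% royalty rate and the sales counts are the hypothetical values from the comment above, not real publishing data:

```python
# Hypothetical royalty arithmetic: at a flat 17.5% royalty, how do earnings
# compare across price points, and how many copies must the cheaper edition
# sell to break even with the more expensive one?

ROYALTY = 0.175  # assumed flat e-book royalty rate

def earnings(copies, price, rate=ROYALTY):
    """Author earnings for a given unit count and list price."""
    return copies * price * rate

def breakeven_copies(base_copies, base_price, new_price):
    """Copies needed at new_price to match the earnings at base_price."""
    return base_copies * base_price / new_price

# 100K copies at $12.99 vs. 130K copies at $9.99: nearly identical earnings
print(round(earnings(100_000, 12.99), 2))  # 227325.0
print(round(earnings(130_000, 9.99), 2))   # 227272.5

# Exact break-even: about 130,030 copies at $9.99 match 100K at $12.99
print(round(breakeven_copies(100_000, 12.99, 9.99)))  # 130030
```

The same function reproduces the second pairing: 100K at $14.50 earns $253,750, and 145K at $9.99 earns about $253,496.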
The ratio line from $9.99 to $14.99 probably resembles a sine wave more than a straight line; if the ratio of $12.99 to $9.99 had been better, Amazon would have used it.
“It doesn’t have to give as many freebies anymore, so it’s taking some of them away and it’s exerting control over the marketplace since it owns 90% of it.”
Amazon had 90% of the market in 2008. Since then their share has fallen to 55% to 65%, depending on whom you read, and it’s falling each year. Maybe not a lot, but it’s falling. If Amazon’s trying to achieve a monopoly, they’re doing it wrong.
@Elizabeth McCoy: “It would be nice if Amazon actually gave self-publishing authors the full 70%.”
Indeed you are correct, I was aware of the different royalty rates that KDP offers at different price points (and other considerations), but I either forgot or never knew that other fees came out of that royalty.
Do you know if legacy royalties to authors include these fees or if they are handled separately? Also, are Amazon’s fees on their royalties offset by the fact that they are paid monthly (I believe) instead of six-monthly, giving you an opportunity to earn (on average) three extra months of interest on the earnings (or contrarily, three fewer months accruing interest on credit)?
Not that nickel-and-diming anyone is a good thing, and it would be nice to see Amazon simply incorporating fees into its percentage cut rather than taking a “percentage-plus-fees” approach, as long as that doesn’t lead to a higher cut being paid overall.
@Kat Goodwin: “Notice how they keep arguing as if there were only two e-book prices — $14.99 and $9.99.”
You make some great points, but it would be even better if you could spend less time attacking straw men and easy arguments. Obviously creative pricing exists, (nearly) nobody is arguing that there are only two e-book prices.
Like it or not, books are an entertainment good, and are largely fungible with other entertainment goods, such as movies, computer games, board games, and to a lesser extent cups of tea or coffee while out with friends. The point of lowering prices and increasing demand is partly down to price sensitivity within the market… but also to price sensitivity between markets. If I can afford to buy a computer game or a book, but not both, then I will make a choice based on which one I think I will get more enjoyment out of for the price.
Everyone who has released figures in every entertainment industry has demonstrated this price elasticity, so it seems that it doesn’t overly matter if core fans are price insensitive, since you get to enjoy price sensitivity over the whole entertainment market.
You don’t have to agree with me, but saying that I’m wrong because I’m not in the publishing industry, without backing it up, doesn’t make for a very compelling argument.
@John Scalzi: I enjoyed reading about the fantasy versions of me that exist in people’s heads, which don’t actually seem much like me at all. But, eh.
And yet you don’t do or say anything to disprove them. When it comes to public perception, it unfortunately doesn’t matter what you think you should be perceived as. Just as Mord Fiddle mentioned, people react to you in a particular way because that’s how you come across. If you want a different reaction, act differently :-)
Or be true to yourself, and don’t give a damn what people think. I think you’ve got the first bit down, just stop complaining about other people not liking it and you’ll get the second bit. Or at least you’ll look like it, which is most of the battle.
Finally, to those who are suggesting that you can maximise profits by releasing at a premium price point, then dropping to mass-market afterwards… this is demonstrably true, but can you accept that this might be considered as rude and exploitative by your fans? I mean really, why would you want to charge your biggest fans 50% more than people who aren’t such big fans of your work? Aren’t they the people that you want to reward the most? I mean, these are the people that really want you to keep writing, so I can see why they are willing to pay more to support you… but I’d like to offer an alternative, where you let your biggest fans pay the same as everyone else as thanks for their loyalty, and then offer them other ways to support you further if they want to and can afford it.
Hardcover sales are great for your big fans, since they are a great product (if you have the space) that lasts a long time (although I don’t know if this is a good example, since I don’t know how much extra you make off hardcover sales), donations, book signing tours, speaking events… I’m sure there are many other ways to maximise both revenue and fan goodwill, with the benefit that all of these add-ons are all piracy-proof!
It’s interesting: I have been buying ebooks ever since the first edition of the Kindle came out, and my purchase of physical books has gradually dwindled down to zero. It took about five years, though. My overall purchase of books has certainly gone up.
My guess is this pattern is not going to prove unusual; there are a lot of habits to build up and tear down with regard to the physical object that is a book, but eventually the convenience factor gets you.
Ian:
Again, all of these figurings are based on an either/or pricing system — that you can sell the e-book for either $14.99 or $9.99 but not both over the course of the book’s sales time. And it is based on the assertion that books are guaranteed to sell more copies at the lower price point — that the lower price point will automatically bring in these new readers.
As has been explained, selling the e-book at $14.99 and then later $9.99 or $6.49 in conjunction with paperback, can sell more copies than just selling the e-book at $14.99 or $9.99. And the volume rates are not guaranteed for any one book, as any person in publishing can explain to you. Books don’t sell at the same rate, just because they are at the same price. Amazon can no more guarantee a bestseller will sell 175K or 145K at the $9.99 price point than they can build a unicorn. Nor can they guarantee that if the publisher first prices the e-book at $14.99 when the hc comes out, and then drops it later to $9.99 or lower, that they won’t sell 175K when the book is dropped to the $9.99 price point plus the 100K sales or more at the $14.99 level. The e-books that are doing the best are in fact new bestselling fiction coming out with the hardcovers — the ones at the higher price point. It’s entirely not uncommon for an individual book to sell better in hardcover than it does in paperback at the lower price point.
So there’s no point in saying that the earnings would be relatively close, as you have no way of knowing that those would be the earnings, and neither does Amazon. Nor do publishers have to make a Sophie’s choice on book prices — they can do all of them.
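The "not either/or" point can be made concrete with a sketch. All of the sales figures below are invented for illustration (the 17.5% rate is carried over from the earlier comment); the point is only that windowed pricing sums earnings over both price points rather than forcing a choice between them:

```python
# Illustrative only: invented sales numbers, not publishing data. Windowed
# pricing sells at the high price first, then again at the low price, so
# total earnings are the sum over all windows, not an either/or choice.

ROYALTY = 0.175  # assumed flat royalty rate

def windowed_earnings(windows, rate=ROYALTY):
    """Total author earnings over a sequence of (copies, price) windows."""
    return sum(copies * price * rate for copies, price in windows)

# Selling only at $9.99 vs. selling at $14.99 first, then dropping to $9.99
low_only = windowed_earnings([(175_000, 9.99)])
windowed = windowed_earnings([(100_000, 14.99), (120_000, 9.99)])

print(round(low_only))   # 305944
print(round(windowed))   # 472115
```

With these made-up numbers the windowed schedule earns more, but that is an artifact of the inputs; as the comment argues, nobody can guarantee what any individual book would sell at either price.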
Amazon still has 90% of the self-publishing book market, which is the market I was referring to in reference to the freebies that Amazon gave the self-published authors. They have about 55% of the whole e-book market. They did have a very nice monopoly of the whole e-book market, but that was broken by Apple and other players in the field as the e-book market expanded. They milked it as long as they could. But they maintain their monopoly on the self-published market, largely by having the largest sales base and by ruthlessly squashing as many competitors for that market as possible and using contracts with self-published authors to control their prices in the market.
Troylaurin:
Yes and no. As has been explained before, book publishing depends on a core readership that regularly buys books, tries new authors, and puts aside money specifically to buy books. (About 20% of the population.) And then they try to drag other people in, largely with word of mouth and things like movie adaptations and hope that a small percentage of them add to the core readership.
But dragging other people in is not simply a matter of low prices. It’s a matter of what books are offered. People who become interested in a drag them in book, like say Twilight or The Yiddish Policemen’s Union, are usually the ones most willing to pay the premium price for it, not the cheap price. When a book becomes a film, for instance, the hardcover, trade paperback and mass market paperback editions of that book may all end up back on the bestseller list at the same time, despite the mass market being the cheapest (and the e-book too — that’s one time where the e-book price may be raised again because they know people will pay it.) Cheaply priced editions of books, even bestsellers, are readily available — and yet people will pay the higher priced version if it has the features and speed they want, just like with other creative products.
If I’m choosing between a computer game and a book and my determinant is price, I’m looking at a $50-60 game, plus a $400 gaming system, plus several hundred dollars in accessories like headphones and mics — that’s a lot of cash versus even a $200 e-reader and some e-books. But let’s say I’ve got the equipment already, so it’s just the $50 game. I can buy two hardcovers for that, five paperbacks, three trade paperbacks, and anywhere from three to twenty e-books. Now I can play the game for a long time, more than once, with friends, etc. so it may be worth putting my money into that if I don’t like to read. If I do like to read, then I may go for getting more books instead of one game that will be obsolete in four months or require me to shell out more money for updates or a whole new version. If I like both, I might go for a cheaper, older game at the $25 level if I can find one and buy a book or two. Now I have both — whoopee! Not everybody makes the same buying decisions, so it’s not a one to one ratio.
Movies in the theater are fricking expensive. And yet my husband and I go see big special effects movies on the big screen right off, because we don’t want to wait. (We’re dragging the clan to Guardians next week.) We get satellite TV, which gets tons of movies only a few months after they were in theaters. If we watch one we saw in theaters, since they get a fee from the TV folk, we’re paying twice. And yet we also have to have Netflix, which has some of the movies and lots of TV shows for a lower but additional price. And then there are DVDs, some of which you can borrow for free from libraries, but some of which we’ve bought. Those are purchasing choices we make. They don’t have to be the choices others make, or maybe even can make. If price were the issue, there are a lot of ways to get things cheap, or free, especially if you wait. Price isn’t the main issue.
And that’s just fiction. The bulk of the money in book publishing comes from non-fiction, not fiction, which does not compete with movies, games, etc. Non-fiction provides information, information that you can very possibly get for free on the Internet. And yet people still buy non-fiction books for a variety of reasons in a variety of editions, very few of which have anything to do with price.
Amazon’s figures in its media spin are bullshit fluff. Amazon has gotten self-published authors to price their books as low as possible for several reasons, a prime one being it lets them control the market and crush competitors who are stuck also selling those e-books at the low price. And it works to a degree because new authors will get more gamblers for a cheap price. But when the self-pub author becomes a bestseller — they raise their prices because people are willing to pay that price, and many of them have made deals with print publishers and have hardcovers, etc. at the higher price because they will sell. They sell at the higher and they sell at the lower. How high and how low depends on the particular book, the selling venue and a lot of other factors.
So I’m not saying you are wrong because you are not in the publishing industry. I’m saying you are wrong because you are saying things that are wrong (which likely comes from ignorance of the publishing industry.) And rather than just saying that you are wrong without backing it up, I’ve been writing paragraphs and paragraphs explaining aspects of book publishing and pricing issues to show why you and others are wrong. And others in the industry have explained why the math is wrong and Scalzi has explained why the math is wrong. If I back it up anymore, people are likely to pass out.
If it’s demonstrably true, why do you keep arguing that it’s not? Also, why is it rude or exploitative? No one makes readers buy any of these things at any of these prices. They choose to do so, knowing full well that they can get it cheaper in another format later on or at the same time, or get it for free from the library, which you can’t do with a lot of other stuff. Why is it not rude and exploitative that there are movie theaters showing movies for premium prices? Why is it not rude and exploitative that a game sells for $60 when the game will be only $25 the next year, and why is it not rude and exploitative that the new gamebox for the games will not play the old games and you have to get all new versions if you get the gamebox? Why is it not rude and exploitative that a handbag is sold for $700 from a designer but then will be sold for $100 a year later? Why is it that books — one of the cheapest, most valued creative products available — are the only product where it’s rude and exploitative to offer people willing to pay authors a decent amount the choice of a hardcover version? Why is it rude and exploitative to have sales where prices drop?
Books are not gallons of salad dressing. If you value a book, you decide what you will pay for it or if you will get it at all.
Readers don’t/won’t pay for book tours and most speaking events. Authors can’t even get cons to pay for them to come anymore, or even let them in for free. Some projects can get funding through things like Kickstarter, but by and large authors don’t get donations. Readers also seldom buy supplementary merchandise from authors, unless there are dramatic adaptations — and the authors don’t get much of a cut from the adaptations’ merchandising. The “you should come up with other ways to cadge money because I don’t like the existence of hardcovers” argument is a strange one. If you don’t like hardcovers, don’t buy them.
Unholyguy — E-book sales are leveling off, so no. E-books have many pluses. But there are reasons people buy print instead. There are a lot of people who can afford to occasionally buy a print book, but can’t afford to buy an e-reader and its accessories and its batteries and have to replace it every few years since they are designed to die as soon as possible or may get damaged or stolen. Hard copy books also have their uses in permanence. People like to use books to decorate as well as read. For some readers, however, e-books can be very useful, or at least some types of books as e-books can be useful. It varies.
Bundling now, that I still think may end up more common. But ironically, Amazon has put a big spike in bundling.
Deciding on how to spend your finite amount of money doesn’t make the things you are choosing between fungible even if they all fulfill a general need of entertaining you. If they were fungible, buying birthday presents would be a whole lot easier.
@Kat Goodwin, again, thanks for discussing. You continue to raise good points, but I think you also continue to see things from the same limited point of view, which contributes to other people looking to be wrong.
Amazon mentions and compares two specific price points in its PR, but never claims that no other price points exist. Near the end of the PR it distinctly implies a range of price points: “Is it Amazon’s position that all e-books should be $9.99 or less? No, we accept that there will be legitimate reasons for a small number of specialized titles to be above $9.99.”
And yes, a number of people have focused on those two price points, on both sides of the discussion, which is why I hedged with “(nearly) nobody”. I’d like to invite you to ignore them as they are obviously not reading what you’re saying, so you won’t have to get frustrated repeating yourself at them.
“I’m looking at a $50-60 game, plus a $400 gaming system, plus several hundred dollars in accessories like headphones and mics … I might go for a cheaper, older game at the $25 level if I can find one”
The computer game market parallels the book market much more than you seem to think. There’s a bunch of legacy-published stuff that sits at high prices – $50 and more at first release, and some of those are pure gold. There are also huge swathes of games released at $10 new (strangely never $9.99), and at any particular point in time there are any number of games available anywhere from $2.50 to $25 to whet one’s appetite, between Steam and GoG.com and Desura and Humble Bundle (who also offer e-book bundles now)…
Movies don’t just include the cinema, since renting a movie from iTunes or similar runs around $5, or a subscription to Netflix if you happen to live in the US. If we’re talking DVD or blu-ray, $25 could get you a copy of a beloved movie after the cinema run, comparable to buying a hardcover copy of a beloved novel.
So at a variety of price points – $5, $10, $25 – there are direct comparisons available between books, movies and computer games, and there’s a growing number of consumers who span all three markets (or more). You’re right, this only applies to fiction though. I have no idea if this is at all applicable to non-fiction writing.
“And others in the industry have explained why the math is wrong and Scalzi has explained why the math is wrong. If I back it up anymore, people are likely to pass out.”
I think you’ve been restating your claim more than backing it up. Note that yet others in the industry have explained why the maths is right, too. See JA Konrath and Barry Eisler in particular; I haven’t had time to read other opinions at this point. Those two in particular seem to have done quite well for themselves using Amazon maths. Does that extrapolate to the rest of the industry? I have no idea, but they seem to think so.
“If it’s demonstrably true, why do you keep arguing that it’s not?”
I probably left that impression from my first post, but I think I’ve stopped arguing that since then.
“Also, why is it rude or exploitative? … Why is it not rude and exploitative that there are movie theaters showing movies for premium prices? Why is it not rude and exploitative that a game sells for $60 when the game will be only $25 the next year …”
Just because everyone does it, doesn’t make it rude. As I mentioned in my last post, I think it’s rude (I guess many would disagree) particularly because I enjoy reading series, and a high initial list price like this that feels like a slap – “You want to know what’s going to happen to your favourite characters? I know you’ve already been waiting for a year or more, but if you don’t want to wait for another year then you’ll have to pay more! Bwahahahaha!”
I know that other people find it rude in other industries as well. In computers in particular, if the price of a product drops a few months after it’s released people tend to complain quite loudly, often to the point of demanding (and sometimes getting) refunds.
As to why I think it’s rude… it’s really because you’re offering a one-sided contract. Take it or leave it. If I leave it though, is that because the price was too high, or did I miss the release for some reason, or was I uninterested at any price? And how crappy do your fans feel when they buy at the higher price the week or the day before the price drops? Perhaps if the high initial release price also included dates of expected price drops then it isn’t rude any more, because then I can see the terms of the contract I’m potentially agreeing to.
I can’t stop anyone from pricing high and then dropping the price over time, and I don’t expect to be able to. I can only share why I don’t like it when it’s done to me, and hope that somebody cares to listen. If you are simply after a way for fans to prove that they are willing to support you in your endeavours, then give them whatever you think is a fair price, and let them donate if they think the fair price is higher. I don’t know if such an approach will work for everyone, but some artists (musicians, comedians, and authors, at least) have done ok out of releasing their works for free and asking fans to pay what they think it’s worth. Heck, even our host John Scalzi has been in a Humble Bundle.
“Why is it not rude and exploitative that a handbag is sold for $700 from a designer but then will be sold for $100 a year later?”
That’s actually a bit different – the handbag isn’t the product in this case, being in front of the fashion trends is the product. So after a year, you’re not buying the fashion trend, you’re only buying the bag, and that’s what the price is reflecting.
Most books aren’t fashion items though, so I don’t think it quite applies here. Then again, Dan Brown and Stephen King might disagree…
“Readers don’t/won’t pay for book tours and most speaking events.”
Perhaps not, but budding authors who like your work will. Smaller market, and sorry for the shifting goalposts, but still non-zero if available.
“The you should come up with other ways to cadge money because I don’t like the existence of hardcovers is a strange argument. If you don’t like hardcovers, don’t buy them.”
I suggested that hardcovers fit exactly into the model of offering a larger product to allow fans to support you, why do you think I don’t like hardcovers? I’m quite a fan of hardcovers, but I unfortunately can’t fit any more into my house. Even paperbacks require a one-in, one-out scheme at my house at the moment, which is part of what I like about ebooks.
I do find it interesting that you consider my proposal to be cadging money off your fans, but outright charging them more for the same thing just a bit earlier than everyone else is fine.
And that’s more than enough from me. Have a great weekend, folks.
@CS Clark,
But those categories are fungible, at least accounting for taste. If you know someone likes thrillers (and doesn’t dislike either movies or books), then a thriller novel or a ticket to a thriller movie are largely comparable. It doesn’t even have to be the same genre, as long as you know that person likes all the options.
Sure, it doesn’t help in narrowing down a person’s tastes, but that’s not enough to discount the argument that this is the way that a growing part of the world is choosing to spend their disposable income.
It seems to me that paper books are more fungible than ebooks because mushrooms prefer the more biodegradable food source.
@MrTroy – Fungible doesn’t just mean a bunch of things you might potentially like that have something in common. Building a rocket, fighting a mummy, climbing up the Eiffel Tower, discovering something that doesn’t exist, giving a monkey a shower, surfing tidal waves, creating nanobots, locating Frankenstein’s brain, finding a dodo bird, painting a continent and driving your sister insane are not fungible even if they’re all good ways of spending your summer vacation.
And, come to think of it, doesn’t your whole argument about the rudeness of asking people to pay X rather than X – 5 + a donation revolve around a hypothetical reader who is vexed by having to wait six months until the price of Book V of Blood Wrath Of The Dragon King’s Vampire Storm Zombies drops down to what they are comfortable with? How does that happen if they won’t be (almost) exactly as satisfied with buying Book XLI of Lord Angst Magic And The Apocalypse Of Doom? Or getting a coffee?
Troy Laurin:
“And yet you don’t do or say anything to disprove them.”
Alternately, I clearly state my thoughts and they ignore them for their own version of what I said, which in my experience is rather more often the case. In which case I shrug, because that’s their karma. There’s a lot of reading comprehension fail (for example, in this particular case, the illogical inference that noting a problem with Amazon’s position a priori means one is on the “side” of Hachette, especially when I’ve taken pains to note both companies work for their interest, not mine), and it’s not my job to wander about the field, gently teaching people how to parse arguments. Grown humans should know how to read, and a quarter century of being a professional writer suggests strongly I know how to write.
“this is demonstrably true, but can you accept that this might be considered as rude and exploitative by your fans?”
You know, if someone holds as a cherished tenet of their life that I am only allowed to make money in a manner they find personally acceptable, otherwise I run the risk of being seen as rude and exploitative, perhaps they will accept that I might tell them to fuck right off. If work of mine is not at a price point they find acceptable, they can wait until it is, or they can go to the library and read that copy. Or they can pirate it and never pay me at all, although if they prefer never to pay me or the people I work with over waiting a bit for the price to come down (or visiting the library), I question whether they are “fans” at all.
I like video games, but I very rarely pay $60 for them, so it means that I wait for the price to come down to a price I’ll pay. I don’t consider it “rude” or “exploitative” to price something at what the market will bear, and then drop it when that market goes away. This is because I am a grown human being in control of my emotions and I don’t get into a stompy hissy fit when I can’t have a mass-produced commercial item that I want the way I want right this very second. This is partly because as a grown human being I recognize that it’s economically beneficial for the video game makers to address that upper market tier, even if I can’t see why I would want to be a part of it, and partly because I can keep myself busy with other things until that game’s price point drops to something I’ll pay.
That said, I like the idea of the “why don’t you just do [x] instead of what you’re doing now?” plan you suggest, and I heartily encourage you to develop that system and mature it to the point where it’s in fact economically viable for me to participate without penalty (I will look askance at you, however, if you suggest I have to do all the heavy lifting and carry all the risk). I’ll note that if you follow my career at all, you’ll have seen many many examples of me trying various economic models to see how they work, because I’m always interested in new ways to make money; some have worked, and some have not, and I generally share such information.
However, in the meantime I hope you will understand when I say that I will also continue to use a distribution model that I know benefits my career and income, and continues to do so at this point in time. I know my fans like it when I get paid, because it means I don’t have to do anything else other than write the books they like. And this is a fine way to get paid.
Ian – Using the values you’ve given, the earnings are quite close.
Only for values close to one σ. Get two σ away and the discrepancy is much larger. For example, there will be some authors whose works only have a price sensitivity of 1.26 at those two price points (see the discussion with Mord Fiddle for why). One of those authors would see their readership increase by 26,000 new sales but their net sales would decrease to $1,258,740 for a net loss of $240,260 and a royalty loss of $84,091 on assumed sales of 100,000 units. Don’t know about you, but $84k is pretty significant money to me.
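The arithmetic above can be checked directly. This is a hypothetical check only: the 100,000-unit baseline and the 1.26 sensitivity come from the comment itself, while the 35% royalty rate is my assumption, back-inferred from the quoted $84,091 royalty loss.

```python
# Hypothetical check of the comment's arithmetic.
# The 35% royalty rate is an assumption, inferred from the quoted royalty loss.
base_units = 100_000           # assumed baseline sales at the higher price
high_price, low_price = 14.99, 9.99
sensitivity = 1.26             # sales multiplier when dropping to $9.99
royalty_rate = 0.35            # assumed author royalty rate

high_revenue = base_units * high_price                    # revenue at $14.99
low_revenue = base_units * sensitivity * low_price        # revenue at $9.99
net_loss = high_revenue - low_revenue                     # publisher revenue lost
royalty_loss = round(net_loss * royalty_rate)             # author's share of the loss

print(high_revenue, low_revenue, net_loss, royalty_loss)
```

Under those assumptions the numbers match the comment: $1,258,740 at the lower price, a $240,260 revenue shortfall, and an $84,091 royalty loss.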
You lose $253 but gain 45K additional readers. Not a bad trade.
But shouldn’t the choice to make that trade be in the hands of the author or publisher and not the wholesaler? And, as noted above, the price per reader can become very high indeed for the more popular authors.
The ratio line from $9.99 to $14.99 probably more resembles a sine wave than something straight,
Nope. It is far funkier than that with lots of local minima and maxima. But I agree that Amazon picked the points they did in part because they helped to make a better case. My complaint is that it is an incomplete case.
@ JohnD
“But shouldn’t the choice to make that trade be in the hands of the author or publisher and not the wholesaler?”
Publishers made that choice the first time around with Agency Pricing. $9.99 and more units sold wasn’t good enough. $12.99 and $14.99 was the future, according to them, and that’s what they’re trying to get back to.
I know more than a few authors who are upset at how their books have been priced in the past, and have yet to meet one who’s traditionally published who has final say over what their work sells for. And remember, we’re talking about an industry that doesn’t do MSRP. It prints the prices directly on the product, and hardly ever changes them downward.
If Amazon’s ratio is wrong, why hasn’t Hachette said so? Surely there’s a bean counter with a spreadsheet somewhere in France who could look at the numbers as well?
I could use a jpg of Scalzi looking askance at me just for those times when I contemplate doing something counterproductive.
Mr. Scalzi, I’ve defended you against SJWs, as being free to speak your mind, because I thought you were intelligent. This post has removed that belief. You may, or may not, be a good author (for certain values of author :-)), but you clearly do NOT know business economics. I do. For the sake of assumptions, a minimum wage worker earns $6.12/hour after taxes (FICA and income, no state/local income taxes). Your $14.99 ebook costs 2.45 hours of pay; a $9.99 ebook costs 1.63 hours, or _50%_ less. I’ve spent a lot of years working at “low paying jobs,” so I know what choices you have to make. If you make $50K/year, $14.99 is trivial (probably), but at $14.5K, it’s very different.
As to prices in general, Amazon is 100% right in saying that “book prices are not very elastic.” (Elastic pricing means the price has little effect on sales.)
For example, fuel (gas & diesel) is somewhat elastic. A 10% rise in price does not mean a 10% or more drop in sales. Food is more elastic in vegetables and fruit than in meats. A $10/lb. steak can be replaced by cheaper cuts, masking the _individual_ cut’s elasticity. OTOH, clothing, housing, and furniture are very *non* elastic in price. Doing without “new,” or just not buying, are very real choices, and often used. Another example is books of all kinds. Books compete with free TV, libraries, movies, DVD rentals, etc. Every book purchase (text books and reference books are an exception) has to have a _perceived_ value of more than the time required to earn the purchase price. James Patterson, L.E. Modesitt, David Weber, JD Robb (Nora Roberts) and Stephen King may be able to sell ebooks at $14.99, but most can’t. They don’t have fanatical (I want their books at whatever price) readers. Maybe your sales wouldn’t change, but *most* authors will sell more at the $9.99 top price. _That_ leads to more royalty income.
Finally, yes the retailer/publisher “sets the price,” but the *buyer* decides whether or not to *pay* that price. If, as a buyer, I decide that a price is “too high,” I don’t buy the book. *No one* has the power to _make_ me pay a price higher than _I want to_. (Note: my first book is about to come out, and I pay close attention to this subject. I want to sell the _maximum_ number of books, at the highest profit {royalty} I can. Maybe other authors don’t care, but _I_ do.)
If Amazon’s ratio is wrong, why hasn’t Hachette said so?
It isn’t a question of the ratio being “wrong”; the ratio is incomplete. It tells us the average change in sales for authors as a whole for two selected price points. It does not, for example, tell us what change in sales John Scalzi could expect if his latest book was priced $9.99 instead of $14.99; it is possible (though unlikely) that sales would actually decrease (marketing wonks call this the “discount effect”). Without knowing that information for each individual author, we cannot tell if the price change that Amazon wants will be a net benefit to the author or not; indeed, it is likely to be a net loss for the more popular authors as their readers are less price sensitive (i.e., would be willing to pay the higher price for the book).
The only thing that we know for sure is that Amazon would make more money if the book prices were lowered to $9.99. Not that there is anything wrong with that, but it would be nice if the authors made more money, too.
Yeah, no, citation needed and all what the kids say. Are you really trying to argue that the market for books, movies, computer games* and board games is nearly identical, such that consumers really don’t care whether they read a Scalzi book or play Monopoly, they’ll just pick whichever is cheaper and more available? Or that all this stuff about ‘bestselling authors’ is illusory, and people will always opt for the $9.99 e-book of Author McNoName’s latest thriller over paying $14.99 for the newest Brad Thor or Dan Brown novel – after all, they’re equally available and one is cheaper?
Because that’s what “fungible” means; the goods are so like one another that substitution doesn’t matter. If the grocery store bagger drops my sack of flour, they hand me another one; there’s no real difference between the one that broke and the replacement. What you seem to be trying to say is that people may buy Book A or go to Movie B if they can’t get Book C. Setting aside whether that’s even true, that’s not “fungible”. Fungible would mean that if you ordered a Scalzi book from Amazon, and they were out, so they sent you Danielle Steele’s latest instead at the same or lower price, you’d be just as happy.
*Even ‘computer games’ is not a fungible category, since that encompasses everything from mobile games to freemium MMORPGs to blockbuster episodic titles; you can’t possibly be saying with a straight face that the audience for the next Grand Theft Auto sees that game as fungible with Clash of Clans.
Walter Daniels:
“James Patterson, L.E. Modesitt, David Weber, JD Robb (Nora Roberts) and Stephen King may be able to sell ebooks at $14.99, but most can’t.”
And this argues for artificially capping the top price for all authors and their books how, exactly?
Note also that the authors you note sell very well, and add significantly to the profits of the publishers, some of the profits of which will go into the acquisition of works from other writers. I’m not entirely convinced that arguing to publishers that they should remove a significant chunk of their profit potential, thus starving them of the funds they need to run their business and develop new authors (and thus, new sources of income) is one that they are going to see the wisdom of.
Likewise, when an author can’t sell at $14.99, what happens is the publisher prices their works for less. This already happens; take a look at the vast number of books between the $14.99 and $9.99 price points (and also, below $9.99 as well). And yet again, the idea that books may be priced at one level to take advantage of those willing to pay it, and then at another level for another audience, appears to be disregarded. When Amazon contends that book prices are not very elastic, it (and you) appears to be willing to elide its own experience at selling books at price points ranging from $14.99 to $.99, finding willing consumers at each, and moving prices about to take advantage of different groups of consumers. As I’ve noted before, one of the great things about ebooks is the ability to have more flexibility in pricing — I’m not sure why I or you or anyone would want to choke off that flexibility on either end of the price scale.
Let’s stop pretending that Amazon doesn’t want books at no higher than the $9.99 price point for its own reasons — i.e., to lock people into the Amazon ecosystem, and to help winnow out other competitors — and isn’t massaging its messaging to aid in that quest. I know you’ve already insulted my intelligence here, but don’t insult it that way.
“you clearly do NOT know business economics. I do.”
I have reason to doubt this assertion, based on the available evidence.
What’s lost is the fact that Amazon wants books to sell at $10. Unfortunately that’s a long way from the price of comparable digital media—games, tunes, shows, apps—which is about a buck. So the bad news for publishers has only begun. And it’s bad news for Amazon as well.
“Finally, to those who are suggesting that you can maximise profits by releasing at a premium price point, then dropping to mass-market afterwards… this is demonstrably true, but can you accept that this might be considered as rude and exploitative by your fans?”
I think it’s rude (I guess many would disagree) particularly because I enjoy reading series, and a high initial list price like this that feels like a slap…
When people talk about “fan entitlement”, this is what they mean. This assumption that it’s about you, personally. It’s not. Authors (and filmmakers and musicians and game developers and their attendant studios and publishing companies) are not your buddies. They don’t know you, they aren’t in it to do something for you, they don’t particularly care about you. I imagine that may come as a shock, if you’ve been operating on an assumption that it’s otherwise, but there it is.
They do care about your money. It is a solution to the problem of their own prime motivators: food, shelter, reproduction. When Scalzi posts funny comments about how he’s having to spend the day writing because he “like[s] to eat”, or “has to pay [his] mortgage”, or “need[s] to pay for Athena’s college”, he’s not actually making a joke. And so, he wants to price his work at the highest point the market will bear, in order to make the best living possible. I’m no staunch capitalist (I’m a public school teacher, FFS), but even I understand that that’s the deal.
Mr. Daniels, here’s how I know that you both don’t know the book industry very well, and haven’t even bothered to read this (or any other) discussion thread on the topic: while your analysis about real costs probably seemed clever and on point, it is in fact irrelevant because, as has been pointed out several times, minimum wage workers are not the market for books. Books are, as has been pointed out, a luxury item marketed to reasonably affluent customers. Low income customers have long known that they can get books for free, if that’s how they want to spend their small amount of leisure time. Further, despite your expertise in business economics, I think what you’ve done here is allowed your private financial choices to color your analysis. You have, as they say, allowed it to become personal.
Winstuff, you might have a point if those were actually “comparable digital media”. But since they aren’t, well…
@Walter Daniels – “(Elastic pricing means the price has little effect on sales.)
For example, Fuel (gas & diesel) are somewhat elastic. A 10% rise in price does not mean a 10% or more drop in sales.”
Technically speaking, price elasticity is defined as the absolute value of the ratio of the percentage change in sales to the percentage change in price. Ratios higher than one are said to show elastic demand, while ratios less than one show inelastic demand. Thus, if a 10% rise in price does not result in a 10% or more drop in sales, the price elasticity of demand for fuel is deemed “inelastic”. Obviously, there are degrees of elasticity (and inelasticity), and the characterization of demand depends on the broadness or narrowness of the categorization of the item of which you speak: “entertainment” vs. “written entertainment” vs. “books” vs. “fiction” vs. “space fiction” vs. “space fiction by someone whose last name is Scalzi” etc. There is also a corresponding measure, “income elasticity of demand”, measuring the sensitivity of demand to changes in the level of income. This measure is not insensitive to the level of income being considered – a 10% increase in the price of books will have more impact (% change in expenditures on books) on me than it will on Donald Trump, despite the fact that books are a line-item in our family budget, ranked behind food and shelter, but before clothing. So, when Amazon says that book prices are not very elastic (a mystifying comment, since elasticity has to do with the responsiveness of demand to a change in price), are they suggesting that changing prices will do little to change demand? If so, they are undercutting the price point argument they are making.
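As a concrete sketch of that definition, the snippet below computes a simple point elasticity. The 26% sales lift is borrowed from JohnD’s hypothetical upthread; none of these figures are real sales data.

```python
def price_elasticity(p0, p1, q0, q1):
    """Absolute value of the %-change in quantity over the %-change in price
    (simple point elasticity, using the starting values as the base)."""
    pct_q = (q1 - q0) / q0
    pct_p = (p1 - p0) / p0
    return abs(pct_q / pct_p)

# Hypothetical: dropping the price from $14.99 to $9.99 lifts unit sales 26%.
e = price_elasticity(14.99, 9.99, 100_000, 126_000)
print(round(e, 2))  # ~0.78: less than 1, so demand in this scenario is "inelastic"
```

By contrast, an author whose sales more than kept pace with the price cut (ratio above 1) would show elastic demand, which is exactly why the characterization depends on which author, and which price points, you measure.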
Kat Goodwin:
I am under the impression that non-big 5 published ebooks are not often priced above $9.99, with many or most seemingly priced between $2.99 and $7.99.
What is the average price of the big 5 e-books? Do you have this information? I presume that you do, because without that information, you really can’t argue that “most are under 9.99 price” without understanding the pricing of big 5 versus everyone else.
You keep mentioning that the average price of e-books is already below 9.99, but I think that this number does not mean what you think it means because it includes an awful lot of ebooks that are given away, or sold for less than $9.99 outside of the traditionally published channels.
dh,
I’m just a reader, but even I can tell that most ebooks are below 9.99. Go take a look at Scalzi’s books on Amazon. I scrolled through about 10 pages of his books. You know how many of his ebooks were priced above 9.99? A grand total of 3. All 3 ebooks were new releases, not yet available in paperback. In fact, most of his ebooks were priced well below 9.99. Other than a boxed set, his new releases were priced below 14.99.
I would say that Scalzi is a good example of what Kat has been saying. I can’t give you specific numbers, but it’s pretty easy to investigate. Oh, and my experience as an actual shopper tells me that Kat is correct.
Oops, in my discussion of the income elasticity of demand, I should have said that a 10% increase in my income would yield a different change in my demand for books at my current income level as compared to the change one would see if my base was Donald Trump’s income level. Been away from academe too long…
Mr. Troy:
Like, say, that they are bestsellers, just out. And also, umpteenth time, most of the e-books sold by the publishers are well below $9.99 and the price of bestselling e-books drops over time. So Amazon’s concern in this piece is ONLY with the bestselling, hardcover produced books where the e-book might be above $9.99 — the specialized titles. They think that they should be less in price. And at the same time, they say that they understand why they would be more. In other words, bullshit fluff.
What they are trying to imply is that publishers are pricing all the e-books at $14.99, which is a lie. And Amazon thinks that all the e-book prices should come down to $9.99 or lower, except for the specialized first run bestsellers. In other words, Amazon thinks the e-book market prices should be exactly what they already are.
Essentially, Amazon is banking on a lot of authors being very bad at business and math here, and having very little awareness of the overall market. Stephen King and James Patterson are not bad at business or math, and they aren’t at all happy that Amazon is trying to muck with their business (which includes many more sellers than just Amazon,) in order to gain more control over the marketplace and get more co-opt money from publishers.
You mean like when somebody claims that nearly nobody is arguing about two price points so why am I falsely implying that there are more of them, and I have to repeat myself that there are actually quite a few people arguing just that, and then that person contradicts himself to agree that there are a lot of people doing that and this time instead I should just ignore people doing that? Seriously, dude, read your own posts.
My point was that in pricing systems, computer games and books are parallel. There are games that are cheap, but the bestselling games go for $50 or more when they first come out because lots of people want them right away and are willing to pay for that service. Likewise, there are books that are cheap, but the bestselling books go for more when they first come out because there are lots of people willing to pay for that service.
I don’t consider artists cadging money off their fans to be a bad thing. It’s just a reality that while readers will pay for the hardcover and initial e-books because they value getting the work right away, they don’t have much interest in paying the author directly to hear the author speak. They care much less about fiction authors as personalities than they do about the actual books. If an author is a really big name and speaking at a convention or for a charity, readers may pay to listen. But, the convention, while it pays actors for showing up — they very seldom pay any authors for showing up and speaking. And the author speaking to just make cash for the author — mostly that’s only going to work if you are a non-fiction writing expert on the speakers and seminar circuit, speaking to an audience that are not readers but usually business folk. That’s part of your overall business.
But the business of fiction authors mainly is fiction, and the books are mainly the only thing readers want from authors. So the suggestion that authors give up huge reams of their income and try to find other sources when those sources don’t work, well, see what Scalzi said. There are authors who are trying to do some innovative stuff with Kickstarter and such, but that’s just to fund the project so the author doesn’t have to do it on spec (similar to a book advance.) The project still is going to be priced and sold to maximize sales. Because people value artwork from different people at different prices.
Andrew:
That’s not at all what actually happened, nor is happening. What publishers wanted was price flexibility and more vendors for e-books, rather than Amazon controlling the market and prices, and since e-books are a tech product, to use the agency pricing system used throughout the tech industry — including by Amazon with other suppliers. For the zillionth time, most e-books from publishers are priced lower than $9.99. Bestsellers can be priced higher because many readers value them enough to pay it.
And AGAIN, there is no guarantee that if you price a bestselling e-book at $9.99 that you will sell more books. You may sell more books, depending on the title, but it’s not automatic money, and even if you do sell more books at the lower price, there’s no reason you won’t sell them if you drop to $9.99 after selling at $14.99. (This is how hardcovers and paperbacks work.) The reality is — and Amazon knows this and is lying about it — that publishers sell and are far more likely to sell more books in total by pricing bestselling e-books at $14.99, then $9.99, then $6.49, than they would selling the bestselling e-books only at $9.99. So if selling more units is the goal, why would publishers and authors want to sell less units than they are selling now?
That is completely and utterly untrue for print, e-books and audio books. First off, prices may be dropped when a new printing is done, and often are. Second, prices drop when the book switches print formats — hardcover to paperback. Third, e-book prices are dropped all the time. Fourth, there are these things called stickers that you can put on books to create a new price, and sales, where a bookseller may say they’ll give you a fifth paperback free if you buy four, or you can buy two e-books together and get a discounted price. Amazon does them all the time. They built their business by doing deep discounts — which publishers partly paid for.
1) Well first off, because responding to an obvious PR fluff from your negotiating partner is not necessarily the right media strategy. 2) Because Amazon never releases any actual data of its sales. It can say the ratio is whatever it wants without any data, much less a guarantee which it can’t give. 3) Because as already mentioned, Hachette can’t discuss various terms being negotiated with Amazon in the media — terms that have nothing to do with e-book prices. That’s how Amazon got around it — they are talking about e-book prices instead of what they are actually negotiating with Hachette about. 4) Saying no you can’t sell a lot of e-books at $9.99 is silly. Hachette is not falling for arguing against the nonsense salad Amazon is making.
Amazon is arguing that a publisher can make more sales selling a bestselling, first run e-book only at $9.99 than it can by selling that e-book at $14.99, then $9.99, then $6.49. Which is highly unlikely and they can’t prove it in the least. But by pretending that the publisher is only selling the bestselling e-books at $14.99 forever, Amazon is making an economically pointless argument. They set up a pretend market and then argue that the pretend market needs to change — into the market that we already have.
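To make the windowed-pricing argument concrete, the sketch below compares a hypothetical three-tier release against a flat $9.99 price, assuming (generously for the flat price) that it reaches the same total number of readers. All unit counts are invented purely to illustrate the mechanism.

```python
# Hypothetical windowed release: sell to eager readers at $14.99 first,
# then to more price-sensitive readers at lower tiers as the title ages.
# All (price, units) pairs are invented for illustration.
windows = [(14.99, 60_000), (9.99, 50_000), (6.49, 40_000)]
windowed_units = sum(u for _, u in windows)
windowed_revenue = sum(p * u for p, u in windows)

# Flat $9.99 from day one; assume it captures the same 150,000 readers at most.
flat_revenue = 9.99 * windowed_units

print(windowed_units, round(windowed_revenue), round(flat_revenue))
```

Under these made-up numbers the windowed release grosses $1,658,500 against $1,498,500 for the flat price with identical total readership, which is the mechanism the comment describes: the early tier captures the readers willing to pay more, and the later tiers still pick up everyone else.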
Walter Daniels:
Actually no, most authors will not sell at $9.99 — that’s too high. Which is why most authors’ e-books already sell for less than $9.99. (Seriously, did you read anything previously in this thread at all?) It’s only the bestsellers put out in hardcover that sell for $14.99. And as you note, they have fans who will pay that price when the e-book first comes out. Which maximizes their royalties. So why do you want James Patterson to have to lose royalties? Joe Schmoe — his e-book is already selling for only $6.49 because that’s his value, and he’s maximizing at that price.
Jane Schmoe self-publisher — she’s got even less clout in the marketplace, so she sells her e-book for $2.99 — because that’s all Amazon will let her sell it for without taking an arm and a leg for just letting her sell in their market stall. And Amazon keeps her from selling it for less anywhere else with the contract she agreed to, even though they can change the terms of the contract whenever they please. But even with Amazon taking more of her sales than they are entitled to for the service they offer (which is not the same and is much more limited than a publisher business partner,) Jane does well, so it works for her business. Her low price attracts some readers who are willing to take a chance on her because the book is cheap (just as they do with Joe.) And they spread word of mouth, so lots more come and buy her book and she is also able to sell it on more vendors than just Amazon.
Now Jane’s a bestselling self-pub author. People value her work more, they’re willing to pay more for it. So she raises her price to $5.49 for her newest book (and also puts out a trade paperback version priced at $11.49.) That means Amazon gets an even bigger cut — even though they aren’t offering her any more services — but because lots of people are willing to pay it, she maximizes her profits over her costs. (Amazon does not pay her a royalty, Amazon just takes a cut — a cost. Amazon is not her publisher. We’ve been over this before.)
Later on, Jane drops the e-book price from $5.49 back to $2.99 while pricing her next new title at $5.99. Joe Schmoe’s $6.49 e-book is dropped to $3.99 because it’s older, and his newest title (a mass market paperback in print,) his publisher sets at $7.77 because the previous title did well and Joe’s a bigger name now. Meanwhile the bestseller e-book drops from $14.99 to $9.49 and then $6.49 as the paperbacks come out. When the bestseller’s new title comes out, it’s at $14.99 or possibly $12.99 if they do a special deal.
E-book prices go up and down, as do print book prices for what the market will bear. That’s business economics. And I am done, folks. Anyone else pops up to say that the meanie publishers want to sell all e-books for $14.99, I will consider them clueless trolls and let Scalzi mallet or kitten them as he chooses.
To add to what Kat just said,
$14.99 isn’t even the standard price of an ebook of a hardcover. It is the high end of the usual prices. Look through Amazon. Ebooks of hardcovers are priced at about half the cover price of the hardcover. For example, I just looked up Lock In. Hardcover list price $24.95. Ebook list price $11.99. That is a $13 difference, or 52%.
Honestly, and yes for fuck’s sake, if you really think that 52% less than the hardcover list price is too onerous to pay for John’s new book, then, as so many others and John himself have been trying to tell you, you don’t have to buy it.
This is fairly standard pricing for all major publishers. Around half the list price of the hardcover for the initial ebook release. Then the price goes down, as the book is released in cheaper print formats.
Oops, I missed DH. Okay, one more.
Well, we weren’t talking about just e-books. Many, many small presses don’t do e-books — they can’t afford to do them. They do print, and they usually do hardcover and trade paperback, because they can’t do wholesale mass market distribution but instead do smaller, regional distribution and online distribution. These hc and tp are priced at hardcover and trade paperback prices that are often not as deeply discounted as hardcover and trade paper from the Big 5. The smaller presses can’t necessarily afford to do say a 30% price discount on their titles like the Big 5 can do, so they usually sell for the retail price.
If the small press does do e-books, they usually sell the e-books for a lower price because their authors are less well known, and since the production costs are less past the initial set up, they can try the lower price. If an author does well, they can then raise it. But a lower price of $2.99 is less common for a small press (as opposed to a self-pub.) The small press’ terms with Amazon are not the same as Amazon does with self-published authors, and are more akin to what it does with larger publishers. Amazon may demand more co-op money from small presses than they can afford, and bigger cuts to Amazon are harder for them to do. Amazon negotiates with small presses over both print and e-book titles if they have them, so that was the factor I was addressing.
The price of the e-book depends on what the title is and what they are doing with their print publishing when. Half or more of the Big 5’s titles are mass market paperbacks sold at retail $8.99 or $9.99 and often sold at further discounts. Now, if it’s a bestseller, and still selling a lot in hardcover while the mass market paperback is out, then they might price the e-book at slightly higher than the mass market paperback retail price. But most of the paperbacks aren’t bestsellers and half of them are paperback originals with no hardcover. So they price the e-books below the mass market paperback retail price usually, unless the whole series is being pumped up with a new release and it’s a mmp bestseller. If it’s trade paper, they will usually price it below the trade paper price, which most of the time also ends up below $9.99.
Which, as Jennifer showed you by doing your work for you, is information readily available on Amazon itself. In fact, Scalzi himself, earlier in the thread, pointed out the level of his e-book prices. So I can get Lone Survivor, a bestselling non-fiction memoir that was made into a movie, for $7.54 as an e-book. I can get Brent Weeks’ bestselling The Way of Shadows for $6.59. N.K. Jemisin’s bestselling The Killing Moon for $9.42 and her 100 Thousand Kingdoms for $6.59. I can get Laura Resnick’s Esther Diamond books, which are rapidly growing in sales, for $9.75. The Night Eternal by del Toro — $7.45, etc., etc. It’s a very wide range of e-book prices and publishers and vendors do experiment, raising and lowering, as do the self-pubs. The idea that the Big 5 are selling all or most of their e-books at $14.99 is easily disproved on Amazon’s own website.
Again, Amazon is focusing on just the big bestselling authors to make a simplified and false price argument that has almost nothing to do with their actual contract dispute with Hachette. Which is not unusual in a contract dispute. But seriously, even for that sort of thing, this particular volley of Amazon’s is very lame. All this energy you’re putting into trying to make it somehow, in an alternate universe, plausible — why? Amazon does not care about giving you low cost books. They’d just like their e-book monopoly back. And I don’t blame them for that, but seriously, they are beginning to sound like the Queen of Hearts in Alice in Wonderland. Or the Mad Hatter.
I’ll give you a last, more anecdotal one. I was given a Paperwhite for a gift that my daughter could also use. Since I don’t want a Paperwhite for my e-reader (I want a tablet and a very particular kind, my husband uses an iPad and buys titles from Kindle,) I didn’t mind my daughter absconding with it. My daughter still likes a lot of YA authors and the bulk of the YA market is actually trade paperback, with a large number of hardcovers and some mass market paperbacks. That’s expensive. So my daughter was thrilled to be able to con me into letting her get a bunch more YA titles in e-books — from the Big 5 — because they were way cheaper than the print. I would not have allowed her to buy those books if they were $9.99 or more. (If they are that much, might as well just get the trade paper or even hc, YA hc often being lower than adult hc. They are sturdier and more permanent and portable, although only one at a time, whereas the Paperwhite gives her a library, but for me, the price point had to be cheap.) Because everyone makes their book purchase choices differently with different values. My husband, he just bought the new John Sandford e-book, for slightly less than the hardcover — felt it was a good value because he wanted it. But that’s again the big bestseller new release.
What are those greyed out prices next to the sales price?
Luttrell’s book, $9.00
Weeks, $7.99
Jemisin, $14.99 for The Killing Moon
Resnick’s Diamond Books $7.99
Del Toro $9.99
Those same books are usually more expensive on BN.com, where NOOK books are often as expensive as, or in some cases more expensive than, their paperback versions.
If publishers wanted to experiment with price, why print the price directly on the product?
Publishers ceded the digital domain to Amazon, and now they’re playing catchup. Amazon needs the competition, if anything, to keep them on their toes (à la today’s announcement regarding investing in India). But publishers are going to have to be smart and innovative about such things, and judging by how poorly they accepted digital, how they hang onto DRM, and how they haven’t really changed any of their business practices in a few decades, I’m not getting my hopes up.
HarperCollins started an e-tail site featuring their books a month ago. It’s a bit clunky, and didn’t work too well with Opera, but you can search by genre and imprint. I checked out ebooks in their Harper Voyager Division, home to Raymond E Feist, Kim Harrison, Robin Hobb…
Blood of Dragons. $14.99
Queen of Dark Things $13.99
Magician’s End and Witch with No Name $16.99, and Feist’s book was published last year.
Some ebook prices are comparable to Amazon and BN.com, and it looks like when you buy a bundle you get a small discount, but half their ebook titles are in the $10 to $20 range.
This is what HarperCollins prices ebooks at knowing it has competition from Amazon and BN.com, amongst others. Any bets what those prices would be like without them?
The best thing for Amazon to do would be to release the data. Or allow authors to run their own A/B tests with different price points and prove to themselves what prices result in the most revenue.
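For what it’s worth, the arithmetic side of such an A/B test is trivial; the hard part is getting honest unit-sales data for each price arm. Here is a toy sketch in Python where every price point and unit count is invented purely for illustration — none of these figures come from Amazon or anywhere else:

```python
# Toy A/B price test: gross revenue per price arm.
# All prices and unit counts below are hypothetical, for illustration only.
price_points = {
    4.99: 1800,  # units sold while priced at $4.99
    7.99: 1100,  # units sold while priced at $7.99
    9.99: 700,   # units sold while priced at $9.99
}

def revenue_by_price(results):
    """Map each test price to the gross revenue it produced."""
    return {price: round(price * units, 2) for price, units in results.items()}

revenues = revenue_by_price(price_points)
best_price = max(revenues, key=revenues.get)
print(revenues)
print(f"Highest-grossing arm in this made-up data: ${best_price}")
```

In practice each price arm would need to run over a comparable time window with comparable traffic, or the comparison is meaningless — which is exactly why only a party with the sales data, like Amazon, can run the experiment properly.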
Well that sucks, wrote a post with citations as requested, but I can’t post links or edit the post, so it’s all lost.
See: JA Konrath, Techdirt, Barry Eisler, Smashwords. I and other commenters might not be in the publishing industry so our opinions are subject to being ignored, but would you mind commenting on the opinions of people actually in the publishing industry? Their summary: Amazon might not be perfect, but they’ve done more for non-best-selling authors than anyone else. If best-selling authors are showing support for the legacy publishing industry over Amazon, then maybe their interests don’t align with the interests of non-best-selling authors.
Re entitlement, I’m not forcing anyone to do anything, and I’m under no illusion that there’s a measurable impact on your life if I like you or not. But there’s a growing number of authors and other entertainers around that do things the way I like (of their own volition – that’s how I learned about the practices),
Keep in mind that pricing high and then lowering the price over time might harm your numbers as well… say you release at $12.99, then drop to $9.99 after 6 months, then $6.99 after two years, and eventually down to $3.99 for backlist… sure, if I am that eager for the book and I really like what you do then I’ll put down the $12.99… but if not, why would I buy at $9.99 either? If I’m willing to wait (and there’s a lot of good books out there now, I’m probably a couple of years behind in my to-be-read pile by now, let alone good indie authors who release at lower prices, and bundle books, and regular sales by etailers…) Now, you’re the one in the industry and you have better view of the numbers than I do, but do you actually get good sales numbers when you drop the price? How do they compare overall to releasing at a lower price from the start? I know you’ve gotta eat, and an experiment gone bad could mess with that, but you still could be cannibalising from your sales numbers by pricing high and working your way down.
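To make the cannibalisation question concrete, here is a toy calculation using the price tiers from the comment above. All of the unit counts are invented, since (as noted) nobody outside the publisher has the real numbers:

```python
# Hypothetical comparison: a tiered release whose price drops over time
# vs. a flat lower launch price over the same window.
# Every unit count here is invented purely for illustration.
tiered = [
    (12.99, 2000),  # launch price, first six months
    (9.99, 1500),   # after the first drop
    (6.99, 1200),   # years two to three
    (3.99, 2500),   # backlist price
]
flat = [(6.99, 9000)]  # one price for the whole window (units also invented)

def gross(schedule):
    """Total gross revenue for a list of (price, units_sold) periods."""
    return round(sum(price * units for price, units in schedule), 2)

print("tiered:", gross(tiered))
print("flat:  ", gross(flat))
```

With these made-up numbers the flat price happens to gross more, but flip the unit counts and the tiered schedule wins — which is why the question can only be settled with real sales data, not by the arithmetic itself.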
Ian:
The greyed out prices are the retail prices for print paperbacks and Amazon is showing you how much you save on those paperback prices buying from them. The $14.99 price for Jemisin is the retail cost of the trade paperback but you can get that trade paperback apparently for $6. Del Toro’s mass market paperback is $9.99, but you can get the mass market paperback from Amazon at $6.58. How much of a price cut on paperbacks you get depends on who Amazon bought the stock from and if they are trying to dump inventory, etc. If they are trying to dump paperback inventory that outlasted the return period from their warehouses, they may give you the remainder or close to remainder price.
No, Barnes & Noble and Borders ceded the digital domain to Amazon, because they are all booksellers. The publishers supplied all of them with books and e-books. Other companies such as Apple and Kobo did not cede the digital domain to Amazon and chipped away at Amazon’s market, which is why Amazon went from having 90% of the e-book market in 2008 to 55% now. But since Amazon has been effective in trying to control price to stomp out book-selling competitors, and since it uses its still considerable leverage to extract a bigger cut of sales by doing things like freezing publisher book sales in negotiations, some publishers are renewing and expanding direct mail sales but with e-books as an experiment, just as Amazon is experimenting — for the third time in its history — with publishing books with their own imprint, as well as selling them. Since the e-book market itself grew for everybody, publishers and booksellers, saying that everybody is playing catch-up now doesn’t make a lot of sense. Publishers were playing catch-up to build tech and accounting infrastructure to meet Amazon’s supply needs back in 2008; they caught up already.
And Kindle titles have Amazon’s Kindle DRM — they insisted the publishers do it. So they cling frankly harder than publishers to it.
Best for whom? Not Amazon. They’re waging a PR campaign, not doing science.
@Kat Goodwin: “And Kindle titles have Amazon’s Kindle DRM — they insisted the publishers do it. So they cling frankly harder than publishers to it.”
I was going to call this out as something that came from the publishers rather than Amazon, but it turns out that’s not the case. (boingboing dot net/2013/10/11/amazon-requires-publishers-to.html) Turns out it is actually Amazon pushing DRM the hardest. So yeah, definitely take what Amazon says with a grain of salt, but I maintain my position that Amazon seems to deal more honestly (and is more likely to deal) with midlist and starting authors than the legacy players.
@Kat Goodwin
Hi Kat, is this mistaken, or perhaps just true during the early days of the Kindle (or maybe I just interpreted what you wrote incorrectly)? Because there are books that are DRM-free, at what appears to be the publisher’s choice, in the Kindle store (as JS noted, Baen and Tor both eschew DRM, and this seems to extend to Amazon’s Kindle store).
From an outsider’s perspective, it would appear DRM is good for Amazon given their dominant ereader position, but strategically bad for publishers (and by extension, authors), as it facilitates ecosystem lock-in. I am perplexed as to why some publishers choose to enable it.
Article on the front page of the New York Times today on this, where they raise a very valid point: why are prices even part of the contract at all? The only reason I can think of is to protect the brick and mortar business.
It’s interesting browsing Steam’s top sellers and seeing how different it is with video games
#1 Popular Indie franchise for $39.99
#2 Early Access Indie game for $29.99
#3 Two year old big budget franchise discounted to $14.99 from its original $59.99
#4 Two month old Indie game discounted from $19.99 to $13.99
#5 Freemium Indie MMORPG for $49.99
The #7 spot is the entire 5 game Red Faction franchise, which is 3 years old, for $5.99
@Unholyguy –
While there are some similarities between video games and books, I think the change over time of prices is probably not quite as similar. I think video games, as a general rule, have much more time value decay than most fiction books. Obviously there are exceptions in both camps.
(amusing anecdote – I was at a used bookstore a few weeks ago, and they had a 1988 SAT prep guide for $10… that must be some hardcore nostalgia, to want to buy an old SAT prep guide for $10! Obviously _that_ type of book should have price decayed down to $0)
Yes, I am sure that video games decay faster, mostly due to increases in compute power that make old graphics obsolete. However, some things worth noting probably apply to books:
1: a wide range of price points even for new releases argues against one-size-fits-all pricing
2: most prices start lower than what used to be the norm ($49.99 or $59.99). There is the odd $149.99 title in the top ten, but it’s a collector’s edition kind of thing
3: a market that is well informed and technically literate adjusts prices frequently to maximize sales
4: Steam doesn’t have a lot of say about pricing, the publisher sets the price and things work fine
5: PC game brick and mortar sales are essentially dead which makes all this possible
@docrocketscience: … you might have a point if those were actually “comparable digital media.”
Here’s what buyers know about “comparable”: All digital media are just electrons and there is no marginal cost to publication. And we don’t own digital media. If we did, we could loan and resell it.
Everybody thinks their baby is a unique creation. And it is. Unfortunately buyers/users decide what’s comparable, not the authors/creators. Among the forms of digital media … games, apps, tunes, shows, books … only shows can justify a bigger ticket, due to innate technical costs.
@Unholyguy,
If you publish through Amazon you can track your day to day sales. I believe most of the other online stores/distributors have similar tools.
@ Kat,
“The greyed out prices are the retail prices for print paperbacks and Amazon is showing you how much you save on those paperback prices buying from them.”
OK, so if Amazon wasn’t able to discount as much, which is apparently one of the things Hachette is aiming for, it’s possible I could be paying more for that ebook, full retail price? And the constant price changes are not directed by the Publisher of the book, but are Amazon using its own information and algorithms to try and sell it, yes?
“No, Barnes & Noble and Borders ceded the digital domain to Amazon, because they are all booksellers. The publishers supplied all of them with books and e-books.”
Barnes and Noble didn’t come up with the idea to release the ebook version six months after the hardcopy version came out, did they? It wasn’t a Borders exec who said “For every print book we lose to an e-book, we lose money.”
“We hoped that a handsome object would slow the migration to e-book for [Stephen] King.” wasn’t said by someone at Barnes and Noble.
And Barnes and Noble started selling online in 1997, only three years after Amazon did. IIRC, they tried an even earlier online venture, Trintex, in the late 80s. Open Ebook was available in the early 2000s. The opportunity was there, and publishing (publishers and sellers) muffed it. And their one response to try and combat it was to get caught colluding on prices, using a pricing model that hurt authors.
I’m still interested in what you think of the price listings on HarperCollins. And I’ve looked, but either my search skills are failing (which is quite possible), or I’m looking in the wrong places, but where was this level of outrage last year when Barnes and Noble was doing the same thing to Simon and Schuster? A brief google search shows some good news reports, but nowhere near the number that this current dispute has engendered, and to be honest, nowhere near the level of vitriol either.
In fact, a brief review of the SFWA website does not mention the Barnes and Noble vs. Simon and Schuster fight at all. Lots of posts wrt the Malzberg/Resnick blowup happening at the same time… then again maybe Stephen King wasn’t as motivated as Douglas Preston appears to be. :-)
Thanks for the earlier reply!
Sorry Mr. Troy, I crossposted one of your posts there.
Authors aren’t showing support for publishers (please stop calling them legacy or traditional publishers, they are just publishers and self-pub is the older form of business.) They are pissed that Amazon dragged them into business negotiations by cutting off their sales and mucked with their livelihoods. And again, what Amazon is negotiating with Hachette about is not e-book prices. It’s business expenses that cut into revenues.
One of the big issues on this is that Amazon and Smashwords and such pretend that the monies that they give self-published authors are “royalties.” And then there will be all these fake math comparisons between publishers and Amazon, as if Amazon was a publisher which had been given a license to production rights, was exploiting those rights and paying the author a royalty for those rights. Amazon isn’t the publisher. The self-pub authors are the publishers. Amazon just lets them sell directly in their marketplace — a book-selling broker. When Amazon takes fees and the agreed upon amount of the sales as a fee, they are a business expense. What Amazon gives to the self-pub authors is net sales revenue the author earned selling their book, not Amazon, and not a royalty. The two business situations are entirely different. But the pretense is that it’s the same because it sounds better. That doesn’t mean that self-pub may not be a good, money-making choice for many authors. It does mean that they sell it on comparisons dishonestly, which is something that has come up before.
Amazon has been good for all authors, best-selling and non, self-pub and partner pub, for a few reasons: A) Amazon helped replace part of the wholesale market, which shrank in the 1990’s, and really helped open up online book-selling; B) Amazon has increased international publishing by expanding into numerous countries, allowing more international authors to hit the big English markets, English authors to hit new markets and transnational publishers to do multi-country launches more easily; C) by launching the Kindle, Amazon juiced the small e-book retail industry into a much larger, fast-growing market, which helped replace mass market wholesale sales, etc. Amazon has been bad for all authors because it insists on deep discounts to near costs, resulting in losses it can handle with its system but publishers and authors can’t. Amazon has been bad for all authors because it ruthlessly squashes competitors with predatory pricing and other tactics, reducing marketplace competition that would benefit authors, particularly mid-list authors. Amazon has been bad for non-bestsellers because its co-op demands cut into the money that would otherwise fund publicity and marketing efforts for mid-list writers and gambles on new authors. Amazon is good in some areas and bad in others, like any other company. It’s not simple, it’s complex.
Overall, the wholesale model of selling, which Amazon uses, benefits best-sellers more than mid-list authors. The independent bookstores that hand-sell and specialty stores therein like SF or mystery bookstores — they are the big helpers of mid-list titles, which is why mid-list authors have had it hard for awhile. Mid-list authors published in mass market paperback or even trade paperback do not have $14.99 e-books. They may not even have $9.99 priced e-books. So saying that this issue of the $14.99 price point is terribly important to mid-list authors is simply not true.
Amazon has been very good to self-pub authors, offering them a fast, international, low cost distribution ability. It was good to them for its own purposes, giving them stuff for free it makes publishers pay for, because then those self-pub authors and all their buddies would buy Kindles, and would be loyal customers who buy other stuff from Amazon that brings in more money. And there’s nothing wrong with that. It’s been great for some authors and at least fun if not profitable for thousands of others. But the downside is that the self-pub e-book market is almost entirely dependent on Amazon. All e-book sales are slowing in part because there aren’t enough vendors to help the market grow. And Amazon is now starting to dump freebies in favor of raised fees — increased business expenses. Additionally, Amazon totally controls the prices of self-pub e-books by linking how big of an expense they are to what the author sells for, pressuring authors to sell for less, and ensuring that the author can’t allow any other vendor to undersell Amazon (although Amazon can undersell other vendors.) That’s okay for now; later it may be problematic. But it has nothing to do with the business issues involved in pricing e-books for best-selling titles from publishers. And it has nothing to do with Amazon’s negotiations with Hachette.
Well, you see, we have this system in print called hardcover and paperback? And it’s been working pretty darn well, yeah, for a long time. And again, the area of e-books that has sold best? First run bestselling fiction titles that have the highest prices. Because price is not the number one issue with books, especially fiction, and time is an issue for some buyers and not for others. If business people want a business non-fiction title that they feel has time-sensitive info or illuminates hot new trendy business jargon — they’ll buy it at the high initial price. If a person really likes a fiction author, like my husband with the new Sandford book, they may want to read it right away and buy the high price and feel the author is worth it. Or they’ll wait, if they don’t need to have it right away, and buy it when it’s cheaper. Not necessarily the cheapest price, but cheaper. Or they’ll read it at the library for free, then get it in a nice hardcover to re-read if they like it. Or any way of a number of ways that works for them.
Do you know how Amazon got the $9.99 idea, that it attached to only a handful of authors, mainly hardcover bestsellers? Because it’s the cost of a 10 song album on iTunes. In the very early days of the Kindle boom, people demanded that 500 page e-books be .99 just like a 3 minute song on iTunes or they’d go pirate, and Amazon picked that up in part because that’s what they thought Apple would try to get e-books for. And so now the screaming is about $9.99 being what should be paid — a higher price point. How much people value things changes with the market. And if Amazon cared that much about how much the authors were getting from e-books, they would A) stop cutting into revenues with extra fees so that the publishers would be able to give authors a bigger unit royalty (which is happening anyway as publishers and authors negotiate,) and B) take a 30% unit fee from all of their self-pub authors, not just the ones with the cheapest prices (no tiers for the same service.) But everybody has different business factors and have to work out the expenses and the payment plans.
Mr. Konrath prices e-books at $3.69. Why doesn’t he price his e-books at the lower $1.99? Wouldn’t he sell more if he did that right off the bat (and Amazon’s fees would be lower)? He has trade paperbacks at around $11. Why have them when the e-books are cheaper, and if have them, why aren’t they cheaper than $11? Hugh prices his e-books now around the $5 range. Why not .99 cents — wouldn’t he make it up in volume? Why did he team up with a Big 5 publisher and other publishers in other countries to have print editions of his work which are more expensive than the e-books? Why, when you can get a hardcover for a 30% discount at Barnes & Noble, or a 40% discount from Amazon, do many people buy it at a local indie bookstore for retail price or only a 10% discount? Why ever buy a store version of a book when you can get them cheap from used booksellers?
Many reasons — timing, nearness, willingness to pay that price because that particular author is worth it to them. Casual readers don’t choose between games and books — they don’t read books mostly, no matter what the price. But when they do get interested in a book, it’s usually a big name one and they’ll usually pay the high price for it. So nobody is the same, and expenses change, and prices have to be flexible, rising, falling, and rising as market factors and different audiences dictate. Self-pub authors and publishers have to make business decisions and figure price points against their costs — including the fees from vendors like Amazon. And no one, including Amazon, can actually predict the volume of sales an individual title will have at any price point, or in dropping or raising the price. They can’t predict what authors will become the new bestsellers. Everybody makes guesses; nobody knows for sure.
I would disagree. Amazon is not particularly honest with anybody (although they keep it fairly legal,) and they are way more ruthless than any publisher I’ve ever encountered. They aren’t in the same business as publishers (except for their small imprint.) They’re booksellers. And Amazon doesn’t care at all about mid-list and starting authors selling in its market. It does not invest in the self-pubs either; it just sells them, sink or swim. It mainly cares about the bestsellers, because those are the ones that mainly bring in their bacon and make the good PR. Which is why they keep whining about the price of e-books of best-selling hardcovers.
In contrast, publishers care a great deal about their own mid-list and starter authors. They invested in those books, laying down hard money, and they want to recoup their costs. They want to grow as many of the mid-listers as they can into bestsellers, and they need new books to be the continuing lifeblood, but they need the funding to gamble on them. That doesn’t mean that they treat even their bestselling authors well. It depends on the title and what’s happening with that author, who the author is working with, outside market factors. In a recession, mid-list authors get squeezed for the safer bet of bestsellers and fewer new authors are spent money on.
But self-publishing has the same tiers — bestsellers, midlist and newbies. And the midlist self-pub authors can get just as squeezed because they don’t yet have enough word of mouth and name recognition. And they don’t have anyone investing in them to back them up either. Whether they publish alone and just as e-books, dealing with Amazon’s monopoly, or they partner with a publisher for license rights, authors are running a business. A creative business, the valuation of their goods being entirely different for each customer.
Frankly, what Amazon needs to do is make a better argument for why publishers, their authors and self-pub authors should keep giving Amazon more money. Are they getting more services? Can Amazon open some further sales channels or more and better publicity? Why is making Amazon the only seller in their best interests? But Amazon may not be able to make this argument because of non-disclosure rules about their negotiations with Hachette, or because they don’t want to.
This is really long. I think I’ll do JTC’s question separate.
Oops, sorry, it did the italics quote all over thing again. Maybe Scalzi will fix it for me if he gets the chance.
JTC —
No, Amazon did Kindle DRM from the beginning and has continued to want to have it, but since they’ve been working to get their app on more devices, they are a bit less zealous about it. When the boom occurred, publishers did not have the infrastructure in place to handle the giant demand. They were happy to go along with Amazon helping them set up Kindle DRM version of the e-books. But over time, they got the infrastructure and staff and wanted to have e-books in more places.
The problem was that there were like twenty different formats, used by the e-book industry before the Kindle and by Sony for their pre-Kindle reader and then by companies now entering the e-book market in the wake of the Kindle. And each of those formats had to be prepared separately and proofed 4-6 times (which they pay people to do.) So they locked everything in DRM so they’d have more control over the formats and the e-book market until it got worked out which would be the main, fewer formats and how everything would operate. This coincided with the smartphone explosion and the iPad/tablet explosion, and there was a lot of battling over which systems, like Android, were going to be the top dogs there, which affected e-book format choices and possible vendors. So they sort of had to wait to figure it out and publishers are cautious. Publishers weren’t going to be moving as fast as tech companies do because they aren’t tech companies.
But Baen likes to do open tech and Macmillan decided that the market was sufficiently standardized, with the main formats settled, to take off DRM. And they negotiated with Amazon how that would work for the Kindle. So Amazon is not un-open to it and the publishers aren’t either, but the tech world keeps mutating rapidly in terms of devices and connected systems, etc., so it’s an on-going thing. The Kindle DRM lets Amazon wipe files off your devices if they need/want to, but I understand that it is fairly easy to strip off, if you want to do that. However, most people just buy the Kindle e-books and don’t bother. Eventually, I think most of it will be removed, but the boom e-book market is only seven years old.
Ian:
The greyed out discount is only about paperbacks, not e-books — it has nothing to do with the e-book prices. And that print discount is not, as far as I know, one of the things that they are negotiating. Amazon sells like a wholesaler for print, rather than a retail bookseller. As a wholesaler, they can do deep price discounts on print books because they buy them from the publisher at a very low price. That does cut down on revenue per book, but the trade off is that Amazon can sell hopefully a high enough amount of volume to make it worth it. It’s not guaranteed, though, and it is something that publishers have to deal with in making business decisions. Warehouse stores like Costco also sell/buy for a very deep discount. The chains have been able to negotiate a bigger price discount for print than indie bookstores, so they can do 30% instead of 10% discount on the retail price. Returns also factor into print sales calculations. It’s a very different system than selling e-books.
It wasn’t an idea so much as that the publishers didn’t have the staff to make the e-book in the multiple formats in time with the hardcover in the early days. And they weren’t sure that it was a good idea until they figured out the market. But now, I can buy Lock In, which just came out, in hardcover or e-book. They really didn’t know what the sales patterns with e-books were going to be. So a lot of people said a lot of dumb things in the early days. Such as that print books should already be gone by now. :)
Nonetheless, the publishers supply the booksellers like Amazon. They primarily aren’t in competition with Amazon — Amazon is selling their products. Barnes & Noble and Borders were in competition. But Amazon had the tech, they did not, and they were very cautious. And so they ceded the digital market to Amazon, giving Amazon a monopoly for a few years. Apple had the tech and they broke Amazon’s monopoly, along with other companies. But Apple isn’t a bookselling company, they’re not that invested. Borders died off thanks to mismanagement and over expansion. Barnes & Noble floundered around with some of the same problems of over expansion as Borders, and now are selling off the Nook arm to Microsoft, which is not good news for Amazon. (On the other hand, Microsoft is not being brilliant lately.) Some publishers are doing direct selling (which they also do with print,) but it’s a small operation. And it wouldn’t have happened at all if the market had been opened to more vendors. So we’ll see where it all goes. But Amazon does not own the plain, just large pieces of it.
I would expect e-book prices on the publishers’ sites to be higher than on Amazon because of the initial infrastructure costs of setting up the operation. But after that, they will probably decrease. They aren’t going to try to out-price Amazon. They’re simply trying to develop more venues.
They didn’t do the same thing. Amazon killed the buy button on titles they had in stock and pre-orders, which is a very quick leverage tactic. The pre-orders particularly mess up the BookScan stats, which other booksellers base their ordering on, so Amazon was messing up the authors’ opportunity to get orders all over the place. (This is what pissed the bestselling authors off, and not just for themselves — that pre-order issue hit mid-list authors way harder than them.) Barnes & Noble instead just shrank their ordering amounts from Simon & Schuster. This also would affect sales and ordering stats — it’s just a lot slower. And it didn’t entirely stop people from buying the books from Barnes & Noble. Amazon has used this tactic too; it’s hardball, legal and effective, but not as dramatic.
As for the vitriol, well, first off, back in the 1990’s when Barnes & Noble did the predatory superstore kill off the indies thing — plenty of vitriol. But now it’s a struggling company and the business media don’t care. Amazon is still the hot mama and they do care. And Amazon is known for ruthless stuff. They’ve taken out small presses, other companies that sell other things or services. And because they don’t pay sales taxes in the U.S. and can undercut on prices with free shipping, they have severely damaged entire main streets of small towns in retail, leading also to lower property values and unemployment. They played a role in making Best Buy super shaky as well and they are pushing into the tech market with all sorts of devices and phones. So a lot of people hate Amazon for reasons that have nothing to do with books.
Whereas, Barnes & Noble only sells books, which most people don’t care about, and B&N did not go on a big media campaign about the negotiations. But mostly it’s just that folk are fed up with Amazon’s business tactics. And investors want profits, so Amazon is consolidating revenue, leading to more hardball tactics. But again, the negotiations with Hachette aren’t focused on e-book prices, so all the screaming about e-book prices is largely irrelevant.
Again, long, sorry. I’m actually going out of town, so more questions maybe Scalzi or others will answer or I’ll catch you on the later side. (I’m gambling that Amazon and Hachette settle their dispute before the Christmas season, but I could be wrong.)
“They didn’t do the same thing. Amazon killed the buy button on titles they had in stock and pre-orders, which is a very quick leverage tactic. ”
They did that with Macmillan in 2010. This time around they removed the pre-order button, a button that the vast majority of self-published authors do not get. Big difference there.
“The pre-orders particularly mess up the BookScan stats, which other booksellers base their ordering on, so Amazon was messing up the authors’ opportunity to get orders all over the place.”
What did booksellers do before the preorder button then?
“Barnes & Noble instead just shrank their ordering amounts from Simon & Schuster. This also would affect sales and ordering stats — it’s just a lot slower. And it didn’t stop people from buying the books from Barnes & Noble entirely.”
They shrank the order amounts to zero. Simon & Schuster went so far as to tell authors not to do any promos at a Barnes & Noble, but Howey has a story where he was promoting Wool at a B&N and they had three copies of his book for sale. And the Barnes & Noble/Simon & Schuster fight went on for 9 months.
“So a lot of people hate Amazon for reasons that have nothing to do with books.” True, but then complaining is not an effective business strategy, is it?
“Whereas, Barnes & Noble only sells books,” Um, have you been inside a Barnes and Noble lately? They started selling more than books quite a while ago.
“But again, the negotiations with Hachette aren’t focused on e-book prices, so all the screaming about e-book prices is largely irrelevant.”
Then Amazon is lying when they said “A key objective is lower e-book prices.” All the posts I’ve read the past week that say a big part of the contract dispute is about e-book pricing are all wrong? If they are, what are they arguing about?
Also, I note that you still have not answered my questions regarding ebooks on Harper Collins.
@Kat Goodwin, thanks again for replying. I think I largely agree with what you’re saying other than some semantic niggles that aren’t worth going into, but there were a couple of points that jumped out…
“Well, you see, we have this system in print called hardcover and paperback? And it’s been working pretty darn well, yeah, for a long time.”
I get the hardback, trade paperback and mass paperback tiers for book printing, and I don’t think that system’s broken. But that system works because the higher price point gets you a better product – better paper stock, a stronger cover, and yes, also an earlier release date (just because I personally don’t like the practice doesn’t negate the fact that it’s generally seen as a point of value). I just don’t see how that applies to e-book pricing, where pretty much the only point of difference is the date.
“Mr. Konrath prices e-books at $3.69. Why doesn’t he price his e-books at the lower $1.99? Wouldn’t he sell more if he did that right off the bat (and Amazon’s fees would be lower)?”
I thought Amazon charged higher fees for list prices below $2.99? According to Amazon’s KDP pricing terms, if you charge less than USD2.99 (or AUD3.99!) then you’re not eligible for the 70% “royalty rate”, at least for KDP. If he couldn’t expect to sell more than twice as many books at lower than $2.99 then he wouldn’t even make the same amount, unless he’s able to negotiate a better rate.
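To put rough numbers on that, here is an illustrative sketch only: the tier rates are as described in this thread (70% in the $2.99–$9.99 band, 35% otherwise), and KDP delivery fees and regional variations are ignored.

```python
# Illustrative sketch of per-copy author earnings under the two KDP
# royalty tiers as described in this thread. Not Amazon's actual,
# complete terms: delivery fees and regional price floors are ignored.

def per_copy_earnings(price):
    """Author's approximate per-copy take under the applicable tier."""
    rate = 0.70 if 2.99 <= price <= 9.99 else 0.35
    return price * rate

high = per_copy_earnings(3.69)  # 70% tier: about $2.58 per copy
low = per_copy_earnings(1.99)   # 35% tier: about $0.70 per copy

# How many times more copies the $1.99 price must sell just to match
# the $3.69 revenue per copy:
breakeven_multiple = high / low  # roughly 3.7x
```

Which suggests the lower price would need to move several times as many copies, not merely twice as many, before it pays off.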
MrTroy, you don’t always get a lesser physical product at a lower price with print. Wait a while on a big bestseller and you can, quite often, purchase a hardcover (i.e. the exact same book) for a price at or below TPB. More than once I’ve purchased HC at MMP prices. (I imagine that sometimes it’s easier/more cost effective to sell off the overstock in the stores and write off the loss than to collect up the overstock from multiple stores for returns.) And speaking of MMP, more than once I’ve found different editions of the same paperback, at different price points, sitting next to each other on the same shelf. Also, consider how the price changes between price points in print and ebooks: $29.99 -> $14.99 -> $7.99 vs. $11.99 -> $7.99 -> $3.99. In print, the reduction in materials quality is reflected in those steeper drops in price.
While you are correct about the KDP policy on the 70% return (I prefer to think of it as a 30% commission, which isn’t unreasonable for consignment sales; 65% is outrageous), I think Kat’s point was that even those defending Amazon aren’t following through completely on the logic of Amazon’s PR. As well they should not be. Chasing sales in a race to the bottom is seldom good business.
I love Amazon. They’re the best thing to happen to books and literature in decades. I love that they are starting to force the oligopoly of publishers to reconsider the terrible terms they present to authors in their contracts, and I love that they’ve unlocked a new distribution path that isn’t slave to the big 5. I love their imprints and how they’re changing the way big 5 authors look at publishing.
I do understand why you don’t like them, and that they threaten your ability to get large advances based on reputation and one good Heinlein-esque book, but that’s life. Soon, advances will be a thing of the past, and you might as well get used to that now because this “war” is over.
The battles will still happen from time to time, but people will never go back to print. Bookstores will never come back to the way they were in the 80’s and 90’s (or the 2000’s), and Amazon isn’t going to go anywhere. And while it’ll be sad to see you pull all your books off their website in protest (which is what I assume you’re going to do since you’ve felt the need to speak out against them so often, and to continue supporting them would be hypocritical), you have to live by your convictions.
BTW, this… “If you entertain the notion that Amazon is just 30% of the market” is laughable. Amazon is nearly 80% of the digital market and close to 60% of the print market in the US.
James, were you literally cackling, out loud, while you typed that?
James
Strangely enough, books are sold all over the world, and your apparent belief that global sales are trifling by comparison with the US market suggests that you are remarkably ignorant of the publishing industry.
Equally, the stuff about ‘nobody will go back to print’ suggests that your professed love of books hasn’t helped you much when it comes to actually grasping that people want different formats of books for different reasons; there is a considerable difference between the sort of books I acquire to go on holiday with, and the books in my library at home. Equally, there is a considerable difference between the books I acquire when I am on holiday – illustrated texts about archaeological sites are the obvious example – and the ebooks I take with me.
All in all, nobody cares whether you love Amazon or not, and nobody cares whether you are so ignorant as to believe that fiction sales in the US are what global publishing is all about. Printers are in no danger of going out of business in the foreseeable future…
Ah, Stevie… Silly boy. The US book market is by far the largest market in publishing, followed by the UK/commonwealth and Germany. Do you not know that? Well, don’t worry, I’m happy to educate you. Amazon dominates the industry in the US (which was my point), and they are a massive player in the UK and Germany, and growing larger every year thanks to a loyal customer base that is driving the sales of digital books on a global scale.
I never said printers will go out of business, and I hope Amazon’s fight to save the publishing industry ensures that they’ll be around for generations to come. I do however doubt the people running the corporate NY publishers know what they’re doing, and left on their own with no competition, they will kill the publishing industry. They were off to a very good start in the 2000’s when the chain stores had all but killed the indie stores, and publishers had switched their focus to blockbuster doorstop genre fiction and celebrity biographies. Thankfully Amazon stepped in and introduced the Kindle and saved many publishers from bankruptcy. Now they’re making a profit, and it’s all thanks to Amazon and the digital format.
While “nobody” cares whether I love Amazon or not, it’s also fair to say that nobody cares whether you or Doug Preston or Stephen King or Stephen Colbert hate Amazon or not. It doesn’t matter. People aren’t going to stop shopping there, and digital books aren’t going to go away. You might as well get used to that fact and prepare for it, because like I said before, these little skirmishes between Amazon and Big 5 publishers are nothing but noise. The war is over. The market for digital books is huge and stable and spreading and nothing anyone says is going to make any difference.
James–nice tactic in re-framing by using the phrase “Amazon’s fight to save the publishing industry” like that’s the real issue or even reality.
I see a Soviet-style poster with flags and banners and people standing proudly with one leg forward and the caption “The People Valiantly fighting the Oligarchy”.
And I apparently missed the part where everyone wants digital to go away.
As a disclaimer–I don’t have a digital reader. I love the feel and heft of physical books; I’ve never really found a reader I liked (don’t want a Kindle) and I already have too many books to get to as it is.
I figure one day I’ll get there but I’m not in any big hurry. Until something I HAVE TO READ is only available digitally.
James:
Try not to be a jerk to other people when you comment, please. You’re doing a poor job of that so far.
James
You really do have a penchant for uninformed assumptions; I’m female, I’ve lived in the City of London, which is the centre of the business and financial markets here in the UK, for over 30 years, and I buy a lot of books, both here and abroad, across a wide variety of subjects and in a wide variety of formats. It’s been a while since I last went mano a mano with a balance sheet, but during the course of my career I dealt with many thousands of businesses, starting with small shops and moving on to multinational corporations and global financial institutions.
This probably explains why I lack your misty eyed approach on this, but it is unusual for people in the UK in general to have the attitude towards Amazon that you possess; over here we tend to realise that it’s a business, and businesses are in it to make money. Amazon’s business practices are known to be somewhat sordid; its treatment of its staff in the ‘fulfilment centres’ is notoriously bad, and its abuse of other people’s trademarks is equally obnoxious. Fortunately, Lush Cosmetics had the money to take Amazon to court, and it won its case, but Amazon continues to bully smaller businesses.
And then there’s the fact that many people here don’t view Amazon as a bookseller at all because many people using Amazon don’t buy books; it sells vast numbers of things to vast numbers of people, but buying books is fairly small fry. If people are either uninterested in reading books in the first place, or read very little, then it doesn’t really matter what the price is, or what the format is; the research which has been done on this:
suggests that pricing itself is not a significant factor. What is significant in people who do buy books is that deep discounting on a small number of titles results in distrust because the general public doesn’t realise that these books are being sold at a loss, and therefore question why other books are not similarly discounted. That reduces the market rather than expanding it, and Amazon is the prime proponent of deep discounting on small numbers of titles.
Note that this is real research, carried out by qualified people with the results available to all; Amazon, by contrast, just plucks numbers out of nowhere and expects us to believe them. You are, of course, free to believe whatever you wish but you are unrealistic in imagining that everybody shares your infatuation, and even more unrealistic in believing that our host doesn’t understand his business…
Setting aside the issue of James apparently having confused the type of discourse appropriate to ‘a discussion of the business dispute between Amazon and Hachette’ with that of ‘my sports team is better than your sports team’, his numbers are not quite right.
(John, please delete this posting if my information is hopelessly out of date or wrong)
Dear folks,
With the caveat that I haven’t looked into the intricacies of the economics of mainstream dead-tree publishing in at least a decade…
The couple of people who are insisting on arguing theoretical economics with John (and you know who you are) remind me a lot of that physicists’ quip, “Assume a spherical cow of uniform density rolling along a frictionless plane.”
There are circumstances in which such an assumption can be a highly useful approximation. There are times when it does not reflect the real-world behavior of cows especially well.
Book publishing has so many non-spherical cows it isn’t funny. It’s why John simply dismissed the criticisms instead of taking them apart. Way, way too many cows needing to be dissected; where do you even begin?
I will dissect one, a really big one. Mainstream book publishing is a highly subsidized business. Considering that the number of copies sold may vary by a factor of almost 1000, there isn’t a lot of variation in price for books of comparable bulk and construction. Maybe a factor of two, which is quite significant when you’re looking at market demand, but far less than you’d expect from economies of scale.
Why is that?
It’s because a very large fraction of the books published simply don’t make money. If the typical book doesn’t sell out its first printing and/or earn out its advance, it’s a loss for everyone. The author, of course, doesn’t manage to make a living wage off of that advance, but neither does the publisher. In fact, they are likely to have lost money when you take into account the fixed overhead and initialization costs for publishing a book.
I don’t know what the fraction of those books are today. It used to be the majority of them. I bet you someone like John or Teresa Nielsen Hayden can tell us if it still is. Whatever it is, the situation is that if you’re a new author or a not-yet-successful author, you’re not making good money, and you know that! But, neither is your publisher. Where they make their money is on the minority of successful authors, who will sell out their print runs many times, and also get much bigger print runs (lowering unit production cost). Those are the cash cows for the publishers, and they are what keep publishers afloat while they’re losing money on all those didn’t-sell-out-first-printing efforts.
It’s also worth noting that a pretty decent percentage of the really successful authors — the ones who are getting seven-figure advances — also end up seeing royalties on top of those advances. Their books sell really, really well. Really big cash cows of non-spherical form.
This is not necessarily an ideal way to run a business. But it is the way it is. Believe me, most publishers would love it if they could guarantee that they only published best-sellers. That’s a nut they haven’t cracked yet. They do spend a lot of time trying to figure out how to maximize the sales on those cash cows. With varying degrees of success, to be sure, but figuring out the price point for books is one of the variables they can control to aim for that maximum.
This is why Amazon’s arguments are so problematical. First, they are removing one of the controls that publishers have for making their cash cows more profitable. Amazon might know better than the publishers; publishers are highly fallible. But that hasn’t been demonstrated, and that’s why it raises concerns. Their arguments also involve sphericizing many cows. By looking at aggregate averages, even if their numbers are correct, they aren’t ensuring that this will make a publisher more profitable. You would need a much, much more detailed analysis to even begin to estimate that, on just a theoretical basis.
If you want to know what happens if publishers can’t make a ton of money off of those cash cows, take a look at specialty presses and academic presses. Typically, their selling price for a comparable chunk of dead tree is much, much higher than what you’re paying, mainstream, for a first novel from some new novelist. That’s what the pricing looks like in a non-subsidized market. Kind of like what would happen to airline fares if the airlines weren’t counting on business and first class and last-minute travelers to bring in a lot of the money they don’t make on coach class seats (a very imprecise analogy, so don’t dig too deeply into it).
That’s the best case. The worst case is that publishers just end up publishing fewer new and low-tier authors, because they simply can’t afford to.
Or, you know, it might turn out that Amazon is right about this and it’s going to work out much better for everybody. But you can’t tell it from the numbers they’ve published, and that’s why this effort to unilaterally restructure and control book marketing is of concern to so many people.
As for the ancillary argument that the dead tree book market is indeed a dying business and eventually it will all be virtual, so who cares … It may be true. I’d even be willing to venture that it’s 90% true. But right now, it’s kind of like when someone says to Death “Am I going to die?” and Death says back “Yes. Just not today.” Unless “eventually” means within the next year or three, I’m not as much concerned about that right now. I’m figuring out how to make a living this year, and next year, and if I’m really farsighted maybe five years into the future. Eventually, we’re all dead; doesn’t mean I’m going to give up acting like I’m living. Or live as though I’m going to die tomorrow.
pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
— Ctein’s Online Gallery
— Digital Restorations
======================================
I mentioned the case brought by Lush Cosmetics against Amazon, and, for the benefit of anyone unfamiliar with the sort of tactics adopted by Amazon in their legal disputes, I draw your attention to the case report:
which follows on from Amazon’s loss of the case reported at:
The judge started by noting that:
‘I handed down judgment in this action on 10 February 2014 and am now called upon to adjudicate upon the form of order. The parties have been unable to agree upon any of the matters of substance, in particular on the form of and territorial extent of any injunctive relief, upon the extent of any Island Records v Tring disclosure (relevant to the election of an inquiry as to damages or an account of profits), upon whether and if so in what form there should be an order for publicity of the judgment as an enforcement measure, upon the appropriate basis and order as to costs, upon whether the Claimants’ costs should be reduced by reason of the fact that for a period an exclusive licence was not registered, upon the extent of any CPR 31.22 order in relation to alleged confidential information, upon whether there should be permission to appeal and upon whether there should be a stay pending any appeal.’
Amazon’s scorched-earth approach, in which it refused to agree to anything notwithstanding the fact that Lush had won, was a calculated attempt to exhaust the mental and financial resources of that company. The Judge took a distinctly dim view of it, e.g.:
‘The matters which impress me here are a) Lush has established liability by the application to the facts of what I consider to be now well established law, b) Amazon has made some changes to its website but, to my mind significantly, has done nothing to the content of its web pages to address the Google France test other than to remove some explicit references to Lush. Moreover, it has not put forward any evidence to the effect that making clear to consumers that the products it returns, in response to searches for Lush, are not Lush products is something that it would be difficult or problematic for it to do. I infer that the reason that no such evidence has been put forward is that such would be easy to do but that Amazon would prefer not to do it, would prefer its customers to try and work out the true position for themselves (and if they fail so to do, then that is the customer’s problem, not Amazon’s). Whatever be the correct inference, it cannot be the case that putting the injunction into effect pending appeal will damage Amazon in such a way that there will be no appropriate remedy if it succeeds on appeal, and c) the damage of which Lush complains in the action is of the kind which is very difficult to quantify and very difficult to repair. For these main reasons, and subject to Lush providing a suitable cross undertaking in damages, I refuse the stay of injunctive relief.’
The Judge wasn’t quite frothing at the mouth, but he was certainly not a happy bunny, and it was obvious to him that Amazon had no interest whatsoever in the customers, other than as people to be duped out of money.
I realise that not everyone shares my passion for balance sheets and Court reports, but there is a fun side to it; watching a highly competent Judge ritually disembowel an entity, which thought it was above the law, brings a smile to my lips. Actually, that’s not quite right; it’s more of a bwahaha than a smile but since nobody’s going to hear it I can always deny it!
I realise that not everyone shares my passion for balance sheets and Court reports, but there is a fun side to it; watching a highly competent Judge ritually disembowel an entity, which thought it was above the law, brings a smile to my lips.
Hm. Sort of like the Dover creationism case. And Judge Jones did it in fairly simple English, too.
@docrocketscience: “I imagine that sometimes it’s easier/more cost effective to sell off the overstock in the stores and write off the loss than to collect up the overstock from multiple stores for returns.”
@ctein: “Considering that the number of copies sold may vary by a factor of almost 1000, there isn’t a lot of variation in price for books of comparable bulk and construction.”
I know you were talking very specifically about dead-tree publishing ctein, but aren’t these both issues that can to some degree be discounted in the digital publishing realm?
Of course there are always going to be upfront costs completely separate from the author creating a manuscript; you can always identify which books scrimped on editing, and unfortunately even ebooks benefit from good quality layouts (though I guess typography can be skipped)… but not having to commit to the size of a print run for a first book, as long as both author and publisher are willing to weather digital-only or perhaps print-on-demand sales, would surely allow the risk equation to be at least tweaked?
I don’t know why I’m doing this, as I really do have to get on a plane tomorrow, but oh well:
Ian:
Well they should ask Amazon for one if they want one. Amazon is perfectly able to give them one, and with e-books, it wouldn’t be an issue of whether the author or publisher could get the stock there in time and at the right price, process returns, etc. Actually, I’m rather surprised that Amazon hasn’t done pre-order buttons for self-pub authors. Are you sure they haven’t? I suspect Hugh Howey gets one.
On the extra plus side, Amazon would never use removing the pre-order button as a tactic with the self-pubs, because Amazon already has a contract with the self-pubs that says that Amazon can do whatever it wants whenever it wants with them. To raise fees, they don’t have to negotiate; they just raise fees, which they’ve been doing.
Before online ordering, they just took pre-orders at the stores, and still do. They base ordering decisions on lots of different factors, such as the publisher’s official print order, but in more recent years, with Nielsen bringing in BookScan, which all the big players use, they have more information about sales track records, including pre-orders. And they keep hoping it will be a magic formula, with past sales predicting future sales with certainty. That doesn’t work well in fiction, or even in non-fiction. But Amazon likes to tout their algorithms anyway. But they don’t release all the info to BookScan.
Oh actually yes, very much so. That’s why they are doing it and Amazon is doing it right back. It’s a negotiation and PR ploy. Towns have been rallying people around buying local, for instance.
Yes, they do and may end up a department store some day. But their main business is books. Whereas Amazon is a department store, and a tech company selling tablets and smartphones, and a movie studio, and a software and data analysis company, and a business services company, and……On its retail arm, books are almost entirely insignificant. So it’s a different business focus.
They are lying and they are not. They want control in the market of e-book prices, and they want to be able to undercut competitors and the competitors be contractually unable to compete on prices. Certainly they would like lower prices for the bestsellers so they can get rid of that annoying Apple!
But as we’ve said about fifty times, the main dispute between Hachette and Amazon is about how much in revenues Amazon gets in fees from print and e-book sales and co-op advertising fees and various terms they want from Hachette to be able to do business with them. Remember again, most e-books are already below $9.99. So Amazon’s moaning about the horrible price of e-books in the media is pretty much like a lady pretending to have the vapors.
I did too. I explained why HarperCollins was selling e-books, that it was on a limited basis basically to open up the market and have flexibility, etc., and why prices at their website would be higher than they are with big vendors, at least at first. Whether the publishers are going to go whole hog on selling e-books is something we’ll have to see in development. Overall, they’d rather not. But a lot of small presses — because they were squeezed on terms and fees by Amazon — are trying to sell direct instead.
Mr. Troy:
The date, and that you get the book in the tech medium you want and can put on your devices, as a special service. Print books are done once they can go to print, whatever the binding. But also putting out an e-book means more staff or service producing and proofing the e-books for being e-books. Errors in e-books have been a big problem. And you have to pay people to do it. The self-pubs do their own stuff for free, but they’re only putting out a handful of their own books. The publishers put out thousands of titles a month. And people don’t work for free. So to get the e-book first out, and to get an e-book which requires extra effort and cost for the publisher to do, you pay initially more — on hardcover bestsellers. On mmpaperback titles, you may not initially pay more. Plus they do have sales. Quite simply, people pay the higher price if they want an e-book version as soon as they can get one. Others wait till the price drops. Again, there’s nothing magical about $9.99. Amazon simply arbitrarily picked it as the price point. But it’s not necessarily the market price point that actually makes anyone money on hardcover bestsellers.
It looks like they’ve simplified it now to two tiers, instead of multiple tiers, one where they get 30% and one where they get 65%. To get the 30%, you have to price your Kindle version 20% below the price of any physical print or other form of the e-book (which lets them control the e-book price throughout the market.) They also get to charge additional fees as “delivery costs.” It looks like below $2.99, you can’t get the 30% rate, which means they are nickel-and-diming the short fiction writers. But at $2.99, you can get the 30%, but only if you keep the price below the $9.99 marker. Beyond that you go back to 65%. They used to give more incentive to do $2.99, from what authors told me, but it looks like they are smoothing it out.
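A minimal sketch of the tier logic as this comment describes it; the 20%-below-print requirement and the $2.99–$9.99 band are taken from the comment above, and Amazon’s actual KDP terms may differ in detail (e.g. delivery costs are omitted).

```python
# Hypothetical model of the fee tiers described above; not Amazon's
# actual, complete terms. Returns Amazon's cut as a fraction of the
# e-book list price.

def amazon_fee(ebook_price, print_price=None):
    in_band = 2.99 <= ebook_price <= 9.99
    # The lower fee reportedly requires pricing the Kindle edition at
    # least 20% below any physical edition.
    below_print = print_price is None or ebook_price <= 0.80 * print_price
    return 0.30 if (in_band and below_print) else 0.65

# A $4.99 e-book against a $12.99 paperback gets the 30% fee;
# priced above $9.99, the same title reverts to the 65% fee.
```

The point of the structure, on this reading, is that both conditions push every Kindle price toward the $2.99–$9.99 band and under the print price.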
Note that they are calling it a royalty payment, which it is not, since the author doesn’t grant Amazon a license that Amazon is exploiting. What they should say is that Amazon’s fee is 30% or 65%, deducted from revenues. This is rampant throughout the self-pub market and it’s a dodge that unfortunately keeps a lot of self-pub authors from actually learning the business that they are in.
James:
Wait, is Amazon trying to save the publishers from bankruptcy or drive them into oblivion? The conspiracy theories are crossing the streams! Bow before Gozer! Seriously, you’ve gone so far beyond fake math that you are into the ether of your own science fiction story. (Oh, and Stevie is not a man.)
Scalzi likes Amazon just fine, they are one of his business partners. It is not required that you are for or against every business practice of a company you do business with, with no middle ground. If the e-book market is good sized and stable (which it is though sales are leveling off,) that’s basically a good thing for publishers. Nor has it required them to radically change how they do business. Nor are any of your figures and pronouncements about publishing history particularly correct.
We can be excited that Amazon boomed up the already existing e-book market without thereupon believing Amazon is Jesus. I have a relative who works for them now, and she’s not even this obsessing.
Ctein — you get a cookie. Stevie — you get a whole cinnamon roll for giving me a link on book price data so maybe I don’t have to explain it over and over.
Have fun with James, everyone. I will be curious to hear if robots are going to take over writing books soon. (That would actually be fun.)
Amazon wants to get the lowest price for its customers. Not out of altruism, but because Bezos believes putting the customer first is the way to win. He also believes that if he pushes prices way, way down, his competitors will not be able to compete with his level of efficiency.
Both of these things actually ARE good for the customer, and the anti-Amazon lobby needs a better rallying cry than “paying more for books is good for you” or “Amazon the evil monopoly”.
Amazon does not give a rat’s ass about its suppliers; they either keep up or get out of the way. Either way Amazon is fine with that, and believes the customer will be fine with it too, provided it gets them a cheaper product.
Unholyguy
The contents of the law suit between Amazon and Lush Cosmetics cited above make it perfectly clear that Amazon is not putting its customers first; the judge specifically commented on that in his judgement.
You can aver otherwise until you are blue in the face, but you can hardly expect people to accept your unsupported assertions when they are flatly contradicted by the evidence.
Dear Kat,
Thank you! Yummy cookie!
As always, your analyses are brilliant.
I’m getting on a plane late tomorrow, too. London? Maybe I’ll run into you at WorldCon, then.
~~~~
Dear MrTroy,
Overly simplifying a complicated subject…
The easiest way to think of digital publishing (from the publisher/economic perspective) might be to think of it like what you do when you produce a paperback after the hardback edition. The changes you need to make and the redesign and re-layout are similar issues. Of course, as with the hardback edition, you don’t have the incremental cost per copy. Well, you do, but it’s vastly, vastly smaller.
So, yes, the risk equations do end up getting tweaked. How much is another matter. The “people costs” of producing the book are far higher than most people imagine. Even if you’re self-employed, unless you’re an attorney you probably don’t realize how much time you spend on your business. Remember that for a publisher, every single bit of activity is a “billable hour.” As an approximation, assume that each person-minute expended at a publisher costs the publisher one dollar (it will be different for different publishers and different parts of the country, and even more different for different kinds of businesses, but it’s a good starting point). So, you pick up the phone (how 20th century!) and call your editor just to confirm that your paper manuscript arrived okay (last century, ‘member). Okay, the small handful of minutes it took her to take your phone call, answer it, and then get her head back into the space to be working on whatever she was working on, that’s a handful of dollars out of the publisher’s budget. E-mail your editor one contractual query or suggest a change in the terms? Several people will have to think about that, and there are double digits of dollars that are gone.
This is just the side stuff. Of course there is the reading of the manuscript, the editing, the copyediting, the proofreading, the design and layout, the sales and marketing decisions, etc., etc. It really adds up!
Beyond that, for paper books, there’s the cost of printing each copy, both set-up and incremental, which doesn’t apply to digital (well, some setup does, but the incremental cost is incredibly small). On a typical first printing, I have no idea how the costs typically break down– what fraction goes for people costs and what fraction goes for physical production costs. It would be interesting to have a rough number for that. Maybe one of our resident experts will come up with one. My totally uninformed male-answer-syndrome guess would be that the majority of costs are the people costs. But you should not take that to the bank.
Somewhat related to this, another thing I don’t know is the distribution of sales among the “unsuccessful” books––those that don’t sell out their first printing/earn out their advance. I don’t know what the average percentage sold, relative to break-even, it is. I don’t know if the distribution is Gaussian, Poisson, or multimodal. All of this is important when judging what happens to that large fraction of books that don’t make money for the publisher. For example, if lowering the e-book cost increases the publisher’s revenues by 25% on those books, but the average shortfall was 50%, it means they’re still losing money. Less loss per book, yes. But if a forced lowering of the price cuts the revenues from the previously-discussed cash cow significantly, that may not be a net win.
None of which is to say that Amazon is wrong and the publishers right… Or vice versa. Repeating John’s frequent mantra. They are businesses looking out for their own interests, which are not the same, and they are known to be fallible.
What it does mean is that simplistic analyses of what’s going to happen based upon those spherical cows are nothing more than political talking points.
Personally, I feel like John does: Anything that reduces my ability to negotiate terms is likely to be a bad thing for me. I may well be entirely wrong in that perception, but it leaves me very uncomfortable.
pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
======================================
— Ctein’s Online Gallery
— Digital Restorations
======================================
I’m really beginning to wonder if some of the people posting in this discussion are even reading the things Kat and John are actually saying. Nobody in this discussion has said “paying more for books is good for you” or “Amazon the evil monopoly”, or even “Amazon must be destroyed”! The hyperbole is strong with you folks.
Once again, the majority of ebooks on Amazon are priced at 9.99 or lower, including trade published books. I’ve lost count how many times this has been reiterated. Amazon is trying to tell a manufacturer how they can price their goods, regardless of what the overhead cost is to the manufacturer. You know who else does that? Wal-Mart. I guess they are both great if all you care about is low prices. And by the way, Amazon’s customer service quality has been going downhill in the last year. So, they are even losing the edge they had in that regard.
I’m a customer of Amazon, but that doesn’t mean I agree with a lot of their business practices. I also don’t believe that lower means better for me as a consumer. When a retailer tries to force prices so low that it becomes impossible for a manufacturer to make a profit, then it’s always the little guy who gets screwed. People lose their jobs, because a company has to make up for those losses somewhere.
I don’t think publishers are the be all end all, but I also don’t get this knee-jerk reaction to see any criticism of Amazon as, “I hatez the Amazon! Die!”
Stevie the example you are citing is a trademark infringement case where Amazon is refusing to hack it’s search engine to not return links to competitors of a cosmetics company named “Lush”. Am I understanding that correctly? This company Lush does not want to be carried by Amazon
If that is your smoking gun, it seems pretty weak to me, regardless of the rhetoric from the judge? Or am I missing something somewhere? I mean really, who cares?
Jennifer, what exactly is it that Amazon has done wrong that I (as a consumer) should care about?
– I don’t care about two huge multinationals squabbling over contracts.
– I don’t care whether Amazon sets a price ceiling that some authors may not like, other then in general I like the idea of things being cheaper and middlemen getting squeezed
– I don’t care about book publishers at all really
In general, the only possible appeal to anyone is that all of this is somehow bad for authors. I do care about my authors getting paid. However, Amazon says this is good for authors and publishing houses say it is bad and no one is bringing any data so in general, I am back to not caring again. if anything I probably trust Amazon more then the publishing houses as the big 5 seem greedy and stupid to me
Hence my response of the lack of a rallying cry against amazon’s practices
And that’s where we differ. I care about Amazon’s practices and the effect this will have on other retailers, jobs, and competition. I have the right, as a consumer, to view Amazon as greedy and I don’t trust them. But, trying to paint everything as a “side” is ridiculous.
When you said, “Both of these things actually ARE good for the customer and the anti-amazon lobby needs a better rallying cry then “paying more for books is good for you” or “amazon the evil monopoly”, it certainly sounds like you do care, and are picking a side. Anti-Amazon lobby? FFS, take it down a notch.
We don’t have to care about the same things, but don’t paint all consumers with this broad brush, as if we all believe what Amazon does is fine and dandy.
Unholyguy
I care as a consumer. The Company cares as a producer. The Judge cares because his job is to uphold the law.
Had you actually bothered to read my posts and the two Court Cases cited you would understand that they represent an excellent example of the thoroughly nasty way in which Amazon seeks to grind down competitors, and screws their customers.
What Amazon was doing was palming off trash, profitable trash, no doubt, but trash all the same, whilst leading its customers to believe that they were getting the real deal, hence the abuse of Lush trademark. Selling items under false pretences is generally recognised as coming very close to, if not over the border to, criminal activity.
As the Judge made clear, it was never suggested that it would be difficult in any way for Amazon to make the true picture clear to its customers; since it hasn’t done so the only logical conclusion is that it doesn’t want to. It wants to keep selling profitable trash to people it has conned into believing they’ve got the real deal.
This may be your idea of customer service but it sure as hell is not mine. I’ve just spent 24 days tootling around the Med and Black Sea; at least the guys lining up to sell us genuine fake wallets, not to mention the genuine fake watches, were perfectly honest about the fact that they were fakes.
We have now reached the point where street sellers in small towns in Turkey are more honest about their products than Amazon is…
I read this article on the Lush/Amazon thing, I would never trust a judge to understand or rule fairly on anything involving technology.
what do you mean by ” whilst leading its customers to believe that they were getting the real deal, hence the abuse of Lush trademark”. Was amazon changing the representation of the competitor products, or were the competitors themselves making their products appear like Lush?
In general a store search engine is going to algorithmically whatever they have avaiable that is closest to the thing you are looking for, that part is working as intended
You mean that you like the idea of middlemen being squeezed as long as it isn’t Amazon, the ultimate middleman.
oh Bezos squeezes Amazon plenty, just ask anyone who works there…
Dear unholyguy,
Umm, but it’s not a technology case. That isn’t the issue before the court.
Blaming it on the search engine is kind of like blaming it on a “computer error.” A human being writes the search engine and determines what it does and doesn’t do. It may very well be working “as intended.” Whether that working and that intent is legal or not properly falls within the scope of the courts.
I do not have an opinion in the case, not having read it closely enough, but an assertion that the software is doing what it was written to do is very much not to the point.
pax / Ctein
I read this article on the Lush/Amazon thing, I would never trust a judge to understand or rule fairly on anything involving technology.
And I would never trust a tech guy’s comprehension on anything legal. They get the law wrong FAR more often than a judge gets technology wrong. (Exhibit 1: Patents…..)
It’s kind of like the recent European “:right to be forgotten”. which sounds great in theory but is pretty unworkable in practice.
You are basically asking Amazon to re-engineer the way the way their search engine works to protect some small soap manufacturer who thinks they own the word “Lush” and that they are somehow getting ripped off if Amazon bounces them to products like their product
Have you actually read the claims? It’s basically “we don’t like how search engines work”.
The judge in the case, John Baldwin QC also seems to be a pretty famous patent lawyer, who seems to be still actively practicing as a patent lawyer in addition to being a judge. Must be nice tow ork both sides of the bench, how is that even possible?
Dear unholyguy,
Seems to me you have the cart before the horse. We are not required to live our lives by what is convenient or simple to program.
If the law says what you’re doing with a computer is illegal, it’s just too bad for you if it’s not easy for you to fix the code to make it legal. It’s not my problem; I’m not required to acquiesce to your convenience.
If, in fact, Amazon is illegally infringing on Lush’s commercial rights, it’s Amazon’s problem to fix it. Or make appropriate compensation. Law does allow for situations where physical remedy is difficult or impossible (in the US, you can, as a general rule, sue to obtain damages, but not to obtain specific performance (there are exceptions)). But it does not let you say, “Aww geee, it’s just too hard to not break the law, so we can break the law and not be penalized for that.”
Whether the plaintiff’s LEGAL complaint has merit is the question. If the way they “don’t like how search engines work” proves to be a legally valid complaint, too damn bad for search engines.
And this is getting into rather serious thread drift from the topic at hand. Maybe we should wrap it up before John does that for us?
pax / Ctein
Unholyguy
They don’t think they own the word Lush, they do own the word Lush. For someone who is lecturing us on how businesses do business you are remarkably ill informed about the real world.
As for John Baldwin, oddly enough we like Judges to have the highest level of expertise in their field, which is why they are appointed on either a full or part time basis. Henry Carr, who appeared for Amazon, is also a Judge.
Ctein, the interesting thing is that Amazon has not claimed that it would be difficult to jig their search engine, though I agree entirely that it’s irrelevant to the legal position. They are on a hiding to nothing on that aspect because their own witness admitted that people clicking on Lush legitimately expected to be taken to Lush products:
‘In my judgment, the average consumer is unlikely to know how the drop down menu has the content which it displays, but is likely to believe that it is intended to be helpful to him and is some consequence of other searches that have been carried out. In my judgment it would inform the average consumer that if he were looking for Lush Bath Bombs on Amazon, he would find them by clicking on that menu item. I reject the contention that the average consumer who was typing Lush into the search box would think that the drop down menu reference to Lush Bath Bombs was a reference merely to products which were similar to or competitive with the Lush product. Moreover, my conclusion is supported by the evidence of Dr Fliedner of Amazon who accepted without hesitation that a consumer would expect the brand he was searching for to be shown as the first result, and probably as the first few results, if it were available.’
I do think this is relevant to John’s theme, which is that Amazon is a business, seeking to expand its profits; it also supports his theme that what Amazon says isn’t necessarily true. Amazon made strenuous attempts to exclude their own witness’s testimony from public knowledge; they failed, and so we know that Amazon was deliberately attempting to mislead consumers. If they will mislead us on bath bombs then they will mislead us on books…
Agree this copyright stuff could be its own separate thread.
Regardless of the merits of this particular case, if this is really the best example of Amazon supposedly not looking out for their customers seems pretty weak to me
A copyright lawsuits about supposid confusion that a four year old would not be confused by, as it is exactly the way search on the internet generally works. Behavior that users actually want trying to help time find the nearest match for their desire that amazon has in stock
Dear unholyguy,
It’s about trademark, not copyright. That’s an entirely different thing.
I’m nowhere as up on trademark law as copyright law, but I am up enough on it to be able to tell you that your remarks in this matter are entirely ignorant of the law in such matters.
You’d really be best off dropping this. Maybe time to return to the real topic? Maybe?
You can have the last word… if you must.
pax / Ctein
Just a thought on the Lush/Amazon thing (digression continued), I agree with unholyguy in that it’s not really Amazon’s problem to help Lush combat its trademark infringement. If a competitor is infringing on its trademark, go after the competitor.
To introduce third-party liability on Amazon proposes massive chilling effects, in which every online marketplace that allows products from third-party vendors must ensure that they are aware of every trademark (and patent) held in every area that any of its products compete in, to ensure that none of them is infringing… which determination may only be possible by a judge anyway.
Again, go after the infringing company, not the search engine or middle-man.
Back on topic, a long but interesting read featuring actual debate on the Amazon vs Hachette debacle:
The first comment also links to Data Guy’s breakdown of daily gross takes across a large range of price points, though only for a single day. (insert spherical frictionless cow disclaimer here)
MrTroy
In the real world trademarks exist. Amazon is a business in the real world. You clearly haven’t read the judgements in the case, nor have you even read the extracts I quoted. Amazon is the infringing company, which is why Lush went after them.
In the real world businesses who infringe on other people’s trademarks are penalised because that’s how businesses work in the real world. Lush won, Amazon lost.
I find it extraordinary that people can be so ignorant of basic commercial facts, though I suppose it helps to explain the naïveté of those who actually believe that Amazon is on the side of the consumer…
@Stevie,
I did read your exerpt although did not previously read the case report. I’ve now read the summary of the case, and while I acknowledge and am not defending Amazon’s “scorched earth tactics” in which it refused to agree to anything, my opinion has not changed at all.
I think the court case is a prime example of a judge taking strictly to the letter of the law and not considering applicability in the real world or any downstream effects from their judgement at all. There are unfortunately many (and growing) such cases, and just because the law falls in a particular direction that does not make the result either right or even sensical (not nonsensical?)
Consider, if a customer enters a generic english word that happens to also be a narrow trademark into a search engine, even a search engine that is searching for commercial products within the domain covered by the trademark, I guess it’s technically possible (at least in the US) to simultaneously perform a trademark search on the term and not echo back the search term to the customer to avoid infringing the trademark on the search results page (I guess that would be serving the customer, according to the judge). If Amazon didn’t have a supply contract for Lush products, should it just display nothing? Or should it include products that self-describe as being “lush”, the generic english word rather than the trademark, but I guess not any products that include “lush” in their name because they are infringing on the trademark (even if the name is a phrase that happens to contain the english word “lush”)?
Note further that the Lush trademark does NOT cover category US051 “Cosmetics and Toilet Preparations” or US052 “Detergents and Soaps”, so it seems to have taken an IP judge to actually determine that the Lush trademark fall within the scope of “beauty” products.*
What about if I performed a search in the Beauty products for the term “velvety”, hoping to find something that feels nice? Should the search results be restricted to nothing because the “velvety” trademark is held by the Hung Mai Corporation and they don’t offer any beauty products under the velvety trademark (it’s an “Advertising and business” trademark, apparently in the export of wine)
So, we have a judge, who is an expert in matters trademark, who considers that it would be easy for Amazon to respect the Lush trademark in this matter. Pity he didn’t give any advice as to how that might be possible or feasible.
Obviously it’s not the judge’s job to determine how a business should operate, and there’s no particular reason why a trademark judge should need to understand any particular technology, let alone internet search technology… but the judge *should* have to consider whether a particular judgement will have the effect of upholding the letter and the spirit of the law, and whether upholding the law in this particular instance will have any chilling effects that are unintended by the law.
* For those interested, the Lush trademark covers: IC 004 “Lubricants and Fuels”. US 001 “Raw or Partly Prepared Materials”, 006 “Chemicals and Chemical Compositions”, 015 “Oils and Greases”. G & S: Candles
Kat,
Was away for a few days, and I did miss your earlier reply about Harper Collins.
Here’s what bothers me about the HarperCollins selling ebooks online. It’s not the klunky site, HarperCollins does books, not web design, so thats understandable.
There’s no middleman here, this is HarperCollins selling direct, right?
They are entering an already crowded retail field.
Despite those factors, many their bestsellers are priced higher than $10.00. Many are in the $14.99 and $16.99 price point. Which makes me wonder, if this is what Random House is selling its own ebooks for now, given such a competitive environment, why would they lower prices in the future? To compete? If they wanted to compete, wouldn’t/shouldn’t their prices already be lower/comparable? If they want to compete against Amazon, or at least give a customer a choice, how is offering to sell a reader an ebook at a higher price point going to do that? Maybe it’s a reverse psychology thing? Sure, the Season of the Dragonflies ebook is $12.74 on Amazon, but by pricing it at $14.99 on their own site HarperCollins is making you think you’re getting a deal. “You’ll pay higher prices on Amazon, because if you bought from us directly we’d charge you even more!” doesn’t seem to be a sound business plan…then again, stranger things have occurred.
@ Stevie
“though I suppose it helps to explain the naïveté of those who actually believe that Amazon is on the side of the consumer.”
You mean Hachette and the other publishers are? Collusion and price fixing are for the public good?
I’m sure there’s no white hat vs. black hat going on here, and while I wouldn’t go so far as to say “A Pox on both their houses”, if one got the business equivalent of Chicken Pox and the other got the business equivalent of Shingles I’d be ok with that.
I expect Haper Collins wishes to avoid undercutting the retail distribution chain.
I was just at a major week-long rocket launch. Two of the three rocket motor (and rocket kit) manufacturers had pavilions set up in Vender’s Row. Neither of them was selling anything; they were just showcasing products. I asked why, and they specifically said it was because they didn’t want to undercut their retailers who also were set up there. (Also, this way they avoid hassles with money-handling on the field.)
I imagine if they had chosen to sell their products, they’d have been selling at the MSRP, even though their “competition” (read “distributors”) were selling on the field at discounted rates. Why? Because it does them no good at all to undercut the supply chain, which they want to be as healthy as possible.
Ack. I don’t spot spam here, sorry. Stupid name field.
Yeah, I don’t see anything fishy with Harper Collins offering higher e-book prices on its own website, there are just too many reasons why they might want to do so, and some of the possibilities are actually really good reasons.
Another couple of possibilities to add into the mix:
* They’re being cautious about their etailer offering launch, and want to avoid their servers being overloaded by a mad rush to buy the books at cheaper-than-Amazon prices before they change their mind. By pricing higher, they’d expect a trickle-feed of purchases that they can use to test their workflow and make sure it all works before trying it under load.
* KDP terms don’t let authors offer a price better than Amazon at other venues, perhaps Harper Collins wasn’t able to negotiate their way out of a similar clause? Maybe they could get a better deal on other parts of the negotiation by letting Amazon take this point, because it just wasn’t that big a deal before?
On the supply chain idea, this also seems like a perfectly feasible and reasonable explanation, but I’d offer online hotel bookings as a counter-example. In at least the last five years, I have found that most of the time the best hotel price can be obtained directly from the hotel website. Hotel booking aggregators might be able to offer the same price, but only in rare circumstances do I see them offering lower prices for the same room – usually only in last-minute discounting cases. Travel agents have *never* been able to offer competitive hotel rates in my experience… plane tickets certainly, rail perhaps, but accommodation never. And yet the hotel supply chain seems healthy (to an outsider), despite hotels competing directly with their retailers.
@MooGiGon
This looks like it might be the RPG seller’s analysis you described (or one like it):
Hi John,
Excellent article and I agree with much of what you say. The one area which I (as a reader) find that is not taken into account on pricing model discussions is my rights as a purchaser of a hardback or eBook equivalent.
If I buy a hardback I get the book (it is mine), I can read it, put it on my shelf, lend it to a friend, sell it on the second hand market.
If I buy an eBook from Amazon and many others I can read it, I can’t lend it without permission and its highly restrictive when you can, I can’t even let my family members read it unless their kindle / iPad is on the same account.
What I buy for my money is not the same thing at all.
I know in these discussions people always talk about the reduced cost of production of eBooks is less than for hardback books and as you say its not really relevant but what is (in my view) is the rights associated with the purchase. If I got the same rights then great but I don’t so I would expect a lower cost for the fewer rights that I have acquired.
What that price is, is another whole can of worms.
The other point you make about price differentiation on a bottle of water in s supermarket or a restaurant is I think not making the point you want it is making Amazon’s point. In the case of bottles water the wholesale price is pretty much the same for the supermarket or the restaurant (there may be a differentiation due to volume but it isn’t that great). But the price charged the consumer is not based on the “wholesale” price but on what the market would bear.
In this case that would mean Hachette setting a wholesale price and letting amazon sell it at the price they want (wait a second that is what happened before Apple and Hachette et al decided to collude to increase the price – if that is what they did).
Books have always been odd in the way they are sold and when the likes of Amazon and before them the big bookshops came along and cut the price they relied on buying books at a wholesale price and selling them at a variable margin to make money. But in many markets that was controlled and the price of books set for fear that small booksellers would disappear.
Nothing has really changed but the consumer and the author are caught in the middle of this dispute.
Been following the Hachette/Amazon sword fight for awhile and with great interest as my new startup lets content creators sell fan direct in-stream on social media, and with far more favorable rev shares than Amazon. I realize that is only good for a small % of your audience and folks like Amazon really come into play with their discovery and recommendation engines to drive new reader growth, but it’s a start.
If anyone is interested in trying it out tweet us @socialEQ and we’ll connect. | https://whatever.scalzi.com/2014/07/30/amazons-latest-volley/ | CC-MAIN-2021-04 | refinedweb | 66,882 | 69.52 |
21.10 Programming with LINQ to SQL: Address-Book Case Study
Our next example implements a simple address-book application that enables users to insert rows into, locate rows from, and update the database AddressBook.mdf, which is included in the directory with this chapter’s examples.
The AddressBook application (Fig. 21.30) provides a GUI through which users can query the database with LINQ. However, rather than displaying a database table in a DataGridView, this example presents data from a table one row at a time, using several TextBoxes that display the values of each of the row’s columns. A BindingNavigator allows you to control which row of the table is in view at any given time. The BindingNavigator also allows you to add rows, delete rows, and save changes to the data. We discuss the application’s functionality and the code that implements it momentarily. First we show the steps to create this application.
Fig. 21.30 Manipulating an address book.
 1   // Fig. 21.30: AddressBookForm.cs
 2   // Manipulating an address book.
 3   using System;
 4   using System.Linq;
 5   using System.Windows.Forms;
 6
 7   namespace AddressBook
 8   {
 9      public partial class AddressBookForm : Form
10      {
11         public AddressBookForm()
12         {
13            InitializeComponent();
14         } // end constructor
15
16         // LINQ to SQL data context
17         private AddressBookDataContext database =
18            new AddressBookDataContext();
19
20         // fill our addressBindingSource with all rows, ordered by name
21         private void BindDefault()
22         {
23            // use LINQ to create a data source from the database
24            addressBindingSource.DataSource =
25               from address in database.Addresses
26               orderby address.LastName, address.FirstName
27               select address;
28
29            addressBindingSource.MoveFirst(); // go to the first result
30            findTextBox.Clear(); // clear the Find TextBox
31         } // end method BindDefault
32
33         private void AddressBookForm_Load( object sender, EventArgs e )
34         {
35            BindDefault(); // fill binding with data from database
36         } // end method AddressBookForm_Load
37
38         // Click event handler for the Save Button in the
39         // BindingNavigator saves the changes made to the data
40         private void addressBindingNavigatorSaveItem_Click(
41            object sender, EventArgs e )
42         {
43            Validate(); // validate input fields
44            addressBindingSource.EndEdit(); // indicate edits are complete
45            database.SubmitChanges(); // write changes to database file
46
47            BindDefault(); // change back to initial unfiltered data on save
48         } // end method addressBindingNavigatorSaveItem_Click
49
50         // use LINQ to create a data source that contains
51         // only people with the specified last name
52         private void findButton_Click( object sender, EventArgs e )
53         {
54            // use LINQ to create a data source that contains
55            // only people with the specified last name
56            addressBindingSource.DataSource =
57               from address in database.Addresses
58               where address.LastName == findTextBox.Text
59               orderby address.LastName, address.FirstName
60               select address;
61
62            addressBindingSource.MoveFirst(); // go to first result
63         } // end method findButton_Click
64
65         private void browseButton_Click( object sender, EventArgs e )
66         {
67            BindDefault(); // change back to initial unfiltered data
68         } // end method browseButton_Click
69      } // end class AddressBookForm
70   } // end namespace AddressBook
Step 1: Creating the Project
Create a new Windows Forms Application named AddressBook. Rename the Form AddressBookForm and its source file AddressBookForm.cs, then set the Form’s Text property to AddressBook.
Step 2: Creating LINQ to SQL Classes and Data Source
Follow the instructions in Section 21.6.1 to add a database to the project and generate the LINQ to SQL classes. For this example, add the AddressBook.mdf database from the Databases folder included with this chapter’s examples and name the file AddressBook.dbml instead of Books.dbml. You must also add the Addresses table as a data source, as was done with the Authors table in Step 1 of Section 21.6.2.
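For readers curious about what the designer produces in this step, the generated AddressBook.designer.cs file defines a DataContext subclass and one entity class per table. The following is a heavily simplified, hypothetical sketch; the real generated code also includes change-notification plumbing (INotifyPropertyChanged backing fields), a parameterless constructor that reads the connection string from the application settings, and any additional columns the table defines. Only AddressID, FirstName, LastName and Email are named in this section; everything else here is an assumption:

```
// Simplified sketch of designer-generated LINQ to SQL classes (assumed shape).
using System.Data.Linq;
using System.Data.Linq.Mapping;

public partial class AddressBookDataContext : DataContext
{
   // exposes the Addresses table so it can be queried with LINQ
   public Table<Address> Addresses
   {
      get { return GetTable<Address>(); }
   }

   public AddressBookDataContext( string connection )
      : base( connection ) { }
} // end class AddressBookDataContext

[Table( Name = "dbo.Addresses" )]
public partial class Address
{
   // autoincremented primary key; the database assigns its value
   [Column( IsPrimaryKey = true, IsDbGenerated = true )]
   public int AddressID { get; set; }

   [Column] public string FirstName { get; set; }
   [Column] public string LastName { get; set; }
   [Column] public string Email { get; set; }
} // end class Address
```

You normally never edit this file by hand; regenerating the .dbml file overwrites it.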
Step 3: Indicating that the IDE Should Create a Details View
Select the Address node in the Data Sources window. Note that this becomes a drop-down list when you select it. Click the down arrow to view the items in the list. The item to the left of DataGridView is initially highlighted in blue, because the default control to be bound to a table is a DataGridView (as you saw in the earlier examples). Select the Details option in the drop-down list to indicate that the IDE should create a set of Label/TextBox pairs for each column-name/column-value pair when you drag and drop the Address node onto the Form. The drop-down list contains suggestions for controls to display the table’s data, but you can also choose the Customize... option to select other controls that can be bound to a table’s data.
Step 4: Dragging the Address Data-Source Node to the Form
Drag the Address node from the Data Sources window to the Form. This automatically creates a BindingNavigator and the Labels and TextBoxes corresponding to the columns of the database table. The fields are ordered alphabetically by default, with Email appearing directly after AddressID. Reorder the components, using Design view, so they are in the proper order shown in Fig. 21.30.
Step 5: Making the AddressID TextBox ReadOnly
The AddressID column of the Addresses table is an autoincremented identity column, so users should not be allowed to edit its value. Select the AddressID TextBox and set its ReadOnly property to True in the Properties window.
Step 6: Connecting the BindingSource to the DataContext
As in previous examples, we must connect the AddressBindingSource that controls the GUI with the AddressBookDataContext that controls the connection to the database. This is done using the BindDefault method (lines 21–31), which sets the AddressBindingSource’s DataSource property to the result of a LINQ query on the Addresses table. The need for a separate function becomes apparent later, when we have two places that need to set the DataSource to the result of that query. Line 30 uses a GUI element that will be created in subsequent steps—do not add this line until you create findTextBox in Step 8 or the program will not compile.
The BindDefault method must be called from the Form’s Load event handler for the data to be displayed when the application starts (line 35). As before, you create the Load event handler by double clicking the Form’s title bar.
We must also create an event handler to save the changes to the database when the BindingNavigator’s save Button is clicked (lines 40–48). Note that, besides the names of the variables, the three-statement save logic remains the same. We also call BindDefault after saving to re-sort the data and move back to the first element. Recall from Section 21.6 that to allow changes to the database to save between runs of the application, you must select the database in the Solution Explorer, then change its Copy to Output Directory property to Copy if newer in the Properties window.
The AddressBook database is configured to require values for the first name, last name, phone number or e-mail. In order to simplify the code, we have not checked for errors, but an exception (of type System.Data.SqlClient.SqlException) will be thrown if you attempt to save when any of the fields are empty.
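If you do want to guard against this, one possible approach (not part of the book’s example — the structure below is a sketch, and the message text is up to you) is to wrap the save logic in a try/catch:

private void addressBindingNavigatorSaveItem_Click(
   object sender, EventArgs e )
{
   try
   {
      Validate(); // validate input fields
      addressBindingSource.EndEdit(); // indicate edits are complete
      database.SubmitChanges(); // write changes to database file
      BindDefault(); // change back to initial unfiltered data on save
   }
   catch ( System.Data.SqlClient.SqlException ex )
   {
      // a required column was left empty; report it rather than crash
      MessageBox.Show( ex.Message, "Could not save entry",
         MessageBoxButtons.OK, MessageBoxIcon.Warning );
   }
}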
Step 7: Running the Application
Run the application and experiment with the controls in the BindingNavigator at the top of the window. Like the previous examples, this example fills a BindingSource object (called addressBindingSource, specifically) with all the rows of a database table (i.e., Addresses). However, only a single row of the database is displayed at a time; clicking the BindingNavigator’s add Button creates a new row and sets the TextBox to the right of Address ID to zero. Note that if starting with an empty database, the TextBoxes will be empty and editable even though there is no current entry—be sure to create a new entry with the add Button before you enter data or saving will have no effect. After entering several address-book entries, click the Save Button to record the new rows to the database—the Address ID field is automatically changed from zero to a unique number by the database. When you close and restart the application, you should be able to use the BindingNavigator controls to browse your entries.
Step 8: Adding Controls to Allow Users to Specify a Last Name to Locate
While the BindingNavigator allows you to browse the address book, it would be more convenient to be able to find a specific entry by last name. To add this functionality to the application, we must create controls to allow the user to enter a last name, then event handlers to actually perform the search.
Go to Design view and add to the Form a Label named findLabel, a TextBox named findTextBox, and a Button named findButton. Place these controls in a GroupBox named findGroupBox. Set the Text properties of these controls as shown in Fig. 21.30.
Step 9: Programming an Event Handler that Locates the User-Specified Last Name
Double click findButton to create a Click event handler for this Button. In the event handler, use LINQ to select only people with the last name entered in findTextBox and sort them by last name, then first name (lines 57–60). Start the application to test the new functionality. When you enter a last name and click Find, the BindingNavigator allows the user to browse only the rows containing the specified last name. This is because the data source bound to the Form’s controls (the result of the LINQ query) has changed and now contains only a limited number of rows. The database in this example is initially empty, so you’ll need to add several records before testing the find capability.
Step 10: Allowing the User to Return to Browsing All Rows of the Database
To allow users to return to browsing all the rows after searching for specific rows, add a Button named browseAllButton below the findGroupBox. Set the Text property of browseAllButton to Browse All Entries. Double click browseAllButton to create a Click event handler. Have the event handler call BindDefault (line 67) to restore the data source to the full list of people. Also modify BindDefault so that it clears findTextBox (line 30).
Data Binding in the AddressBook Application
Dragging and dropping the Address node from the Data Sources window onto the AddressBookForm caused the IDE to generate several components in the component tray. These serve the same purposes as those generated for the earlier examples that use the Books database. In this example, addressBindingSource uses LINQ to SQL to manipulate the AddressBookDataContext’s Addresses table. The BindingNavigator (named addressBindingNavigator) is bound to addressBindingSource, enabling the user to manipulate the Addresses table through the GUI. This binding is created by assigning addressBindingSource to addressBindingNavigator’s BindingSource property. This is done automatically when the IDE creates them after you drag the Address data source onto the Form.
In each of the earlier examples using a DataGridView to display all the rows of a database table, the DataGridView’s DataSource property was set to the BindingSource. In this example, the IDE binds each TextBox to a specific column of the Addresses table in the AddressBookDataContext. To do this, the IDE sets the TextBox’s DataBindings.Text property. You can view this property by clicking the plus sign next to (DataBindings) in the Properties window. Clicking the drop-down list for this property (as in Fig. 21.31) allows you to choose a BindingSource object and a property (i.e., column) within the associated data source to bind to the TextBox. Using a BindingSource keeps the data displayed in the TextBoxes synchronized, and allows the BindingNavigator to update them by changing the current row in the BindingSource.
Fig. 21.31 Data bindings for firstNameTextBox in the AddressBook application.
Consider the TextBox that displays the FirstName value—named firstNameTextBox by the IDE. This control’s DataBindings.Text property is set to the FirstName property of addressBindingSource (which refers to the Addresses table in the database). Thus, firstNameTextBox always displays the FirstName column’s value in the currently selected row of the Addresses table. Each IDE-created TextBox on the Form is configured in a similar manner. Browsing the address book with addressBindingNavigator changes the current position in addressBindingSource, and thus changes the values displayed in each TextBox. Regardless of changes to the contents of the Addresses table in the database, the TextBoxes remain bound to the same properties of the table and always display the appropriate data. The TextBoxes do not display any values if addressBindingSource is empty. | https://www.informit.com/articles/article.aspx?p=1251169&seqNum=10 | CC-MAIN-2020-50 | refinedweb | 2,024 | 52.6 |
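Although the IDE configures these bindings for you in the designer, the same binding could be created in code; roughly (using this example’s control and column names):

// equivalent of setting DataBindings.Text in the Properties window:
// bind firstNameTextBox.Text to the FirstName column of the
// current row in addressBindingSource
firstNameTextBox.DataBindings.Add(
   "Text", addressBindingSource, "FirstName", true );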
From EPICSWIKI
Introduction
Writing software that calls Asyn is confusing until you understand the approach it expects you to use. There are very good reasons behind the design of Asyn, which is intended to simplify the work involved in writing robust, portable code, but this does result in the need to learn the Asyn approach. Once you understand the choices that Asyn has made though, writing the software is actually simpler than it would be without Asyn.
This document assumes that you’re writing an EPICS Device Support layer for a moderately complicated instrument or device that interfaces through an RS232 or similar serial port, and hence you need to use the asynOctet generic serial interface. If you want to use some other Asyn interface then the techniques discussed here should still apply, but some of the specific code will need to change. If the device you’re controlling is simple you may be better off using the devGpib interface layer (which also works with serial devices now that it’s been converted to use Asyn). If it’s available, the Streams serial device layer from Dirk Zimoch may be even better, as it will be able to send and parse many typical serial message formats without your having to write any C code at all.
Getting Started
First, you will need to write the code for a basic device support layer. I won’t cover the details of that here but will assume that you are familiar with the device support interface. In fact I’m not really going to talk about device support very much at all, other than to explain how the design of Asyn intrinsically allows you to implement asynchronous device support. It is also possible to use Asyn to implement synchronous device support, but I’m not going to talk about that here at all.
Assuming you’ve got your basic device support framework sorted out, there are usually only a couple of places where your code will actually need to interact with the Asyn software at all: 1) In the per-record device initialization code, and 2) when actually performing the I/O. The first part is required to support the latter, so I’ll talk about the two together.
Our code needs to pull in some headers first:
#include <asynDriver.h> #include <asynOctet.h>
The first header is needed for all code that is going to call Asyn; the second only if you’re planning on using the serial handling, which we are for the purposes of this document.
What is an asynUser?
An asynUser is a structure that Asyn uses to hold information about an individual I/O transaction. Initially you will probably create an asynUser for each record, since at least in theory the user could create a database that processes all your records at once. For more complex I/O schemes where you share a single I/O transaction amongst multiple records you may reduce the number of asynUser objects you create, but in any case you will need one of these for each independent transaction that could conceivably be active at once, hence typically one per record in the simple case.
Since you can’t really do anything much in Asyn without an asynUser, lets start by creating one:
asynUser *pau = pasynManager->createAsynUser(myCallback, myTimeout);
The first thing to notice is that we’re not calling a regular function, we’re actually making a call through a function pointer found in the global pasynManager structure. Why? Well apparently it simplifies matters for the authors of Asyn, or so they tell me, so that’s the way we have to do it.
Next note that we’re passing in two parameters to the call, myCallback and myTimeout. These are both names of functions that we have to provide and which will be called by Asyn at an appropriate time – we’ll come back to them again later.
It’s worth knowing that you really can’t create, copy or destroy an asynUser object yourself, because it’s actually implemented as a part of a larger structure which the asynManager routines use to hold other information related to your connection. You must use the methods provided by asynManager.
Finally don’t bother checking the pau value returned by the above method for a NULL value. If createAsynUser() wasn’t able to allocate enough memory for the structure it won’t bother to return at all; if there’s not enough spare RAM at this stage, our IOC is never going to be able to run properly (that’s the theory anyway, don’t blame me I didn’t write it…).
asynUser Internals
So we created an asynUser, what does it look like and what can we actually do with it?
The asynDriver documentation describes the fields inside the asynUser structure. Some of these fields we are allowed or even expected to set, whereas others should not be touched. The fields we’ll be using are:
typedef struct asynUser { /* These fields are in the wrong order */ void *userPvt; /* Ours to play with */ void *userData; /* Ditto */ double timeout; /* I/O operation time limit, in seconds */ char *errorMessage; /* Explanation for any Asyn errors */ other fields ... } asynUser;
I.e. we get two void * pointers for our code to use, and a timeout setting which is how we get to control how long the lower level driver waits for responses before giving up. This isn’t the only timeout we get to set, it’s just the one used by the code that implements the I/O operations. The errorMessage string is useful to display and/or log in the event that any of the routines we call gives us back an error status.
Many of the pasynManager routines take an asynUser * parameter as their first argument, and that’s how we will set up a connection to the devices we want to talk to, register exception callbacks and queue or cancel requests to perform I/O.
Connecting
If we want to talk to a real world device that already has an asyn Port registered, we have to connect to it by name:
asynStatus status = pasynManager->connectDevice(pau, portName, 0); if (status != asynSuccess) { printf("Can't connect to port %s : %s\n", portName, pau->errorMessage); pasynManager->freeAsynUser(pau); return -1; }
…
…
Spread around the source file and in the initialization routine, we’ll have bits of code like these:
typedef struct myPvt { asynUser *pau; dbCommon *prec; } myPvt;
asynUser *pau; myPvt *dpvt = (myPvt *) calloc(1, sizeof(myPvt)); if (!dpvt) goto err; dpvt->prec = precord; pau = pasynManager->createAsynUser(myCallback, myTimeout); pau->userPvt = dpvt; dpvt->pau = pau;
This document is not yet complete…
—AndrewJohnson 18:10, 19 Apr 2005 (CDT) | https://epics-controls.org/resources-and-support/documents/howto-documents/device-support-asyn-driver/ | CC-MAIN-2021-17 | refinedweb | 1,123 | 52.33 |
Skype Voice Changer
- Posted: Feb 02, 2009 at 7:19 AM
Playing back audio in a .NET application is not quite as easy as you might hope it would be. The .NET 2.0 Framework introduced the SoundPlayer component, which allows you to play back an existing WAV file. While this may be fine for many scenarios, as soon as you want to do things even slightly more advanced, such as changing the volume, or playing back from a different file-format, or pausing and repositioning, you must resort to writing P/Invoke wrappers for various Windows APIs.
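For comparison, SoundPlayer usage is about as simple as audio in .NET 2.0 gets; a minimal sketch (the file path here is just an illustration):

using System.Media;

// play an existing WAV file; there is no volume control, no seeking,
// and no support for other formats such as MP3
SoundPlayer player = new SoundPlayer(@"C:\Windows\Media\chimes.wav");
player.Load();     // load the WAV data into memory
player.PlaySync(); // blocks until playback completes; Play() returns immediately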
Back in 2002, as I was getting started learning .NET, I created some audio-related classes of my own to compensate for the lack of audio support in the .NET Framework. I focused initially on reading and writing WAV and MIDI files, as well as playing back audio in a way that allowed real-time mixing and manipulation of the audio at a sample level. As time went by, I made this growing collection of audio classes available as an open source project, called NAudio, now hosted at CodePlex.
NAudio works by constructing an audio playback graph. Audio comes in “streams” which can be connected together and modified before eventually they go to a renderer. This might be your soundcard if you are listening to the audio, or it might be to a file on the hard disk.
In NAudio, all streams derive from WaveStream. NAudio comes with a collection of useful WaveStream derived classes such as WaveFileReader to read from WAV files or WaveStreamMixer to sum together multiple audio streams.
Audio mixing and effects are almost always performed with floating point numbers (32 bit is most common), so one of the first steps after reading audio out of a WAV file is to convert it from 16 bit to 32 bit. NAudio includes the Wave16To32ConversionStream class to do this. If the audio wasn't in PCM in the first place, for example it is MP3, then we make use of a combination of Mp3FileReader, WaveFormatConversionStream and BlockAlignReductionStream to read the audio out and get it into just the format we want. Here's an example showing how to create a WaveStream ready for playback.
WaveStream outStream; if (fileName.EndsWith(".mp3")) { outStream = new Mp3FileReader(fileName); } else if(fileName.EndsWith(".wav")) { outStream = new WaveFileReader(fileName); } else { throw new InvalidOperationException("Can't open this type of file"); } if (outStream.WaveFormat.Encoding != WaveFormatEncoding.Pcm) { outStream = WaveFormatConversionStream.CreatePcmStream(outStream); outStream = new BlockAlignReductionStream(outStream); // reduces choppiness }
If we simply want to play back the audio without any extra processing, we would use one of the audio output classes provided by NAudio to create an object that implements IWavePlayer. The options are WaveOut, DirectSoundOut, AsioOut and WasapiOut, each representing a different technology for audio playback in Windows. We will use WaveOut, which is the most universally supported. Here we are opening the default output device with a latency of 300ms, and instructing it to use windowed callbacks.
IWavePlayer player = new WaveOut(0, 300, true); player.Init(outStream); player.Play();
WaveOut will now repeatedly call the Read method of the output stream to get the next batch of audio samples to play. All we need to do now is to insert our audio effects into the playback chain.
To allow us to process sample level audio more simply, I have created a new WaveStream derived class called EffectStream. EffectStream will simply pass each audio sample to one or more audio effects before returning the modified audio in its Read method. The reason I have chosen to host all the effects in a single EffectStream, rather than creating one WaveStream derived class per effect, is that I want to avoid the performance penalty of converting between arrays of bytes and arrays of floating point numbers at every step. Some types of effect, particularly those involving Fourier transforms, can be processor intensive, so anything we can do to speed up performance will help.
As well as the EffectStream class, we need a base Effect class, from which all of our effects can derive. Here's a simplified version of the base Effect class (minus a whole load of helper mathematical functions):
public abstract class Effect { private List<Slider> sliders; public float SampleRate { get; set; } public float Tempo { get; set; } public bool Enabled { get; set; } public Effect() { sliders = new List<Slider>(); Enabled = true; Tempo = 120; SampleRate = 44100; } public IList<Slider> Sliders { get { return sliders; } } public Slider AddSlider(float defaultValue, float minimum, float maximum, float increment, string description) { Slider slider = new Slider(defaultValue, minimum, maximum, increment, description); sliders.Add(slider); return slider; } /// <summary> /// Should be called on effect load, /// sample rate changes, and start of playback /// </summary> public virtual void Init() {} /// <summary> /// will be called when a slider value has been changed /// </summary> public abstract void Slider(); /// <summary> /// called before each block is processed /// </summary> public virtual void Block() { } /// <summary> /// called for each sample /// </summary> public abstract void Sample(ref float spl0, ref float spl1); }
The bulk of the work of the effect should be done in the overridden Sample method. The spl0 and spl1 parameters contain the current sample values to be modified for the left and right channels respectively. For example, to lower the volume we could halve the amplitude of every sample with the following code:
public override void Sample(ref float spl0, ref float spl1) { spl0 *= 0.5f; spl1 *= 0.5f; }
The Effect class contains Tempo and SampleRate values which are useful for certain types of effect. It also contains a concept of ‘Sliders' for each effect. These are the effect parameters, which allow real-time modification of the effect. So if we wanted to control the volume using a slider, we could write the following code (although bear in mind that normally volume sliders should be logarithmic not linear – see the Volume effect in the sample code for an example of how to do this):
public override void Sample(ref float spl0, ref float spl1) { spl0 *= slider1; spl1 *= slider1; }
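A logarithmic fader works in decibels and converts to a linear amplitude with pow(10, dB/20), so that -6 dB roughly halves the signal. As an illustrative sketch (my own simplified version, not the exact Volume effect shipped with the sample code):

public class VolumeDb : Effect
{
    private float amplitude;

    public VolumeDb()
    {
        AddSlider(0, -60, 12, 1, "gain (dB)");
    }

    public override void Slider()
    {
        // recalculate the linear multiplier only when the slider moves
        amplitude = pow(10, slider1 / 20);
    }

    public override void Sample(ref float spl0, ref float spl1)
    {
        spl0 *= amplitude;
        spl1 *= amplitude;
    }
}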
To simplify the task of adding, removing and re-ordering effects, I created an EffectChain class, which is a simple wrapper around a List<Effect>. The EffectStream class has an EffectChain that contains all the effects it needs to run. Here is the code for the EffectStream:
public class EffectStream : WaveStream { private EffectChain effects; public WaveStream source; private object effectLock = new object(); private object sourceLock = new object(); public EffectStream(EffectChain effects, WaveStream sourceStream) { this.effects = effects; this.source = sourceStream; foreach (Effect effect in effects) { InitialiseEffect(effect); } } public EffectStream(WaveStream sourceStream) : this(new EffectChain(), sourceStream) { } public EffectStream(Effect effect, WaveStream sourceStream) : this(sourceStream) { AddEffect(effect); } public override WaveFormat WaveFormat { get { return source.WaveFormat; } } public override long Length { get { return source.Length; } } public override long Position { get { return source.Position; } set { lock (sourceLock) { source.Position = value; } } } public override int Read(byte[] buffer, int offset, int count) { int read; lock(sourceLock) { read = source.Read(buffer, offset, count); } if (WaveFormat.BitsPerSample == 16) { lock (effectLock) { Process16Bit(buffer, offset, read); } } return read; } private void Process16Bit(byte[] buffer, int offset, int count) { foreach (Effect effect in effects) { if (effect.Enabled) { effect.Block(); } } for(int sample = 0; sample < count/2; sample++) { // get the sample(s) int x = offset + sample * 2; short sample16Left = BitConverter.ToInt16(buffer, x); short sample16Right = sample16Left; if(WaveFormat.Channels == 2) { sample16Right = BitConverter.ToInt16(buffer, x + 2); sample++; } // run these samples through the effects float sample64Left = sample16Left / 32768.0f; float sample64Right = sample16Right / 32768.0f; foreach (Effect effect in effects) { if (effect.Enabled) { effect.Sample(ref sample64Left, ref sample64Right); } } sample16Left = (short)(sample64Left * 32768.0f); sample16Right = (short)(sample64Right * 32768.0f); // put them back buffer[x] = (byte)(sample16Left & 0xFF); buffer[x + 1] = (byte)((sample16Left >> 8) & 0xFF); if(WaveFormat.Channels == 2) { 
buffer[x + 2] = (byte)(sample16Right & 0xFF); buffer[x + 3] = (byte)((sample16Right >> 8) & 0xFF); } } } public bool MoveUp(Effect effect) { lock (effectLock) { return effects.MoveUp(effect); } } public bool MoveDown(Effect effect) { lock (effectLock) { return effects.MoveDown(effect); } } public void AddEffect(Effect effect) { InitialiseEffect(effect); lock (effectLock) { this.effects.Add(effect); } } private void InitialiseEffect(Effect effect) { effect.SampleRate = WaveFormat.SampleRate; effect.Init(); effect.Slider(); } public bool RemoveEffect(Effect effect) { lock (effectLock) { return this.effects.Remove(effect); } } }
When the Read method on EffectStream is called, we first read the requested number of bytes from our source WaveStream. This might be from a WAV or MP3 file, or from a microphone. Then, we convert it from 16 bit to 32 bit floating point audio. 16 bit audio is stored as integers going from -32,768 to 32,767, and 32 bit audio uses the range -1.0 to 1.0 to represent this range. This means we have plenty of headroom to mix together multiple signals without distorting. It is important though to remember that no samples should be greater than 1.0 before converting back to 16 bit.
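In other words, each conversion is a single scale operation. Here is a sketch of the idea, with clipping added so an overdriven sample cannot wrap around when cast back to a short (note the EffectStream above scales by 32768 in both directions; scaling by 32767 on the way back avoids overflow at exactly 1.0):

// 16 bit PCM to float: -32,768..32,767 maps into roughly -1.0..1.0
float sample32 = sample16 / 32768f;

// ...effects process sample32 here...

// float back to 16 bit PCM, clamping first so values above 1.0
// saturate instead of wrapping around to large negative numbers
if (sample32 > 1.0f) sample32 = 1.0f;
if (sample32 < -1.0f) sample32 = -1.0f;
short result = (short)(sample32 * 32767f);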
Now we have a basic effect framework, it is time to create some real effects to use. There are many sources of algorithms for digital signal processing (DSP) (try musicdsp.org for a good starting point), but I have chosen to base my effects model on that provided by the REAPER digital audio workstation (DAW). This impressive application, masterminded by legendary software developer Justin Frankel, includes a text-based effects framework. These effects, known as JS effects, allow the use of a C-like syntax to quickly write your own effects. I have modelled my Effect class on the JS syntax, allowing me to quickly port effects across.
public class Tremolo : Effect { public Tremolo() { AddSlider(4,0,100,1,"frequency (Hz)"); AddSlider(-6,-60,0,1,"amount (dB)"); AddSlider(0, 0, 1, 0.1f, "stereo separation (0..1)"); } float adv, sep, amount, sc, pos; public override void Slider() { adv=PI*2*slider1/SampleRate; sep=slider3*PI; amount=pow(2,slider2/6); sc=0.5f*amount; amount=1-amount; } public override void Sample(ref float spl0, ref float spl1) { spl0 = spl0 * ((cos(pos) + 1) * sc + amount); spl1 = spl1 * ((cos(pos + sep) + 1) * sc + amount); pos += adv; } }
Some members in the Effect base class such as cos and slider1 allow me to keep the ported syntax as similar to the original JS script as possible.
REAPER ships with well over 100 of these JS Effects, so I chose about 15 of them and ported them to .NET. They are available in the download that accompanies this article. With some of them, for example pitch shifting effects, you will immediately notice the effect on the sound, while others, such as compressors, require some knowledge of how to adjust the parameters to get good results.
Obviously we need a way to pass audio through our effects, so our next task is to create a test harness that will allow us to load in audio files and listen to them with the effects applied. For this purpose I created a simple Windows Forms application that allows you to select a WAV or MP3 file to play back. After being converted into PCM using various classes from the NAudio library, the resulting WaveStream is passed through an EffectStream before being passed to the soundcard for playback. The use of an EffectChain allows us to modify the effects loaded, and their order during playback.
To make the loading of effects simpler, I used the Managed Extensibility Framework (MEF) to make each effect a “plugin” to the test harness. Each effect is decorated with an Export attribute to indicate that it is a plugin:
[Export(typeof(Effect))] public class SuperPitch : Effect
Then I can request that MEF auto-populates a property with all the exported effects it can find:
[Import] public ICollection<Effect> Effects { get; set; }
When the user selects an effect from the list of available effects, we create a new instance of it. This is because you might want to put the same effect into the effect chain more than once:
EffectSelectorForm effectSelectorForm = new EffectSelectorForm(Effects); if (effectSelectorForm.ShowDialog(this) == DialogResult.OK) { // create a new instance of the selected effect // as we may want multiple copies of one effect Effect effect = (Effect)Activator.CreateInstance( effectSelectorForm.SelectedEffect.GetType()); audioGraph.AddEffect(effect); checkedListBox1.Items.Add(effect, true); }
To allow real-time modification of the effect parameters, I created two user controls. The first, EffectSliderPanel allows you to hook up a Windows Forms TrackBar to one of our effect's sliders and manages the minimum, maximum and granularity settings. The second user control, EffectPanel takes an Effect and creates one EffectSliderPanel for each slider in that effect. It also is responsible for calling the Slider method on the Effect whenever the user moves one of the sliders. Here's an example of what it looks like:
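The essence of EffectSliderPanel is mapping between the TrackBar’s integer positions and the slider’s floating point range. A rough sketch of the idea (not the actual control code — the field names trackBar, slider and effect are mine):

// trackBar, slider and effect are assumed fields of the user control
void BindTrackBar()
{
    trackBar.Minimum = 0;
    trackBar.Maximum = (int)((slider.Maximum - slider.Minimum) / slider.Increment);
    trackBar.Value = (int)((slider.Value - slider.Minimum) / slider.Increment);
    trackBar.ValueChanged += delegate
    {
        // convert the integer position back to the slider's real value
        slider.Value = slider.Minimum + trackBar.Value * slider.Increment;
        effect.Slider(); // tell the effect to recalculate its coefficients
    };
}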
Now we are able to test our effects by listening to WAV files and playing them with real-time control over their parameters.
There are many interesting uses for audio effects, but it was suggested to me that I create a “voice changer” for Skype as the example program for this article. At first I didn't think that this would be possible, as you would need access to the audio samples from the microphone before Skype transmitted them over the network.
However, it turns out that Skype has a full featured SDK to allow all kinds of third-party add-ons and enhancements. The Skype API can be used in .NET via a COM object, called Skype4Com. Skype plugins are not loaded directly by the Skype application but attach to it via network sockets. The Skype4Com COM object hides much of this complexity from the user.
Having added Skype4Com as a reference to our application, we then need to connect to Skype. This is achieved by using the following code:
const int Protocol = 8; skype = new Skype(); _ISkypeEvents_Event events = (_ISkypeEvents_Event)skype; events.AttachmentStatus += OnSkypeAttachmentStatus; skype.CallStatus += OnSkypeCallStatus; skype.Attach(Protocol, false);
In the CallStatus event handler, we tell Skype that we wish to ‘capture' the microphone. This will cause it to send us the raw audio data from the microphone via a TCP socket. Then we tell it that we will send the audio to be transmitted using another TCP socket.
void OnSkypeCallStatus(Call call, TCallStatus status) { log.Info("SkypeCallStatus: {0}", status); if (status == TCallStatus.clsInProgress) { this.call = call; call.set_CaptureMicDevice( TCallIoDeviceType.callIoDeviceTypePort, MicPort.ToString()); call.set_InputDevice( TCallIoDeviceType.callIoDeviceTypeSoundcard, ""); call.set_InputDevice( TCallIoDeviceType.callIoDeviceTypePort, OutPort.ToString()); } else if (status == TCallStatus.clsFinished) { call = null; packetSize = 0; } }
I found an example Delphi application on the Skype developer website which shows how to intercept the microphone signal to boost the signal level. I used this sample as the starting point for creating my own application to intercept audio samples in Skype. The Delphi application made use of an object called TIdTCPServer, which is a multi-threaded socket server. I created a very simple .NET implementation of this class (without the multi-threading as we will only have one connection at a time):
class TcpServer : IDisposable { TcpListener listener; public event EventHandler<ConnectedEventArgs> Connect; public event EventHandler Disconnect; public event EventHandler<DataReceivedEventArgs> DataReceived; public TcpServer(int port) { listener = new TcpListener(IPAddress.Loopback, port); listener.Start(); ThreadPool.QueueUserWorkItem(Listen); } private void Listen(object state) { while (true) { using (TcpClient client = listener.AcceptTcpClient()) { AcceptClient(client); } } } private void AcceptClient(TcpClient client) { using (NetworkStream inStream = client.GetStream()) { OnConnect(inStream); while (client.Connected) { int available = client.Available; if (available > 0) { byte[] buffer = new byte[available]; int read = inStream.Read(buffer, 0, available); Debug.Assert(read == available); OnDataReceived(buffer); } else { Thread.Sleep(50); } } } OnDisconnect(); } private void OnConnect(NetworkStream stream) { var connect = Connect; if (connect != null) { connect(this, new ConnectedEventArgs() { Stream = stream }); } } private void OnDisconnect() { var disconnect = Disconnect; if (disconnect != null) { disconnect(this, EventArgs.Empty); } } private void OnDataReceived(byte[] buffer) { var execute = DataReceived; if (execute != null) { execute(this, new DataReceivedEventArgs() { Buffer = buffer }); } } #region IDisposable Members public void Dispose() { listener.Stop(); } #endregion } public class DataReceivedEventArgs : EventArgs { public byte[] Buffer { get; set; } } public class ConnectedEventArgs : EventArgs { public NetworkStream Stream { get; set; } }
Once Skype has been told the port numbers on which to connect, it will attempt to open sockets to our TcpListener classes (one for audio in, and one for audio out). We now simply need to pass the audio through our effect chain. But EffectStream needs a WaveStream derived class for its input, so I created SkypeBufferStream to which we pass the raw data received on the microphone in socket, and it returns it in its Read method. One difficulty I encountered was that Skype offers no way of querying what the sample rate of the incoming data is. On my PC it seems to be 44.1kHz, but I do not know if this is guaranteed on all computers.
class SkypeBufferStream : WaveStream { byte[] latestInBuffer; WaveFormat waveFormat; public SkypeBufferStream(int sampleRate) { waveFormat = new WaveFormat(sampleRate, 16, 1); } public override WaveFormat WaveFormat { get { return waveFormat; } } public override long Length { get { return 0; } } public override long Position { get { return 0; } set { throw new NotImplementedException(); } } public void SetLatestInBuffer(byte[] buffer) { latestInBuffer = buffer; } public override int Read(byte[] buffer, int offset, int count) { if (offset != 0) throw new ArgumentOutOfRangeException("offset"); if (buffer != latestInBuffer) Array.Copy(latestInBuffer, buffer, count); return count; } }
Now when we receive any data from the microphone socket, we pass it through the SkypeBufferStream which in turn passes it through the EffectStream and finally out on the output socket's data stream. Here's the relevant code (found in the MicInterceptor class):
NetworkStream outStream; SkypeBufferStream bufferStream; WaveStream outputStream; void OnOutServerConnect(object sender, ConnectedEventArgs e) { log.Info("OutServer Connected"); outStream = e.Stream; } void OnMicServerExecute(object sender, DataReceivedEventArgs args) { // log.Info("Got {0} bytes", args.Buffer.Length); if (outStream != null) { // give the input audio to the beginning of our audio graph bufferStream.SetLatestInBuffer(args.Buffer); // process it out through the effects outputStream.Read(args.Buffer, 0, args.Buffer.Length); // play it back outStream.Write(args.Buffer, 0, args.Buffer.Length); } }
When you run your application for the first time, you will need to grant it permission from within Skype:
To test that the effects are working in Skype is a little tricky as you will not hear the effected sound on your end of the conversation. One good way of checking the effects are working as expected is to use the Skype test call service. This is a number you can dial and it will record what you say and play it back to you.
It is a good idea to test your effect first using a local audio file, as you will not easily be able to determine whether glitches and other audio artifacts were caused by your effect, or simply due to poor network conditions.
There are a few things you should be aware of when selecting effects for use with Skype. First, the audio is mono, so there is no point using any effects such as stereo delay. Second, the audio is almost certainly down-sampled to a much lower sample rate before being transmitted to save on network bandwidth. This means any high frequency components of your sound will be lost. Third, internet telephony applications often have built in echo-suppression, so using delay-based effects might not work quite as well as you were hoping.
For silly voice effects, the most effective is pitch shifting (try the SuperPitch effect and shift either up or down about five semitones). FlangeBaby or Chorus can be used for more subtle voice changing effects. Or if you just want to be annoying, load up a Delay. Feel free to experiment with the other included effects, but bear in mind that many of them are designed with more musical uses in mind, so may not be relevant for internet telephony.
The source code for all the effects, the EffectStream and the Skype connection code described in this article is available in the provided download. It uses a recent unreleased build of NAudio, so you will also need to visit the NAudio CodePlex site if you want to get access to the full source code. The EffectStream and Effect classes will eventually be made part of the NAudio framework, once I have refined their design a bit.
How to use Effect Tester sample app:
Mark Heath is a .NET developer based in Southampton, UK. When he's not writing .NET audio applications for fun, he enjoys home studio recording, playing football, reading theology books and sword fighting with his four small children. His development blog can be genius stuff.. but! whenever i connect it to skype the tester effect box tells me that there is a problem with the software and that the program has stopped! why is that? can somebody explain it to me?
@fchopin, I think you need Skype already on then click the "Skype" button the application.
It doesn't seem to run on x64 Windows. I am getting what pepito got, Error: CLSID {830690FC-BF2F-47A6-AC2D-330BCB402664} failed with error 80040154.
Not a developer, Windows 7 HP x64 running Skype x86.
I'm in the process of updating NAudio to properly support x64. For now, set the compile options on Skype Voice Changer to target x86 and this should fix it.
Can this work on Windows 7 64-bit?
@Matt, you'll have to recompile based on Mark's comment above
@James, doing a quick glance, I think if you just use EffectStream, that should do it.
Hi,
Great stuff, I am trying to create a ligh version that will just take the microphone stream and the effect that, but i am stuggling.
Do you have a simple code block to intercept the mic and then effect it?
Thanks
Hi,
In addition to my earlier query, could you please tell me on what basis and at what intervals does this DataRecieved event get triggered (Data from the microphone)?
Thanks for your help.
Hi Mark,
In addition to my earlier queries, could you please answer the below query as well soon?
I see that the micServer's data recieved i.e., byte[] has values from 0 - 255. How could I determine that a word is spoken taking the byte[] into consideration?
Could you please help me soon?
Thanks for your help.
Hi Heath,
I need to manipulate the incoming sound. Lets say I get a child voice I would like to change it to adult voice.
And in respective places, I have added the code:
InPort ="3756"
call.set_OutputDevice(TCallIoDeviceType.callIoDeviceTypePort, InPort.ToString());
// initialized inServer
inServer = new TcpServer(InPort);
inServer.DataReceived += inServer_DataReceived;
// In dataReceived event I tried playing a wav file
args.Buffer //Lets say I have applied some manipulation inStream.Write(args.Buffer, 0, args.Buffer.Length);
I seem to have a problem to manipulate the incoming audio and then play it at my end.
Please help me soon
Thanks a lot.
@John Mangam I've forwarded the question to Mark, the author of this article. He hopefully knows where to look for this hook-in, if one exists.
Hi Heath,
Thanks for moderating my query. It would be great if you could answer it soon. I have been waiting
Thanks,
John
Hi Heath,
Thanks for your contribution. It's great. Could you please tell me how I could manipulate the skype incoming audio from an external call?
Thanks,
John
Hi John,
I think it should be possible to do, but I haven't done it myself. I used Skype4Com (developer.skype.com/accessories), which exposes all the Skype API, so it would be worth having a look at that to see if it is possible
@Bittarman looking into it if it still works with newest skype, the apps C4F builds can't provide long term support. If an external application changes their API model, our stuff will break.
This sample also came out before Visual Studio 2010 was released. VS 2010 will automatically update the solution so it will work and it should "just work". Coding4Fun has a large sum of samples and articles, updating each one becomes a very large task.
@Dear Experts,
This is almost Aug, 2010.
I cant run your example or do nothing with Skype latest verison.
Would be kind to update a this blog?? With visual studio 2010 express?
Best regards
please help me i dont know how to use dis program thanks
@linda, check out the later part of the article, basically when it goes to: "When you run your application for the first time, you will need to grant it permission from within Skype"
I was able to run this program like 2 or 3 times, but since then it seems to crash, and now I can't open it at all. Event type: clr20r3, code: 0xe0434f4d. Seems to be related to .Net, I have 2.0 and up to 4.0 installed. Is there a solution?
very good
@Nic looking into it right now. Wondering if Skype updated something that caused us to break.
I deleted all my custom fonts and scripts I use for graphic design from my Windows folder, and now the program works fine. I have another issue though. When I use SuperPitch, I like to use default settings except I move octave to 0 and semitones to 2. However, there always seems to be an echo. None of the other settings I've tried have an echo, just SuperPitch.
@Nic - glad you got it working, although it seems unlikely that fonts would be the cause. As for your super-pitch issue, ensure that you use it 100% wet (dry mix slider should be at minimum)
Thanks for the response, Mark. I have it at 100% wet but I still get the echo effect. I've played with moving all the other bars around but the echo is still there.
gostei mto
Skype voice distortion software
ola
ola
quero ter o skype
quero ter o skype
quero ter o skype
quero ter o skype
sem comentarios
reoio
sou de Macaé RJ e gosto muito de ler
gsfshahsajkkadgkscgcv
^^
olá
Skype e tudo que preciso no meu pc
Preciso muito do skype
sou homem
Oii
very good software
oiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii
olá pessoal olha viver com deus no coração é a melhor coisa do mundo nós vivemos pra eles em quanto estive vivo ele ex-tara conosco a onde quem que seja que deus nos proteja e muito obrigado a todos ficar com deus no coração de cada um de vocês chau
qero sar logo...
Eu gotaria muito de ter o skype,p/manter mais conectado.grato
Gostaria de me conectar com atravez do skype.
bom
Olha..
Eu curti esse skype ae..
To com ele no PC !
muito massa...
Quando eu vo jogar algum jogo com meus amigos..
"NOIS" entra em "CONF" todo mundo... é "LOCO"
Obs: Conf = Conferencia
Remove this comment
Remove this threadclose | https://channel9.msdn.com/coding4fun/articles/Skype-Voice-Changer | CC-MAIN-2015-32 | refinedweb | 4,469 | 54.73 |
Now you can join the game, but you can't perform any actions; there are no command scripts! Well, that is remedied easily enough.
All commands in the BetterMUD inherit from a base Command class, which provides a few functions to make your life a little easier. Here's the class, which can be found in /data/ commands/PythonCommand.py:
class Command( data.bettermudscript.bettermudscript ): # Usage def Usage( self ): return self.usage # description def Description( self ): return self.description # the standard call method. def Execute( self, args ): try: self.Run( args ) except UsageError: me = BetterMUD.character( self.me ) me.DoAction( "error", 0, 0, 0, 0, "Usage: " + self.Usage() ) except TargetError, e: me = BetterMUD.character( self.me ) me.DoAction( "error", 0, 0, 0, 0, "Cannot find: " + e.value )
In addition to names , commands have usage and description strings (as you saw in Chapter 14). So when asked, the Command class returns usage and description strings (they don't exist in this class, but if you inherit from Command class and define strings on your own, the functions still work).
The Execute function is called from C++ whenever you want to execute a command in the game. To make things easier, I've inserted a try/catch block into the code, and it looks for UsageError and TargetError exceptions.
The Execute function tries calling Run with the arguments given, and if that throws a UsageError exception, the command gets an accessor to your character, and prints out an error, telling you how to use the command. This is for cases in which you type go without specifying where you would like to go, or something similar.
Since so many commands depend on finding an item to act on ( get <item> , attack <charac- ter> , and so on), it's fairly common that the designated targets cannot be found. Rather than hardcode You cannot find: <blah> into each and every command, I've enabled commands to throw a TargetError exception if you try to operate on a target that doesn't exist.
To make things even easier, I've made a special FindTarget function that attempts to find a target contained by an entity and returns the ID if it's found, or throws a TargetError if it can't be found. Here's the code:
def FindTarget( seekf, validf, getf, name ): seekf( name ) if not validf(): raise TargetError( name ) return getf()
The first three parameters are functions; seekf is a function that searches for an entity, validf checks if the result of the seek was valid, and getf returns the ID of the item that seekf found. This might seem confusing at first, so let me show you how to use it:
me = BetterMUD.character( 10 ) item = FindTarget( me.SeekItem, me.IsValidItem, me.CurrentItem, "sword" )
This code first gets a character accessor, pointing to character 10 (whoever that may be), and then gets the ID of an item with the name sword . If there is no item named sword , an exception is thrown, and this code doesn't bother catching it. Essentially, when passed into FindTarget, the code is transformed into the following snippet:
me.SeekItem( "sword" ) if not me.IsValidItem(): raise TargetError( "sword" ) item = me.CurrentItem()
You can easily perform the same trick on a room, when searching for either items or characters within that room, or a region, or whatever else you may need!
# assume r is a room, reg is a region i = FindTarget( r.SeekItem, r.IsValidItem, r.CurrentItem, "sword" ) c = FindTarget( r.SeekCharacter, r.IsValidCharacter, r.CurrentCharacter, "mithrandir" ) j = FindTarget( reg.SeekItem, reg.IsValidItem, reg.CurrentItem, "pie" )
After those lines are successfully executed, i has the ID of the first item that matched sword inside the room, c has the ID of the first person with the name mithrandir , and j has the ID of the first item in the region named pie . If any of those fail, an exception is thrown, and it's up to someone else to handle it.
Once you enter the game, you really can't do anything but use the built-in C++ commands that I've given you, so it's time to add some Python commands.
At this point, the only movement command available to you is the go command. You type go north or go south if you want to go anywhere , and that can be annoying, so as a simple test of commands, I've made additional directional commands. Here's one of them:
class north( PythonCommand.Command ): name = "north" usage = "\"north\"" description = "Attempts to move north" def Run( self, args ): c = BetterMUD.character( self.me ) self.mud.DoAction( "command", c.ID(), 0, 0, 0, "/go north" )
This is the north command, which acts as an alias to go north . You can see that the name, usage, and description are all defined first, and then the Run function grabs an accessor to your character, and commands him to /go north .
Pretty cool, huh? I've included commands for all the common directions: north, south, east, west, up, down, northeast, northwest, southeast , and southwest. I've even included aliases of aliases , which are nw , ne , se , and sw . Here's an example:
class ne( northeast ): name = "ne" usage = "\"ne\""
This simply inherits from class northeast , and redefines its name and usage. This class was created so that you can simply type ne instead of northeast to move northeast in the game.
In this particular case, you can't rely on partial matching to match ne with northeast , because the partial string ne doesn't exist in northeast . If you typed no , the game would think you're going north, and not northeast; the smallest string you could type to make the code think you want to go northeast is northe , which isn't exactly a shortcut. So, to fix this, I just created a brand new command named ne .
Now that you can freely move around, it's a good idea to create commands that modify the physical world. For this, I've implemented the get , drop and give command objects. Here's the get command:
class get( PythonCommand.Command ): name = "get" usage = "\"get <quantity> <item>\"" description = "This makes your character attempt to pick up an item" def Run( self, args ): if not args: raise PythonCommand.UsageError me = BetterMUD.character( self.me ) r = BetterMUD.room( me.Room() )
The code if not args checks to see if any arguments were passed into the function; if not, the function raises a UsageError , which causes the command to print out Usage: get <quan- tity> <item> to the player. The game doesn't know how to get items if you don't give them a name. The game retrieves accessors to your character ( me ), and to the room ( r ).
The usage string for this command is get <quantity> <item> . The bar in front of quantity means that it's an optional argument; it applies only to quantity items. If there's a sword on the ground, a player could type get sword , or if there is a pile of coins , he could type get 10 coins . If a player wants to get the entire pile (he's greedy!) he would type get coins . The next code segment tries to figure out if a player is trying to get a quantity of items or not:
quantity = 0 item = args if string.digits.find( args[0] ) != -1:
The string.digits string is a special built-in string in Python which contains the characters 0123456789. I search that string to see if the first character is a digit.
If a player is getting a quantity, the game extracts that quantity from the arguments using the split function. I've designed the split function so that it splits the string into a list of two strings, one containing the first word, and the other containing the rest of the string:
# first letter is a digit, so get quantity split = args.split( None, 1 )
So args was 10 gold coins , split[0] will be 10 , and split[1] will be gold coins . The next part converts the quantity into an integer:
try: quantity = int( split[0] ) item = split[1] except: # do nothing pass
This could fail, however. If you try converting something like 1blah into an integer, an exception is thrown. If that happens, the function catches the exception, and just nixes the idea of getting a quantity; quantity is left at 0, and item is left as it was.
If the conversion was successful, the function tries to find the item:
i = BetterMUD.item( FindTarget( r.SeekItem, r.IsValidItem, r.CurrentItem, item ) )
If the item is valid, an accessor to the item is retrieved; I need to do a little work on it however. If the quantity value is 0, yet the item in question is an actual quantity object, you want the function to get the entire quantity. So that's what it does:
if i.IsQuantity() and quantity == 0: quantity = i.GetQuantity() self.mud.DoAction( "attemptgetitem", me.ID(), r.CurrentItem(), quantity, 0, "" )
Finally, the item is retrieved, or an error is printed if the item wasn't found.
Dropping an item is almost identical; you need only search the character's inventory for the item to drop (instead of searching the room for an item), and tell the game that you dropped an item.
Giving an item away is slightly more complex, but that's because it must find a player to deliver an item, and then find the item to give to that player. Overall, the code isn't that much different from either getting or dropping an item, so I'm not going to show it here.
All three of these command modules can be found in the /data/commands/ usercommands.py script file. | https://flylib.com/books/en/4.241.1.134/1/ | CC-MAIN-2020-45 | refinedweb | 1,631 | 71.04 |
The new Release 17.12 is out! You can download binaries for Windows and many major Linux distros here .
Undefined function issue appears when it comes to debug. I know it means that I did not put the right library in the folder. But I did so, and the same message still appear.
#include "cv.h"#include "highgui.h"using namespace cv;int main(){ //Create matrix to store image. Mat image; //initialize capture. VideoCapture cap; cap.open(0); //create window to store image. namedWindow("window",1); while(1) { //copy webcam stream to image. cap>>image; //print image screen. imshow("window",image); //delay 33ms. waitKey(33); } return 0;}
Ah, sorry, I just have paid attention to the rules of this forum and I have noticed that I might be against them.What am I allowed to put on this forum ? And is my topic inlaw ?
Oh, I see. So, I can ask "how do I configure it", but not which element (library) to put.It is more like "how to use code blocks ?" and less like "how do you recommand me to code this program ?"But how do I show you how I configured it. Do I simply tell it to you ? Or I can add a file to this post.
The tell you it is missing option X; you then can come here and ask where do I enter in X.
So, the "build log" will be in both the site that support the library and in the compiler I am using. Am I right ?QuoteThe tell you it is missing option X; you then can come here and ask where do I enter in X.I do not know what you mean by missing option.And Does "X" correspond to "anything" ?
Choose your forum carefullyBeHackers blow off questions that are inappropriately targeted in order to try to protect their communications channels from being drowned in irrelevance. You don't want this to happen to you.
If you PM me once more, I will ask that you be banned from this site!!! | http://forums.codeblocks.org/index.php?topic=18535.0 | CC-MAIN-2020-10 | refinedweb | 342 | 86.6 |
Now
See Chapter 1 to learn more about GDI+ namespaces and classes..Yellow); // Create a Bitmap object Bitmap curBitmap = new Bitmap(200, 200); // Create a Graphics object from Bitmap Graphics g = Graphics.FromImage(curBitmap); // Draw and fill rectangles" %>
Now when we run our application, the output generated by Listing 12.2 or 12.3 should look like Figure 12.9.
Figure 12.9. Drawing simple graphics objects on the Web
12.2.1:
Graphics g = Graphics.FromImage(curBitmap);
Once we have a Graphics object, we can draw shapes, lines, and images. In the following code we use the DrawLine and FillRectangle methods to draw lines and a filled rectangle:
g.FillRectangle(brush, 50, 50, 100, 100); g.DrawLine(Pens.WhiteSmoke, 10, 10, 180, 10); g.DrawLine(Pens.White, 10, 10, 10, 180);
If you don't know how draw and fill methods work, you may want to look again at Chapter 3.();
12.2.2.Output enumeration type and specifies the format of the image. ImageFormat is discussed in more detail in Chapter 7 (see Table 7.4).
The Save method also allows us to save an image on a local physical hard drive. The following code saves the bitmap on the C:\ drive.
curBitmap.Save("C:\TempImg.gif", ImageFormat.Jpeg); | https://flylib.com/books/en/2.72.1/your_first_graphics_web_application.html | CC-MAIN-2019-22 | refinedweb | 211 | 68.47 |
that none of the value devices have the functionality of TIMERB. So overall, an msp430 which I will be using will have only one timer called TIMERA which is a 16-bit timer (counts from 0 to 65535). If you are eager to know about the ‘TIMERB’ functionality, you can learn about the ‘TIMERA’ from here and refer to the user guide for quick information.
Before we get into the register part, let us look at the timer’s modes. The timer of MSP430 can operate in three modes. To use which mode will depend entirely on your application.
- Up mode: – In this mode, the timer counts up to a pre-specified value and then rolls over to zero. It’s up to you to decide at which point you want the timer to run.
- Continuous mode: – In this mode, the timer counts from 0 to 65535 and then rolls over to zero.
- Up/Down mode: – In this mode, the timer first counts up to a pre-specified value, but instead of rolling over to zero, it turns around and counts to zero.
Moreover, we can also make the use of the timer as a counter by setting specified points known by the compare/capture register. The functionality of these registers we will look at in the upcoming tutorial.
Let’s head over to our mission statement for today. In the blinking led program, which we did while learning the GPIO of the controller, we used manual loops to provide the delay. Our statement remains the same, but this time we will specify the delay and use timers to do it.
The below image shows the bits of the register to control the timer. It mentions the mode and the frequency at which the timer will run.
The above information is enough for you to learn about the register TA0CTL for timer control, but if you still have any doubt, you can comment below, and I will try to answer your queries as soon as possible.
As you can see in the image, it asks us to select the clock source. For now, we will be using the internal DCO, i.e., SMCLK, but shortly we will see how we can generate longer delay accurately with the use of auxiliary and other clocks.
I will be running the controller at 1 MHz with a prescalar of 8. However, it’s your wish if you want to use prescalar or not.
The above was the control register. Apart from the control register, there are two more registers. One is the ‘TACCR0’ register, where we specified the maximum value of the timer. Remember the up or up/down mode we talked about, which mentioned the pre-specified value. The value of this register defines the pre-specified value. Now the question is how to calculate that value. The value depends on the delay (in seconds) you want to generate and the timer’s clock frequency. Remember, it’s the clock frequency of the timer and not the clock frequency of the controller. The value is given by
TIMER VALUE=(Delay*frequency)-1
Where the delay is in seconds and frequency in Hz.
For example, if you are running your microcontroller at 1MHz, and for timers you are using a prescalar of 8 and want a delay of 0.5sec, the value to be loaded in the register will be given by ((0.5110^6)/8)-1, which will provide you with a value of 62499.
Another vital register is the TAR register. If you read the value of the register at any point in time, it will report the counter’s current value. For this tutorial, we won’t be using this register.
Let’s move onto the coding part. As usual, start CCS and create a new empty project (with main.c) based on the device on your Launchpad. In my case, it’s msp430g2553. You will already find the basic code is written, which will include the main function and the code to stop the watchdog timer. We will first start by calibrating our DCO and initializing the timer and then the ports and then infinite loop.
In the last tutorial, we saw how to calibrate the DCO. Let’s say you want to run it at exactly 1 MHz. Although you could look up to the table and set the bits’ value accordingly, and TI has provided us a direct way to calibrate the DCO to run at 1 MHz, these two instructions can do it.
BCSCTL1 = CALBC1_1MHZ; DCOCTL = CALDCO_1MHZ;
Now that was quite easy. Usually, I prefer to use up mode as it is quite convenient, and in this tutorial, I will also be using up mode only. To use up mode we need a value till which the timer will run. From the above example, we saw that for a delay of 0.5sec with the timer running at a one eight of 1 MHz, the value came out to be 62499. We will be using the same value to load in the register TACCR0. The next part is to configure the control register. There are two ways to use standard hexadecimal configuration, but we will use the other way to use the TI header file for configuring the register. It’s pretty easy to do that way. For our application, the timer clock source will be SMCLK; hence, TASSEL’s value will be 10. For up mode, the value of MC will be 01, the value of ID will be 11 for prescaller of 8. Rest all will be 0. An easy way to configure the register is by giving the instruction
TA0CTL |= TASSEL_2|ID_3|MC_1|TACLR;
Everything is pretty clean except the value after the underscore. Well, this value is the decimal equivalent of bits provided in binary. So 10 is 2, 11 is 3, and 01 is 1. Also, TACLR, which is to rest the timer, has been declared as 0. The next step will be to declare port 1 as output and set it to low initially, which is quite easy.
Now comes the main part. Whenever the timer counts up to the value specified in the TACCR0 register, the TAR register rolls over to zero. When it rolls over to zero, it sets the TAIFG(bit 0 of TA0CTL) register as 1. Our job is to monitor that bit. As soon as it goes high, we toggle the led’s state and then reset the bit manually. Therefore we used an ‘if condition’ to monitor the status using a bitwise operator and then toggle the state and reset the flag. Here is the complete code along with the diagram
#include <msp430g2553.h> /* * main.c */ int main(void) { WDTCTL = WDTPW | WDTHOLD; // Stop watchdog timer BCSCTL1 = CALBC1_1MHZ; //Set DCO to 1Mhz DCOCTL = CALDCO_1MHZ; //Set DCO for 1Mhz TACCR0 = 62499; //Timer Count for value 0.5sec TA0CTL |= TASSEL_2+ID_3+MC_1+TACLR; //using SMCLK with prescalr of 8 in upmode. P1DIR |= 0x01; //Make P1.0 as output P1OUT=0x00; //make p1.0 low while(1) { if((TA0CTL&TAIFG)!=0){ //monitor the flag status P1OUT^=0x01; //toggle the led TA0CTL&=~(TAIFG); //clear the flag } } return 0; }
Well, this was a straight forward way to use the timer, monitoring every time the flag. But the better way to do it is to use interrupts, which will automatically track the flag. When it sets high, it will call the interrupt routine vector and will automatically clear the flag — more on this in the upcoming tutorial.
Do we really need to set the DCO for MSP430 here. By default it uses the same with 1.1MHz.
Actually, it’not exactly 1.1 Mhz. Referring to the datasheet, going by their calculation it comes out to be around 1.1Mhz or rather closer to 1.2 Mhz. Instead of using register configuration to set frequency you can use predefined headers to set the same.
I tried it on my board! With out configuring the clock I was able to generate 0.5 sec.
How did you measure the 0.5 second delay. Did you connect your output pin to an oscilloscope. There is not much difference in between 1.1 Mhz and 1.0 Mhz such that it effects the timer delay value. However, over the time the drift may be accountable. therefore, i request you to measure the delay when the board is set to 1Mhz and share with us here | https://embedds.com/timers-of-msp430/ | CC-MAIN-2021-17 | refinedweb | 1,411 | 73.98 |
Introduction
In this article, we’ll be taking a high-level look at Sapper, how it helps you build full-fledged, lightweight apps, and break down a server-rendered app 😁. By the end of this article, you should know enough of Sapper to understand what makes it awesome.
With that said, it’s still a good idea to go through the documentation, as there are some concepts covered there that I left out.
What is Sapper?
Sapper is the companion component framework to Svelte that helps you build larger and more complex apps in a fast and efficient way.
In this modern age, building a web app is a fairly complex endeavor, what with code splitting, data management, performance optimizations, etc. That’s partly why there is a myriad of frontend tools in existence today, but they each bring their own level of complexity and learning curves.
Building an app shouldn’t be so difficult, right? Could it be simpler than it is right now? Is there a way to tick all the boxes while retaining your sanity? Of course there is — that was a rhetorical question!
Let’s start with the name: Sapper. I’ll just go ahead and quote the official docs on why the name was chosen:

> In war, the soldiers who build bridges, repair roads, clear minefields and conduct demolitions — all under combat conditions — are known as sappers.
Hmm, makes perfect sense 🤓.
Sapper (and, by extension, Svelte) is designed to be lightweight, performant, and easy to reason about while still providing you with enough features to turn your ideas into awesome web apps.
Basically, here are the things Sapper helps take care of for you when building web apps in Svelte:
- Routing
- Server-side rendering
- Automatic code splitting
- Offline support(using Service Workers)
- High-level project structure management
I’m sure you’d agree that managing those yourself could quickly become a chore distracting you from the actual business logic.
But talk is cheap — code is convincing! Let’s walk through a small server-rendered app using Svelte + Sapper.
Hands-on experience
Instead of me telling you how Sapper helps you build apps easily, we are going to explore the demo app you get when you scaffold a new project and see how it works behind the scenes.
To get started, run the following commands to bootstrap a new project:
$ npx degit "sveltejs/sapper-template#rollup" my-app $ cd my-app $ npm install $ npm run dev
Doing that will get you a bare-bones project, but that will be enough for the purpose of this article. We should be able to explore how Sapper handles routing and server-side rendering with this simple project without going too deep.
Let’s dive in!
Project structure
Sapper is an opinionated framework, meaning certain files and folders are required, and the project directory must be structured in a certain way. Let’s look at what a typical Sapper project looks like and where everything goes.
Entry points
Every Sapper project has three entry points along with a
src/template.html file:
src/client.js
src/server.js
src/service-worker.js(this one is optional)
client.js
import * as sapper from '@sapper/app'; sapper.start({ target: document.querySelector('#sapper') });
This is the entry point of the client-rendered app. It is a fairly simple file, and all you need to do here is import the main Sapper module from
@sapper/app and call the
start method from it. This takes in an object as an argument, and the only required key is the
target.
The target specifies which DOM node the app is going to be mounted on. If you’re coming from a React.js background, think of this as
ReactDOM.render.
server.js
We need a server to serve our app to the user, don’t we? Since this is a Node.js environment, there are tons of options to choose from. You could use an Express.js server, a Koa.js server, a Polka server, etc., but there are some rules to follow:
- The server must serve the contents of the
/staticfolder. Sapper doesn’t care how you do it. Just serve that folder!
- Your server framework must support middlewares (I personally don’t know any that don’t), and it must use
sapper.middleware()imported from
@sapper/server.
- Your server must listen on
process.env.PORT.
Just three rules — not bad, if you ask me. Take a look at the
src/server.js file generated for us to see what it looks like in practice.
service-worker.js
If you need a refresher on what Service Workers are, this post should do nicely. Now, the
service-worker.js file is not required for you to build a fully functional web app with Sapper; it only gives you access to features like offline support, push notifications, background synchronization, etc.
Since Service Workers are custom to apps, there are no hard-and-fast rules for how to write one. You can choose to leave it out entirely, or you could utilize it to provide a more complete user experience.
template.html
This is the main entry point for your app, where all your components, style refs, and scripts are injected as required. It’s pretty much set-and-forget except for the rare occasion when you need to add a module by linking to a CDN from your HTML.
routes
The MVP of every Sapper app. This is where most of your logic and content lives. We’ll take a deeper look in the next section.
Routing
If you ran all the commands in the Hands-on experience section, navigating to should take you to a simple web app with a homepage, an about page, and a blog page. So far, so simple.
Now let’s try to understand how Sapper is able to reconcile the URL with the corresponding file. In Sapper, there are two types of routes: page routes and server routes.
Let’s break it down further.
Page routes
When you navigate to a page — say,
/about — Sapper renders an
about.svelte file located in the
src/routes folder. This means that any
.svelte file inside of that folder is automatically “mapped” to a route of the same name. So, if you have a file called
jumping.svelte inside the
src/routes folder, navigating to
/jumping will result in that file being rendered.
In short, page routes are
.svelte files under the
src/routes folder. A very nice side effect of this approach is that your routes are predictable and easy to reason about. You want a new route? Create a new
.svelte file inside of
src/routes and you’re golden!
What if you want a nested route that looks like this:
/projects/sapper/awesome? All you need to do is create a folder for each subroute. So, for the above example, you will have a folder structure like this:
src/routes/projects/sapper, and then you can place an
awesome.svelte file inside of the
/sapper folder.
With this in mind, let’s take a look at our bootstrapped app and navigate to the “about” page. Where do you think the content of this page is being rendered from? Well, let’s take a look at
src/routes. Sure enough, we find an
about.svelte file there — simple and predictable!
Note that the
index.svelte file is a reserved file that is rendered when you navigate to a subroute. For example, in our case, we have a
/blogs route where we can access other subroutes under it, e.g.,
/blogs/why-the-name.
But notice that navigating to
/blogs in a browser renders a file when
/blogs is a folder itself. How do you choose what file to render for such a route?
Either we define a
blog.svelte file outside the
/blogs folder, or we would need an
index.svelte file placed under the
/blogs folder, but not both at the same time. This
index.svelte file gets rendered when you visit
/blogs directly.
What about URLs with dynamic slugs? In our example, it wouldn’t be feasible to manually create every single blog post and store them as
.svelte files. What we need is a template that is used to render all blog posts regardless of the slug.
Take a look at our project again. Under
src/routes/blogs, there’s a
[slug].svelte file. What do you think that is? Yup — it’s the template for rendering all blog posts regardless of the slug. This means that any slug that comes after
/blogs is automatically handled by this file, and we can do things like fetching the content of the page on page mount and then rendering it to the browser.
Does this mean that any file or folder under
/routes is automatically mapped to a URL? Yes, but there’s an exception to this rule. If you prefix a file or folder with an underscore, Sapper doesn’t convert it to a URL. This makes it easy for you to have helper files inside the routes folder.
Say we wanted a helpers folder to house all our helper functions. We could have a folder like
/routes/_helpers, and then any file placed under
/_helpers would not be treated as a route. Pretty nifty, right?
Server routes
In the previous section, we saw that it’s possible to have a
[slug].svelte file that would help us match any URL like this:
/blogs/<any_url>. But how does it actually get the content of the page to render?
You could get the content from a static file or make an API call to retrieve the data. Either way, you would need to make a request to a route (or endpoint, if you think only in API) to retrieve the data. This is where server routes come in.
From the official docs: “Server routes are modules written in
.js files that export functions corresponding to HTTP methods.”
This just means that server routes are endpoints you can call to perform specific actions, such as saving data, fetching data, deleting data, etc. It’s basically the backend for your app so you have everything you need in one project (you could split it if you wanted, of course).
Now back to our bootstrapped project. How do you fetch the content of every blog post inside
[slug].svelte? Well, open the file, and the first bit of code you see looks like this:
<script context="module"> export async function preload({ params, query }) { // the `slug` parameter is available because // this file is called [slug].html const res = await this.fetch(`blog/${params.slug}.json`); const data = await res.json(); if (res.status === 200) { return { post: data }; } else { this.error(res.status, data.message); } } </script>
All we are looking at is a simple JS function that makes a GET request and returns the data from that request. It takes in an object as a parameter, which is then destructured on line 2 to get two variables:
params and
query.
What do
params and
query contain? Why not add a
console.log() at the beginning of the function and then open a blog post in the browser? Do that and you get something like this logged to the console:
{slug: "why-the-name"}slug: "why-the-name"__proto__: Object {}
Hmm. So if we opened the “why-the-name” post on line 5, our GET request would be to
blog/why-the-name.json, which we then convert to a JSON object on line 6.
On line 7, we check if our request was successful and, if yes, return it on line 8 or else call a special method called
this.error with the response status and the error message.
Pretty simple. But where is the actual server route, and what does it look like? Look inside
src/routes/blogs and you should see a
[slug].json.js file — that is our server route. And notice how it is named the same way as
[slug].svelte? This is how Sapper maps a server route to a page route. So if you call
this.fetch inside a file named
example.svelte, Sapper will look for an
example.json.js file to handle the request.
Now let’s decode [slug].json.js, shall we?` })); } }
What we’re really interested in here begins from line 8. Lines 3–6 are just preparing the data for the route to work with. Remember how we made a GET request in our page route:
[slug].svelte? Well, this is the server route that handles that request.
If you’re familiar with Express.js APIs, then this should look familiar to you. That is because this is just a simple controller for an endpoint. All it is doing is taking the slug passed to it from the
Request object, searching for it in our data store (in this case,
lookup), and returning it in the
Response object.
If we were working with a database, line 12 might look something like
Posts.find({ where: { slug } }) (Sequelize, anyone?). You get the idea.
Server routes are files containing endpoints that we can call from our page routes. So let’s do a quick rundown of what we know so far:
- Page routes are
.sveltefiles under the
src/routesfolder that render content to the browser.
- Server routes are
.jsfiles that contain API endpoints and are mapped to specific page routes by name.
- Page routes can call the endpoints defined in server routes to perform specific actions like fetching data.
- Sapper is pretty well-thought-out.
Server-side rendering
Server-side rendering (SSR) is a big part of what makes Sapper so appealing. If you don’t know what SSR is or why you need it, this article does a wonderful job of explaining it.
By default, Sapper renders all your apps on the server side first before mounting the dynamic elements on the client side. This allows you to get the best of both worlds without having to make any compromises.
There is a caveat to this, though: while Sapper does a near-perfect job of supporting third-party modules, there are some that require access to the
window object, and as you know, you can’t access
window from the server side. Simply importing such a module will cause your compile to fail, and the world will become a bit dimmer 🥺.
Not to fret, though; there is a simple fix for this. Sapper allows you to import modules dynamically (hey, smaller initial bundle sizes) so you don’t have to import the module at the top level. What you do instead will look something like this:
<script> import { onMount } from 'svelte'; let MyComponent; onMount(async () => { const module = await import('my-non-ssr-component'); MyComponent = module.default; }); </script> <svelte:component this={MyComponent}
On line 2, we’re importing the
onMount function. The
onMount function comes built into Svelte, and it is only called when the component is mounted on the client side (think of it like the equivalent of React’s
componentDidMount).
This means that when only importing our problematic module inside the
onMount function, the module is never called on the server, and we don’t have the problem of a missing
window object. There! Your code compiles successfully and all is well with the world again.
Oh, and there’s another benefit to this approach: since you’re using a dynamic import for this component, you’re practically shipping less code initially to the client side.
Conclusion
We’ve seen how intuitive and simple it is to work with Sapper. The routing system is very easy to grasp even for absolute beginners, creating an API to power your frontend is fairly straightforward, SSR is very easy to implement, etc.
There are a lot of features I didn’t touch on here, including preloading, error handling, regex routes, etc. The only way to really get the benefit is to actually build something with it.
Now that you understand the basics of Sapper, it’s time for you to go forth and play around with it. Create a small project, break things, fix things, mess around, and just get a feel for how Sapper works. You just might fall in love “Exploring Sapper + Svelte: A quick tutorial”
Hey thanks for this thorough and well written article. I’ve made a few videos about Sapper tutorials if anyone is interested:
Hi, thanks for this article. Is Sapper ready for production applications? As of this writing, Sapper is still on v0.27.8
Hey btw there aren’t any line numbers on the code samples | https://blog.logrocket.com/exploring-sapper-svelte-a-quick-tutorial/ | CC-MAIN-2019-39 | refinedweb | 2,753 | 74.79 |
Fold line breaks after update to Notepad++ 7.2.2
I’m using Notepad++ x32 on Windows 10 14393.479 x64.
After installing update to 7.2.2, Folding line breaks except Python language.
It happens almost every languages.
For example, this is what i expected,
And this is what it happens now
What should i do to recover this?
Can you paste a snippet of code that has the issue?
Of course. But first, i have to edit original post.(Too late to edit the original one :-( )
Folding line breaks on some code.
It happens when i paste code from Visual Studio Community 2015 Update 2
(Korean Language pack installed).
Yeah, that’s my mistake. Sorry about that :-P
Anyway, Here’s the file.
I tried changing EOL type to Windows, Unix, and mac but that didn’t worked.
looks like the C lexer has problems with line commented fold blocks.
It is working by either using a space between // and } or block comments /* */.
Didn’t check if it is already announce to scintilla devs.
Cheers
Claudia
This is a Scintilla “feature” rather than a bug, though not a very intuitive feature as it has confused several people.
See:
Supposedly it can be controlled with the
fold.cpp.explicit.endand
fold.cpp.explicit.startproperties for the C++ lexer.
Looks like C lexer accepts this as well.
A quick test setting
editor.setProperty('fold.cpp.comment.explicit', '0')
looks ok.
Cheers
Claudia
Thanks for reply, everyone!
Ok, I learned
//}caused this bug, oh not a bug, it’s feature XD
Anyway how can i apply
editor.setProperty('fold.cpp.comment.explicit', '0')?
you would need a plugin like python script or lua script to automate this task
becasue it is not a general setting but per buffer setting. Each time you open
a new c/cpp file the property needs to be set.
A python version would look like this
def setFoldProperty(): if editor.getLexerLanguage() == 'cpp' and editor.getProperty('fold.cpp.comment.explicit') != '0': editor.setProperty('fold.cpp.comment.explicit', '0') def callbackLANGCHANGED(args): setFoldProperty() def callbackBUFFERACTIVATED(args): setFoldProperty() notepad.clearCallbacks() notepad.callback(callbackBUFFERACTIVATED, [NOTIFICATION.BUFFERACTIVATED]) notepad.callback(callbackLANGCHANGED, [NOTIFICATION.LANGCHANGED])
If you haven’t installed python script plugin already than I would suggest
to download the msi from here instead using plugin manager.
If you want to have this script executed each time you start npp you need to change the python script configuration from LAZY to ATSTARTUP and the content of this script needs to be in a file called
startup.py.
Note, both, c and cpp lexer are identified as cpp.
Cheers
Claudia | https://notepad-plus-plus.org/community/topic/12907/fold-line-breaks-after-update-to-notepad-7-2-2/5 | CC-MAIN-2019-18 | refinedweb | 436 | 59.3 |
Forum Index
This is the feedback thread for the first round of Community Review of DIP 1034, "Add a Bottom Type (reboot)".
===================================
*34 here:
The review period will end at 11:59 PM ET on May 20, or when I make a post declaring it complete. Feedback posted to this thread after that point may be ignored.
At the end of this review round, the DIP will be moved into the Post-Community Round 1 state. Significant revisions resulting from this review round may cause the DIP manager to require another round of Community Review, otherwise the DIP will be queued for the Final Review.
================================== Wednesday, 6 May 2020 at 11:05:30 UTC, Mike Parker wrote:
> This is the feedback thread for the first round of Community Review of DIP 1034, "Add a Bottom Type (reboot)".
>
> [...]
Typo in sentence "Note that rules 1 to 4 don not naturally follow from rule 0 "
"do not" instead of "don not".
On Wed, May 06, 2020 at 11:05:30AM +0000, Mike Parker via Digitalmars-d wrote: [...]
>
[...]
Under the section "Standard name", last point under "Counter arguments", second last word: "immdediately" should be spelt "immediately".
T
--
Three out of two people have difficulties with fractions. -- Dirk Eddelbuettel
On Wed, May 06, 2020 at 11:05:30AM +0000, Mike Parker via Digitalmars-d wrote: [...]
>
[...]
Under "Description", "(3) Implicit conversions from noreturn to any other type are allowed.", 2nd bullet point: "respsectively" should be "respectively".
T
--
Lottery: tax on the stupid. -- Slashdotter
Copying this over from the discussion thread:
noreturn x0; // compile error, must have bottom value
noreturn[1] x4; // compile error, init value is [assert(0)]
struct S {int x; noreturn y;} // definition is fine
S x5; // compile error, must have bottom value
enum E : noreturn {x = assert(0), y = assert(0)}
E e; // compile error, must have bottom value
The problem is that these require special cases in generic code. If these common cases cause compile errors, then every template will either have to have a `if (!is(T == noreturn))`, or allow itself to fail (possibly deep inside a stack of instantiated templates).
std.algorithm.group, for example, returns a Group struct, defined as follows:
struct Group(alias pred, R)
if (isInputRange!R)
{
import std.typecons : Rebindable, tuple, Tuple;
private alias E = ElementType!R;
static if ((is(E == class) || is(E == interface)) &&
(is(E == const) || is(E == immutable)))
{
private alias MutableE = Rebindable!E;
}
else static if (is(E : Unqual!E))
{
private alias MutableE = Unqual!E;
}
else
{
private alias MutableE = E;
}
private R _input;
private Tuple!(MutableE, uint) _current;
...
}
But if R is noreturn[], then this becomes:
struct Group(alias pred, R)
if (isInputRange!R)
{
private noreturn[] _input;
private Tuple!(noreturn, uint) _current;
}
And because Tuple is itself a struct, and Tuple!(noreturn, uint) has a field of type noreturn, the following code will fail to compile:
[].group()
With a confusing error message inside Tuple.
I think these rules should at least be reconsidered or relaxed, if not removed.
Wherever there is "=> return" it's wrong. Drop the "return" keyword.
On Wednesday, 6 May 2020 at 11:05:30 UTC, Mike Parker wrote:
> [snip]
If the compiler will really say "element type could not be inferred from empty array" from trying to append to array of `noreturn`, it's a bad error message. It would appear to complain about the initialization, when only the appending in the following line is illegal. A better message would be something like "Attempt to append int to noreturn[], which must always be empty".
Perhaps the `noreturn` name, or whatever it will be, can be better defined at DRuntime level, like `size_t`? It'll still be standard, but can be done without without adding a keyword. It could be what Walter suggested (`alias noreturn = typeof(assert(false));`), but doing `alias noreturn = typeof(&null)` instead might be a bit easier to implement.
I generally liked the rejected proposal of Walter, and I like this one even more. Good luck with the remaining work!
* Mangling: prefer to stick with alpha_numeric characters for mangling. It doesn't need to be short, as I expect it to be rare
* Mention the conversion level, should be "convert".
* For comparisons, I'd use a more popular language than zig.
* Don't put a space between DIP and 1034 - this is so tag-based search works on it. Like #DIP1034
Great DIP!
In the breaking changes, it says "Any code assuming `is(typeof([]) == void[])` will break."
However, this is not exactly correct. If code *assumes* it is typed as void[], it will probably work.
For example:
void foo(void[])
{
}
foo([]); // I'm assuming it's void[], but it will still work.
The point being made is that the result of the expression `typeof([])` is changing. And any code that depends on that specific relationship for declarations will break.
I would change it to "Any code using the expression `typeof([])` for declarations or for template parameters might break. Any code that infers the type of a variable using the expression `[]` will also likely break."
I agree with the rest, because why would you write typeof([]) instead of void[]?
One possible exception:
auto x = []; // instead of typing void[] x, saving 2 characters!
x ~= ...; // now an error.
Another thing that would break -- if for some reason you have a type named "noreturn", and you import that from another module, there will be an ambiguity between object.noreturn and somemodule.noreturn.
I bet the chances of this are almost nil, but it is a possible breakage.
-Steve
I personally like the Scala's concept to name the bottom type "Nothing". I would therefore suggest to similarily name it "nothing" in D as well.
To me it sounds more natural and self-explanatory. noreturn look more like a keyword also due to its similarity to return keyword.
noreturn suggests clearly that the function will not return, but it is natural only for this use of the bottom type, and as the DIP states, there are more uses.
Although returning nothing is a little bit less clear that noreturn, everyone knows void, so it has to be something different. And nothing[] looks much more in place then noreturn[].
So - pun intended - nothing is a thing! ;) | https://forum.dlang.org/thread/arcpszmdarekxtnsnwfl@forum.dlang.org | CC-MAIN-2021-25 | refinedweb | 1,034 | 65.32 |
Details
Description
Activity
- All
- Work Log
- History
- Activity
- Transitions
not an issue AFAIK
you're right. I wonder if I was thinking of nested <bean> elements within the <camelContext/> - can't remember
lets just close this issue for now
Isn't camel-example-spring doing exactly this successfully?
The only difference is the beanServer="mbeanServer" missing. And there is obviously no element with the id:{}
mbeanServer".
I think this bug was raised to try help folks figure out issues caused by the xmlns being redeclared. e.g. something like this
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: <camelContext id="camel" useJmx="false" mbeanServer="mbeanServer" xmlns=""> <package>org.apache.camel.example.spring</package> </camelContext> <!-- DOES THIS FAIL? --> <bean id="foo" class="whatnot"/> </beans>
I can't remember if the xmlns redeclaration of the default namespace on the <camelContext/> disappears again after the </camelContext>. Does something like the above work? If not can we generate some kinda useful error message somehow?
I am not sure this is a valid issue yet. I suspect the xml is not correct but I only have briefly looked into it. If I understand correctly this is related to the use of refs.
It's ok for the xml doc to have one default namespace and than one element (<camelContext>) to redefine the default namespace to be something else. However, since we are dealing with qnames, the camelContext element should define a different namespace mapping (and a prefix) for the default namespace of the document (the spring namespace) and use that to qualify the element it refs, so that resolution can happen.
But then again, I may have misunderstood the issue completely. I'll look into it soon though.
Closing all 1.5.0 issues | https://issues.apache.org/jira/browse/CAMEL-390?focusedCommentId=12950695&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-40 | refinedweb | 291 | 57.37 |
In a previous example, “Creating a toggleable LinkButton control in Flex”, we saw how you could create a toggleable Flex LinkButton control by extending the mx.skins.halo.LinkButtonSkin and adding custom “selectedUpSkin”, “selectedOverSkin”, “selectedDownSkin”, and “selectedDisabledSkin” skin states.
The following example shows how you can create a toggleable MX LinkButton control in Flex 4 by creating a custom MX LinkButton skin and specifying a background fill for the selected (“selectedUp”, “selectedOver”, “selectedDown”, and “selectedDisabled”).
[HaloSparkSkins]
And the custom MX LinkButton skin class, skins/MyLinkButtonSkin “Creating a toggleable MX LinkButton control in Flex 4”
Dropbox had mangled the uploaded SWF file :-/
@JabbyPanda,
The SWF looks right to me (Win7/Chrome). What looks mangled on your end?
Peter
Hi Peter, looks like it Dropbox fault, the service went down for couple of minutes, everything is back to normal now.
Hi Peter,
I’m trying to set the rollOverColor property on a Halo LinkButton in MXML, without success.
e.g.,
The rollover colour displays correctly in Flex SDK 3.5, but seems to be ignored in Flex 4 (Beta 2 release).
Thanks,
Chris
@Chris,
This code seems to work for me in Flex 4.0.0.14288. Rolling over the LinkButton control gives me a red background, pressing (and holding) the mouse button over the LinkButton control gives me an orange background:
Peter
Although I did notice this issue;
Not sure if you’re running in to that. Looks like the MX LinkButton control’s skin’s up state doesnt handle non-default sized hit areas very well.
Peter
Cheers Peter,
Works with build 4.0.0.14288 in Flex 3 Builder (I couldn’t compile in current Flash Builder beta because of the Halo namespace change).
I hadn’t noticed the issue with hit area until now :) | http://blog.flexexamples.com/2010/02/04/creating-a-toggleable-mx-linkbutton-control-in-flex-4/ | CC-MAIN-2017-22 | refinedweb | 296 | 61.26 |
Hi Robert,
Your problem does seem familiar. I ran across an issue like that with the Solaris compiler which seems to be kind of like the MS compiler in the respect that even when you use a C++ include file like <cmath>, which instead of to ‘inserting’ functions into the std namespace, simply inserts them into the global namespace, and then “doesn’t know” about the same functions when std:: is prepended to their call…. The following fixed my issue:
Change src/xalanc/PlatformSupport/DoubleSupport.hpp as follows:
- add an #if defined(SOLARIS) / #include <math.h> / #endif section
- in the isNAN(double) function put an #if defined(SOLARIS) / return isnan(theNumber) != 0; / #else / and #endif around the return std::isnan(theNumber) != 0; line,
Rob | http://mail-archives.apache.org/mod_mbox/xalan-c-users/201211.mbox/raw/%3CCBAC34FA9B9B94438B06C8A6557D7AB739ADF1D056@nlbrnex11%3E/2 | CC-MAIN-2016-18 | refinedweb | 124 | 55.24 |
Contributed by Google employees.
One of the things I find most time-consuming when starting on a new stack or technology is moving from reading documentation to a working prototype serving HTTP requests. This can be especially frustrating when trying to tweak configurations and keys, as it can be hard to make incremental progress. However, once I have a shell of a web service stood up, I can add features, connect to other APIs, or add a datastore. I'm able to iterate very quickly with feedback at each step of the process. To help get through those first setup steps, I've written this tutorial.

Creating a project

First, create a new project to keep this work separate and easy to tear down later. After creating it, be sure to copy down the project ID, as it is usually different from the project name.
Getting project credentials
Next, set up a service account key, which Terraform will use to create and manage resources in your Google Cloud project. Go to the create service account key page, select the default service account or create a new one, select JSON as the key type, and click Create.
This downloads a JSON file with all the credentials that will be needed for Terraform to manage the resources. This file should be located in a secure place for production projects, but for this example move the downloaded JSON file to the project directory.
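The downloaded key is plain JSON, so you can sanity-check it before handing it to Terraform. The field list and helper below are illustrative (a subset of what the file contains), not an official schema:

```python
import json

# Fields a "service_account"-type key file carries.
# (Illustrative subset -- not an exhaustive or official schema.)
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}

def check_key(key: dict) -> bool:
    """Return True if `key` looks like a usable service account key."""
    return key.get("type") == "service_account" and REQUIRED_FIELDS.issubset(key)

def load_key(path: str) -> dict:
    """Load a downloaded key file, e.g. 'CREDENTIALS_FILE.json'."""
    with open(path) as f:
        return json.load(f)
```

Running `check_key(load_key("CREDENTIALS_FILE.json"))` catches a truncated download or the wrong file (for example, a user credential instead of a service account key) before Terraform fails with a less obvious error.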
Setting up Terraform
Create a new directory for the project to live in, and create a `main.tf` file for the Terraform config. The contents of this file describe all of the Google Cloud resources that will be used in the project.
```
// Configure the Google Cloud provider
provider "google" {
  credentials = file("CREDENTIALS_FILE.json")
  project     = "flask-app-211918"
  region      = "us-west1"
}
```
Set the project ID from the first step to the
project property and point the
credentials section to the file that was downloaded in the last step. The
provider “google” line indicates that you are using the
Google Cloud Terraform provider
and at this point you can run
terraform init to download the latest version of
the provider and build the
.terraform directory.
```
terraform init

Initializing provider plugins...
- Checking for available provider plugins on...
- Downloading plugin for provider "google" (1.16...

* provider.google: version = "~> 1.16"

Terraform has been successfully initialized!
```
Configure the Compute Engine instance

Next, add a Compute Engine instance to `main.tf`. The instance name gets a random suffix from the `random_id` resource, and the startup script installs Flask for the later steps:

```
// Terraform plugin for creating random ids
resource "random_id" "instance_id" {
  byte_length = 8
}

// A single Compute Engine instance
resource "google_compute_instance" "default" {
  name         = "flask-vm-${random_id.instance_id.hex}"
  machine_type = "f1-micro"
  zone         = "us-west1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  // Make sure flask is installed on all new instances for later steps
  metadata_startup_script = "sudo apt-get update; sudo apt-get install -yq build-essential python-pip rsync; pip install flask"

  network_interface {
    network = "default"

    access_config {
      // Include this section to give the VM an external ip address
    }
  }
}
```
The `random_id` Terraform plugin allows you to create a somewhat random instance name that still complies with the Google Cloud instance naming requirements, but it requires an additional provider plugin. To download and install the extra plugin, run `terraform init` again.
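To see why the random suffix still yields a legal name, here is a rough Python model of it: instance names like `flask-vm-64d4ba137ff59a25` are a prefix plus an 8-byte id rendered as 16 hex characters, checked against an approximation of Compute Engine's RFC-1035-style name rule (the regex below is a close approximation, not the authoritative rule):

```python
import re
import secrets

# Approximation of Compute Engine's instance-name rule: a lowercase letter,
# then lowercase letters, digits, or hyphens, ending in a letter or digit,
# at most 63 characters total.
NAME_RE = re.compile(r"^[a-z]([-a-z0-9]{0,61}[a-z0-9])?$")

def instance_name(prefix: str = "flask-vm") -> str:
    # An 8-byte random id renders as 16 hex characters.
    return f"{prefix}-{secrets.token_hex(8)}"

print(instance_name())  # e.g. flask-vm-64d4ba137ff59a25
```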
Validate the new Compute Engine instance
You can now validate the work that has been done so far. Run `terraform plan`, which will:

- Verify that the syntax of `main.tf` is correct
- Ensure the credentials file exists (contents will not be verified until `terraform apply`)
- Show a preview of what will be created
Output:
```
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + google_compute_instance.default
      id: <computed>
      ...

  + random_id.instance_id
      id: <computed>
      ...

Plan: 2 to add, 0 to change, 0 to destroy.
```
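If you end up scripting these runs (for example in CI), it can be handy to parse the plan's summary line rather than eyeball it. A small sketch, assuming the summary format shown above:

```python
import re

# Matches the closing summary line of `terraform plan` output.
PLAN_RE = re.compile(r"Plan: (\d+) to add, (\d+) to change, (\d+) to destroy\.")

def plan_summary(output: str):
    """Return (add, change, destroy) counts, or None if no plan line is found."""
    m = PLAN_RE.search(output)
    return tuple(int(g) for g in m.groups()) if m else None

print(plan_summary("Plan: 2 to add, 0 to change, 0 to destroy."))  # (2, 0, 0)
```

A gate like `assert plan_summary(out) == (2, 0, 0)` fails the pipeline when Terraform is about to change something unexpected.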
Now it's time to run `terraform apply`, and Terraform will call the Google Cloud APIs to set up the new instance.

To be able to connect to the instance over SSH, add your public key to the instance metadata by placing the following block inside the `google_compute_instance` resource:

```
metadata = {
  ssh-keys = "INSERT_USERNAME:${file("~/.ssh/id_rsa.pub")}"
}
```
Be sure to replace
INSERT_USERNAME with your username and then run
terraform plan and verify the output looks correct. If it does, run
terraform apply to apply the changes.
The output shows that it will modify the existing compute instance:
```
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ google_compute_instance.default
      metadata.%: "0" => "1"
      ...

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
```
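The `ssh-keys` metadata value is just `username:key-material`, the same string Terraform's `file()` interpolation builds. A sketch of assembling it outside Terraform (username and paths here are placeholders):

```python
from pathlib import Path

def ssh_keys_metadata(username: str, pubkey_path: str = "~/.ssh/id_rsa.pub") -> str:
    """Build the `ssh-keys` metadata value: '<username>:<public key>'."""
    key = Path(pubkey_path).expanduser().read_text().strip()
    return f"{username}:{key}"
```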
Use output variables for the IP address
Use a Terraform output variable to act as a helper to expose the instance's ip address. Add the following to the Terraform config:
// A variable for extracting the external IP.
ssh `terraform output ip`
Building the Flask app
You will be building a Python Flask app for this
tutorial so that you can have a single file describing your web server and test
endpoints. Inside the VM instance, add the following to a new file called
app.py:
from flask import Flask app = Flask(__name__) @app.route('/') def hello_cloud(): return 'Hello Cloud!' app.run(host='0.0.0.0')
Then run. Add the following to the config and proceed to run plan/apply to create the firewall rule.
resource "google_compute_firewall" "default" { name = "flask-app-firewall" network = "default" allow { protocol = "tcp" ports = ["5000"] } }
Congratulations! You can now point your browser to the instance's IP address and port 5000 and see your server running.
Cleaning up
Now that you are finished with the tutorial, you will likely want to delete
everything that was created so that you don't incur any further costs.
Thankfully, Terraform will let you remove all the resources defined in the
configuration file with
terraform destroy:
terraform destroy random_id.instance_id: Refreshing state... (ID: ZNS6E3_1miU) google_compute_firewall.default: Refreshing state... (ID: flask-app-firewall) google_compute_instance.default: Refreshing state... (ID: flask-vm-64d4ba137ff59a25) An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols: - destroy Terraform will perform the following actions: - google_compute_firewall.default - google_compute_instance.default - random_id.instance_id ... google_compute_firewall.default: Destroying... (ID: flask-app-firewall) google_compute_instance.default: Destroying... (ID: flask-vm-64d4ba137ff59a25) google_compute_instance.default: Still destroying... (ID: flask-vm-64d4ba137ff59a25, 10s elapsed) google_compute_firewall.default: Still destroying... (ID: flask-app-firewall, 10s elapsed) google_compute_firewall.default: Destruction complete after 11s google_compute_instance.default: Destruction complete after 18s random_id.instance_id: Destroying... (ID: ZNS6E3_1miU) random_id.instance_id: Destruction complete after 0. | https://cloud.google.com/community/tutorials/getting-started-on-gcp-with-terraform?hl=nb-NO | CC-MAIN-2021-43 | refinedweb | 987 | 57.67 |
Java has an Object class in the java.lang package.
All Java classes extend the Object class directly or indirectly.
All Java classes are a subclass of the Object class and the Object class is the superclass of all classes.
The Object class itself does not have a superclass.
A reference variable of the Object class can hold a reference of an object of any class.
The following code declares a reference variable
obj
of the Object type:
Object obj;
The Object class has nine methods, which are available to be used in all classes in Java.
The following code shows how to reimplement the toString() method of the Object class.
public class Test { public String toString() { return "Here is a string"; } }
Every object in Java belongs to a class.
The getClass() method of the Object class returns the reference of the Class object.
The following code shows how to get the reference of the Class object for a Cat object:
Cat c = new Cat(); Class catClass = c.getClass();
The Class class is generic and its formal type parameter is the name of the class that is represented by its object.
We can rewrite the above statement using generics.
Class<Cat> catClass = c.getClass(); | http://www.java2s.com/Tutorials/Java/Java_Object_Oriented_Design/0160__Java_Object_Class.htm | CC-MAIN-2017-22 | refinedweb | 204 | 72.76 |
Ben Collins-Sussman wrote:
> + if (apr_err)
> + {
> + return svn_error_wrap_apr
> + (apr_err, _("Can't change working directory to '%s'"), base_dir);
> + }
>
> No need for curly braces around a single-line block.
>
[...]
> if (sys_err != 0)
> - /* Extracting any meaning from sys_err is platform specific, so just
> - use the raw value. */
> - return svn_error_createf(SVN_ERR_EXTERNAL_PROGRAM, NULL,
> - _("system('%s') returned %d"), cmd, sys_err);
> + {
> + /* Extracting any meaning from sys_err is platform specific, so just
> + use the raw value. */
> + err = svn_error_createf(SVN_ERR_EXTERNAL_PROGRAM, NULL,
> + _("system('%s') returned %d"), cmd,
> sys_err);
> + }
>
>
> Same thing here.
Actually, Ben, this has always been something we've left up to individual
preferences -- we have no stylistic convention on this matter, and never
have. For one thing, given a block of code like:
if (some_cond)
/* This is a comment */
return (some_stuff);
some would think of that as a one-liner inside the if(), others think of
"lines" like physical lines of text. Arguably, it'd be easier to read this
"one-liner" if it was wrapped in braces.
if (some (some_thing);
Other thoughts (YMMV): including the braces up front means that if the body
of the condition grows to more than one line, the diff need only contain the
new logic, not the new formatting of the old logic, too (because indentation
levels change). And there are other consideration, too, like how to format
else blocks -- do you pair an unbraced if() with a braced else, or keep them
visually balanced. Stuff like that. (See even the example in hacking.html
immediately inside the "Coding style" section, as well as inconsistent
examples inside "Exception handling".)
All this to say, try to avoid passing off your personal preference as
Subversion-wide convention. Though, in this case, I suspect you just really
thought such a convention existed.
--
C. Michael Pilato <cmpilato@collab.net>
CollabNet <> <> Distributed Development On Demand
This is an archived mail posted to the Subversion Dev
mailing list. | http://svn.haxx.se/dev/archive-2007-07/0591.shtml | CC-MAIN-2014-41 | refinedweb | 313 | 61.26 |
My first function returns (a, b). The second function must control it if (a, b) are True or False. If it is False call again the first function until the user put in right numbers.
It dose not work. Is it a good idea to do that in this way or not?
def lasin (): a=int( input ("Put in a 1<digit<53\n")) b=int( input ("put in a 1<digit<53\n")) return a,b def kontroll(a,b): while True: try: if a>1 and a<53: break elif b>1 and b<53: break except NameError: print "Try again" lasin()
Thank alot for help and tips! | https://www.daniweb.com/programming/software-development/threads/125704/true-or-false | CC-MAIN-2018-13 | refinedweb | 110 | 89.28 |
of that every once in a while and write something useful and fun, because if I don’t I’ll probably go insane. Fortunately, there’s a whole community full of people talking about Django these days, so I’m not short on topics.
Most recently, I’ve been quietly watching some discussion in the Django ticket tracker and on IRC about how to build “dynamic forms” in Django, where a “dynamic” form is one whose behavior or fields change depending on, say, the current user or some other bit of current information. This isn’t something that’s really documented, but personally I think that’s because it falls into a sort of gray area; like many useful things you can do with Django, it’s not actually Django-specific. It just requires a little bit of knowledge of Python and of some standard programming practices. In a nutshell, there are two ways you’ll want to go about this:
- Write a form class which knows how to adapt itself, say by doing some shuffling around in
__init__()(and possibly taking some custom arguments to tell it what sort of shuffling to do).
- Write a factory function which knows how to build and return the type of form you want.
So let’s dive in.
Constructing in your constructor
The easiest and most common way to do a little bit of dynamic customization in any Python class is by overriding its
__init__() method and adding your own special-case logic there. For sake of an example, let’s consider a contact form: you want a form that lets people get in touch and send you messages. But since there are a lot of evil spammers out there, you also want to avoid automated submissions. Suppose you put together a CAPTCHA system and implement it as a Django form field, maybe called
CaptchaField; then you’d just add it to the definition of your form, like any other field.
But let’s add a twist to this: let’s say that we only want to have this field when the form is being filled out by an anonymous visitor, and not when it’s coming from an authenticated user of the site. How could we do that?
One way would be to write two form classes — one with the
CaptchaField and one without — and have the view select one after checking
request.user.is_authenticated(). But that’s messy and leaves duplicated code lying around (even if you have one form subclass the other), and Django’s all about helping you avoid that sort of thing. So how about a form which automatically sprouts a
CaptchaField as needed? We can do that pretty easily with a custom
__init__() method and a single extra argument: we’ll have the view pass in the value of
request.user, and work out what to do based on that.
Here’s how it might look:
class ContactForm(forms.Form): def __init__(self, user, *args, **kwargs): super(ContactForm, self).__init__(*args, **kwargs) if not user.is_authenticated(): self.fields['captcha'] = CaptchaField()
That’s not so hard. All it requires is a little bit of knowledge of Python (namely, overriding
__init__() to change the way instances of the class are set up, and using
super() to get the parent class’ behavior as a starting point), and a tiny bit of knowledge about how a Django form works: it has a dictionary attached to it, named
fields, whose values are the names of the form fields and whose values are the fields themselves. Once we have that, it’s easy to just stick things in
self.fields as needed.
Similarly, you can use this to modify the behavior of a field in the form; if you wanted to change the
max_length of a
CharField, for example, you could simply pull it out of
self.fields and make the changes you wanted. You could also overwrite things there with completely new fields according to what your form will need. And you can even compare the fields as they currently exist with the fields as they were originally defined on the form class: the base set of fields defined for the class lives in a dictionary called
base_fields and isn’t overwritten when you change
fields, so comparing the contents of
self.fields to the contents of
self.base_fields lets you see quickly what’s changed (and unless you really know what you’re doing, don’t mess around in
base_fields; you’ll thank me for that advice later).
These sorts of techniques can take you a long way; I’ve written some fairly interesting and complex forms using nothing but
__init__() tricks (features at work like this one are constructed using some extremely-dynamic form systems, for example), and generally it’s pretty easy and pretty useful.
Form, function and factory
Of course, there are limits to what’s practical to do in an overridden
__init__(); sooner or later the amount of code required (and the gymnastics you’ll hve to do for things like custom validation methods) simply gets to be too much and you find yourself needing something a little more powerful. Typically, the solution for those cases is a factory function (or factory method, if you prefer) which knows how to build form classes.
One good example is in django-profiles, which needs to generate forms based on the user-profile model you’re using, and also needs to ensure that certain fields never show up in them (since, for example, we don’t want you being able to change the
User instance associated with your own profile). This is solved by two functions in django-profiles: one knows how to retrieve the currently-active user-profile model, and the other knows how to build a form for it, minus the
User field. The second function is the one we’re interested in, and the code for it is short and sweet:
def get_profile_form(): """ Return a form class (a subclass of the default ``ModelForm``) suitable for creating/editing instances of the site-specific user profile model, as defined by the ``AUTH_PROFILE_MODULE`` setting. If that setting is missing, raise ``django.contrib.auth.models.SiteProfileNotAvailable``. """ profile_mod = get_profile_model() class _ProfileForm(forms.ModelForm): class Meta: model = profile_mod exclude = ('user',) # User will be filled in by the view. return _ProfileForm
You’ll notice that the docstring for this function is almost as long as the actual function itself; all it does is fetch the user-profile model (that’s what
get_profile_model() does), then define a
ModelForm subclass based on that model and exclude the
user field (because of the way user profiles work in Django, the field must be named
user; if that requirement didn’t exist, introspection of the profile model could just as easily determine which field to exclude). Then it returns that newly-defined class.
Then, whenever a form for editing user profiles is needed, the view which will display the form can simply do something like this (the
get_profile_form() method lives in a module in django-profiles named
utils):
form_class = utils.get_profile_form() if request.method == 'POST': form = form_class(data=request.POST, files=request.FILES) if form.is_valid(): # Standard form-handling behavior goes here...
This works because Python supports a few features (most notably closures, a topic far too big to tackle here) which make it easy to define and work with new classes while your code is running. And in general, this is an extremely powerful feature to have: it means that instead of writing huge, complex classes which try to anticipate everything you might need to do, you can usually just write one or two simple functions which build and return brand-new tailor-made classes that do exactly what you want.
The example above was fairly easy because it just defined a
ModelForm subclass, and those are typically short and easy to write, but it works just as well with more complex forms; you can define a full form, with fields, custom validation and whatever else you like, inside a function and then return it to be used immediately. There’s also a useful shortcut you can use if you know you’re just going to slap together a collection of fields and don’t want to bother with writing them all out: Python’s built-in
type() function, which puts together a class on the fly. The
type() function takes three arguments:
- The name you want to give to your class.
- A tuple of one or more classes it will inherit from, in order.
- A dictionary whose keys and values will end up as the basic attributes of the class.
Let’s return to our original example: a contact form which sprouts a CAPTCHA field if the user isn’t authenticated. We could write a function which takes a
User as an argument, and then returns either of two possible form classes, like so:
def make_contact_form(user): # The basic form class _ContactForm(forms.Form): name = forms.CharField(max_length=50) email = forms.EmailField() message = forms.CharField(widget=forms.Textarea) if user.is_authenticated(): return _ContactForm class _CaptchaContactForm(_ContactForm): captcha = CaptchaField() return _CaptchaContactForm
This defines the form class on the fly, and if the user isn’t authenticated immediately defines a subclass which includes the CAPTCHA. But this is pretty ugly; let’s see how it works using
type():
def make_contact_form(user): fields = { 'name': forms.CharField(max_length=50), 'email': forms.EmailField(), 'message': forms.CharField(widget=forms.Textarea) } if not user.is_authenticated(): fields['captcha'] = CaptchaField() return type('ContactForm', (forms.BaseForm,), { 'base_fields': fields })
That’s much nicer;
type just wants to know the name of our new class (
ContactForm), what classes, if any, it should inherit from (when doing this, use
django.forms.BaseForm instead of
django.forms.Form; the reasons are a bit obscure and technical, but any time you’re not declaring the fields directly on the class in the normal fashion you should use
BaseForm) and a dictionary of attributes. Recall from our earlier look at fiddling with a form’s fields that
base_fields is the attribute which holds the baseline set of fields for the form class, so our attribute dictionary simply supplies
base_fields with a collection of form fields, and adds the CAPTCHA only when needed.
This is a bit more advanced in terms of Python programming, but is an extremelyuseful technique; Django uses it all over the place to build classes on the fly, and you can too.
Where to go from here
There is one further way to dynamically control the way a class is set up in Python, and it gives you absolute control over every step of the process; writing a “metaclass”. But, in the words of Python guru Tim Peters: “Metaclasses are deeper magic than 99% of users should ever worry about. If you wonder whether you need them, you don’t.” If you ever do get to the point where you’re convinced you do need them, there are plenty of tutorials on Python metaclasses floating around the Web. Django uses them in a few places (in fact, in more places than it strictly needs to), but that doesn’t mean you have to; doing work in
__init__() or writing a factory function which knows how to build the class you want will easily cover 99% or more of all the cases you’re like to run into in the real world.
And though here we’ve been talking about Django forms specifically, I’d like to point out once again that this isn’t really Django-specific; these approaches work well with lots of different types of probems in Python and even in programming in other languages (although not every language makes it as easy to, for example, define new classes on the fly). Reading up on Python and getting to know some standard idioms and programming patterns will drastically improve your ability to reason out solutions like the ones I’ve given above, so if you’re interested in further reading, curl up with either the Python documentation or some material on programming practice; they’ll do you good. | https://www.b-list.org/weblog/2008/nov/09/dynamic-forms/ | CC-MAIN-2018-47 | refinedweb | 2,022 | 55.68 |
On Thu, 2009-02-19 at 13:20 +0200, Dominique Leuenberger wrote: > Hi, > > this code snipet produces a new warning (especially on newer compilers): > > static av_always_inline int dv_guess_dct_mode(DVVideoContext *s, > uint8_t *data, int linesize) { > if (s->avctx->flags & CODEC_FLAG_INTERLACED_DCT) { > int ps = s->ildct_cmp(NULL, data, NULL, linesize, 8) - 400; > if (ps > 0) { > int is = s->ildct_cmp(NULL, data , NULL, > linesize<<1, 4) + > s->ildct_cmp(NULL, data + linesize, NULL, > linesize<<1, 4); > return (ps > is); > } > } else > return 0; > } > The 'issue' is reaching a non-void function without returning a value. > At first sight, of course you would say this is nonsense and a wrong > warning by the compiler, but by looking closer you can see that the > compiler is right: > > in case of the nested if (ps > 0) not being evaluated to true, no > return value is specified. the 'final' else is never reached. and thus > no return value given at all. > > I think the 'simplest' solution would be to just drop the else and > have return 0; unconditional happening at the end. On the other hand > I'm not sure if this is what it is supposed to do. It is. I'll fix it in a moment. Sorry for the trouble. Thanks, Roman. | http://ffmpeg.org/pipermail/ffmpeg-devel/2009-February/065579.html | CC-MAIN-2016-40 | refinedweb | 204 | 56.18 |
Hello every one
I'm very new 32-bit MCU and now I am doing a project on MPC551x MCU. I badly need you all guys help.
1. I have the MPC5510 EVB with me.
2. I created a new project using the "MPC55xx New Project Wizard" in Codewarrior of version 5.9.0
3. I selected the Target for my project as "internal_FLASH"
4. Now, I want to understand how to implement a basic Interrupt Service Routine for any interrupt source. I choose to implement a Periodic Interrupt Timer T1 for every 10ms. So, I initialize the Periodic Interrupt Timer module registers as follows.
-----------------------------------------------------------------------------------------------
PIT.CTRL.B.MDIS = 0; /* enable the clocks for PIT timer1 to timer 8 */
PIT.TLVAL1.B.TSV = 0x00003E7F; /* load this value to generate 10ms with default 16Mhz IRC */
PIT.INTEN.B.TIE1 = 1; /* enable the PIT Timer 1 interrupt*/
PIT.INTSEL.B.ISEL1 = 1; /* PIT timer 1 generates interrupt if enabled */
PIT.EN.B.PEN1 = 1; /* enable the PIT Timer 1 timer */
-----------------------------------------------------------------------------------------------
5. And now, I wrote a simple ISR for the Timer 1 interrupt where i am just clearing the corresponding timer interrupt flag (TIF1) and incrementing a test variable.
6. Every thing is fine, but I could not able to figure out how to link this ISR to the default vector table or I couldnt make this ISR to execute when the interrupt occurs.
FYI, I have already worked with the HCS & HCS(X) mcu's before where we can link the ISR to the internal process by adding the "ISR name and vector number" to the .prm file and using modifier "interrupt" and "#pragma CODE_SEG NON_BANKED" statements at the location of Interrupt service routine definition.
But, what is the procedure in this MPC551x MCU environment ? I know the INTC module has two modes. One is Software vector mode and the other is Hardware vector mode and the default mode will be software vector mode and the correspoding ivor registers is IOVR #4.
*** I tried reading all the default files in the project, but still could not make this happen. I hope your ideas will be greatly helpful to me. I have also tried analyzing and understand this process using the example projects installed in Freescale folder. i.e. 551x_Cookbook project examples which is again difficult for me as almost all the projects are in assembly code but not in C code. Because I'm not familiar with the assembly code.
*** I have attached this simple project file with this post, please have a look at it.. and I would like to request you to please make the necessary modifications to .lcf files or some other startup files or anything that is required to my project file. So, that I can download and debug and check to compare and analyze how to implement the ISR. By the way, my debugger is the default Nexus Debugger with USB Power PC NEXUS Multilink.
Hi,
In order to write and ISR in 'C' language in a default project created for MPC551x family, we just need to add the below line
in the initialization phase of the project i.e. before the infinite loop of the main() function.
INTC_InstallINTCInterruptHandler(PIT_ISR, 149, 1);
where PIT_ISR here is the isr name, 149 is the vector number and '1' is the priority level.
for example....
/**************************************************************/
/* start of file (main.c) */
/**************************************************************/
#include "MPC551x.h"
#include "IntcInterrupts.h"
/**************************************************************/
/* end of file */
/**************************************************************/ | https://community.nxp.com/thread/53484 | CC-MAIN-2019-26 | refinedweb | 570 | 73.58 |
gettimeofday(2) gettimeofday(2)
NAME [Toc] [Back]
gettimeofday - get the date and time
SYNOPSIS [Toc] [Back]
#include <sys/time.h>
int gettimeofday(struct timeval *tp, void *tzp);
DESCRIPTION [Toc] [Back]
The gettimeofday() function obtains the current time, expressed as
seconds and microseconds since Epoch, and stores it in the timeval
structure pointed to by tp.
The resolution of the system clock is one microsecond.
PARAMETERS [Toc] [Back]
Programs should use this time zone information only in the absence of
the TZ environment variable.
tp A pointer to a timeval structure in which the current time is
returned.
The timeval structure includes the following members:
time_t tv_sec /* Seconds. */
long tv_usec /* Microseconds. */
tzp If this parameter is not a null pointer, it is interpreted as a
pointer to a timezone structure under HP-UX. The timezone
structure has the following fields:
tz_minuteswest The number of minutes that the local time zone is
west of Coordinated Universal Time (UTC) or Epoch.
tz_dsttime A flag that, if nonzero, indicates that Daylight
Savings Time (DST) applies locally during the
appropriate part of the year.
RETURN VALUE [Toc] [Back]
gettimeofday() returns the following values under HP-UX:
0 Successful completion.
-1 Failure. errno is set to indicate the error.
ERRORS [Toc] [Back]
If gettimeofday() fails, errno is set to the following value under
HP-UX:
[EFAULT] An argument address referenced invalid memory.
Hewlett-Packard Company - 1 - HP-UX 11i Version 2: August 2003
gettimeofday(2) gettimeofday(2)
EXAMPLES [Toc] [Back]
The following HP-UX example calls gettimeofday() twice. It then
computes the lapsed time between the calls in seconds and microseconds
and stores the result in a timeval structure:
struct timeval first,
second,
lapsed;
struct timezone tzp;
gettimeofday (&first, &tzp);
/* lapsed time */
gettimeofday (&second, &tzp);
if (first.tv_usec > second.tv_usec) {
second.tv_usec += 1000000;
second.tv_sec--;
}
lapsed.tv_usec = second.tv_usec - first.tv_usec;
lapsed.tv_sec = second.tv_sec - first.tv_sec;
WARNINGS [Toc] [Back]
Relying on a granularity of one microsecond may result in code that is
not portable to other platforms.
AUTHOR [Toc] [Back]
gettimeofday() was developed by the University of California,
Berkeley, and HP.
SEE ALSO [Toc] [Back]
date(1), ftime(2), settimeofda(2), stime(2), time(2), ctime(3C).
Hewlett-Packard Company - 2 - HP-UX 11i Version 2: August 2003 | https://nixdoc.net/man-pages/HP-UX/man2/gettimeofday.2.html | CC-MAIN-2020-29 | refinedweb | 373 | 54.12 |
You may want to take a look at another option in Hackage, the control- monad-exception package. The control-monad-exception library provides the building blocks for * Explicitly Typed exceptions (checked or not) * which are composable * and even provide stack traces (experimental feature) On 19/10/2009, at 01:00, Michael Snoyman wrote: > (Sorry,. > >. > I believe that control-monad-exception solves this tension between composability and explicit exceptions. You can have explicit exceptions which are composable: > > :t eval > eval :: (Throws DivideByZero l, Throws SumOverflow l) => Expr -> EM l Double > :t eval `catch` \ (e::DivideByZero) -> return (-1) > .... :: Throws SumOverflow l => Expr -> EM l Double > :t runEM(eval `catch` \ (e::SomeException) -> return (-1)) > .... : Expr -> Double > >. > >. I am with Henning on 'fail'. It must not be used as a replacement for throw, only for failed pattern matches which are programming errors and thus unchecked exceptions. | http://www.haskell.org/pipermail/haskell-cafe/2009-October/067959.html | CC-MAIN-2013-48 | refinedweb | 142 | 52.49 |
On 26 Nov 2001, Prabhu Ramachandran wrote: > > VTK (Martin et al) - 3D visualization on GTK - personal copyright > > Well, I guess that is a typo but FWIW I'd like to mention that VTK > does not depend on GTK and does not use it at all. Quite so, it is a typo. That's what I get for posting from a Microsoft machine late at night. (: > Thanks for the compliments on MayaVi. :) My thesis topic is in computer-mediated technical collaborations, specifically collaborative scientific visualization. My Sinister Ulterior Motive (TM) is that when I get around to implementing my ideas, I would like to do it in Python. The presence of so much good-quality scientific code gives me hope I can do that; if I can get at it in one place easily, I can concentrate on the computer science instead of reinventing the wheel. If not... there are fallback plans involving Java and Matlab, C and Octave, because I really am here to do research, not hack. It is my hope I won't have to use them. [snip] > That is why I chose the GPL. However, the GPL does force everyone > else who links to it to be GPL. I guess LGPL might also work but I > really don't know. Thanks for considering this idea. > Anyway, I am not sure you want to put all packages into one huge super > package. It would be a nightmare to package/distribute! I'd really > pity the person who'd have to maintain such a beast. Distutils are your friend. (: As for maintainers, one of the better ways of handling this is for everyone involved to pile on to scipy.org, each maintaining the code they brought to the party. The initial assimilation phase would indeed be painful, but once all the namespaces and package structures and the like were unified, the pain would subside. Where there are existing projects, such as MayaVi or Scigraphica, the core library would include them as opt-in parts - much like the current Numeric treats FFTW or LAPACK, and as Scientific treats Numeric. 
So long as the relevant maintainers are at least talking to each other (and the core library maintainers shoulder the responsibility of writing any impedance-matching code) it's all good. Cheers, Horatio | https://mail.python.org/pipermail/python-list/2001-November/081350.html | CC-MAIN-2017-30 | refinedweb | 382 | 62.88 |
Much ado about NULL: Exploiting a kernel NULL dereference
By nelhage on Apr 12, 2010
Last time, we took a brief look at virtual memory and what a
NULL pointer really means, as well as how we can use the
mmap(2) function to map the
NULL page so that we can safely use a
NULL pointer. We think that it's important for developers and system administrators to be more knowledgeable about the attacks that black hats regularly use to take control of systems, and so, today, we're going to start from where we left off and go all the way to a working exploit for a
NULL pointer dereference in a toy kernel module.
A quick note: For the sake of simplicity, concreteness, and conciseness, this post, as well as the previous one, assumes Intel x86 hardware throughout. Most of the discussion should be applicable elsewhere, but I don't promise any of the details are the same.
nullderef.ko
In order to allow you to play along at home, I've prepared a trivial kernel module that will deliberately cause a
NULL pointer dereference, so that you don't have to find a new exploit or run a known buggy kernel to get a
NULL dereference to play with. I'd encourage you to download the source and follow along at home. If you're not familiar with building kernel modules, there are simple directions in the README. The module should work on just about any Linux kernel since 2.6.11.
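The exact code is in the download above, but the core of such a module is tiny. As a rough sketch (the names and details here are illustrative, not necessarily the module's exact source), the debugfs write handler simply reads through address zero on purpose:

```c
#include <linux/debugfs.h>
#include <linux/fs.h>
#include <linux/module.h>

/* Sketch of a handler behind /sys/kernel/debug/nullderef/null_read:
 * any write to the file makes the kernel read through a NULL pointer. */
static ssize_t null_read_write(struct file *f, const char __user *buf,
                               size_t count, loff_t *off)
{
        int value = *(volatile int *)NULL;  /* the deliberate bug */

        (void)value;
        return count;
}
```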
Don't run this on a machine you care about – it's deliberately buggy code, and will easily crash or destabilize the entire machine. If you want to follow along, I recommend spinning up a virtual machine for testing.
While we'll be using this test module for demonstration, a real exploit would instead be based on a
NULL pointer dereference somewhere in the core kernel (such as last year's
sock_sendpage vulnerability), which would allow an attacker to trigger a
NULL pointer dereference -- much like the one this toy module triggers -- without having to load a module of their own or be root.
If we build and load the
nullderef module, and execute
echo 1 > /sys/kernel/debug/nullderef/null_read
our shell will crash, and we'll see something like the following on the console (on a physical console, over a serial port, or in
dmesg):
BUG: unable to handle kernel NULL pointer dereference at 00000000 IP: [<c5821001>] null_read_write+0x1/0x10 [nullderef]
The kernel address space
We saw last time that we can map the
NULL page in our own application. How does this help us with kernel
NULL dereferences? Surely, if every application has its own address space and set of addresses, the core operating system itself must also have its own address space, where it and all of its code and data live, and mere user programs can't mess with it?
For various reasons, that that's not quite how it works. It turns out that switching between address spaces is relatively expensive, and so to save on switching address spaces, the kernel is actually mapped into every process's address space, and the kernel just runs in the address space of whichever process was last executing.
In order to prevent any random program from scribbling all over the kernel, the operating system makes use of a feature of the x86's virtual memory architecture called memory protection. At any moment, the processor knows whether it is executing code in user (unprivileged) mode or in kernel mode. In addition, every page in the virtual memory layout has a flag on it that specifies whether or not user code is allowed to access it. The OS can thus arrange things so that program code only ever runs in "user" mode, and configures virtual memory so that only code executing in "kernel" mode is allowed to read or write certain addresses. For instance, on most 32-bit Linux machines, in any process, the address
0xc0100000 refers to the start of the kernel's memory – but normal user code is not allowed to read or write it.
Since we have to prevent user code from arbitrarily changing privilege levels, how do we get into kernel mode? The answer is that there are a set of entry points in the kernel that expect to be callable from unprivileged code. The kernel registers these with the hardware, and the hardware has instructions that both switch to one of these entry points, and change to kernel mode. For our purposes, the most relevant entry point is the system call handler. System calls are how programs ask the kernel to do things for them. For example, if a programs want to write from a file, it prepares a file descriptor referring to the file and a buffer containing the data to write. It places them in a specified location (usually in certain registers), along with the number referring to the
write(2) system call, and then it triggers one of those entry points. The system call handler in the kernel then decodes the argument, does the write, and return to the calling program.
This all has at least two important consequence for exploiting
NULL pointer dereferences:
First, since the kernel runs in the address space of a userspace process, we can map a page at
NULL and control what data a
NULL pointer dereference in the kernel sees, just like we could for our own process!
Secondly, if we do somehow manage to get code executing in kernel mode, we don't need to do any trickery at all to get at the kernel's data structures. They're all there in our address space, protected only by the fact that we're not normally able to run code in kernel mode.
We can demonstrate the first fact with the following program, which writes to the
null_read file to force a kernel
NULL dereference, but with the
NULL page mapped, so that nothing goes wrong:
(As in part I, you'll need to
echo 0 > /proc/sys/vm/mmap_min_addr as root before trying this on any recent distribution's kernel..)
#include <sys/mman.h> #include <stdio.h> #include <fcntl.h> int main() { mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED, -1, 0); int fd = open("/sys/kernel/debug/nullderef/null_read", O_WRONLY); write(fd, "1", 1); close(fd); printf("Triggered a kernel NULL pointer dereference!\n"); return 0; }.
Putting it together
To put it all together, we'll use the other file that
nullderef exports,
null_call. Writing to that file causes the module to read a function pointer from address
0, and then call through it. Since the Linux kernel uses function pointers essentially everywhere throughout its source, it's quite common that a
NULL pointer dereference is, or can be easily turned into, a
NULL function pointer dereference, so this is not totally unrealistic.
So, if we just drop a function pointer of our own at address
0, the kernel will call that function pointer in kernel mode, and suddenly we're executing our code in kernel mode, and we can do whatever we want to kernel memory.
We could do anything we want with this access, but for now, we'll stick to just getting root privileges. In order to do so, we'll make use of two built-in kernel functions,
prepare_kernel_cred and
commit_creds. (We'll get their addresses out of the
/proc/kallsyms file, which, as its name suggests, lists all kernel symbols with their addresses)
struct cred is the basic unit of "credentials" that the kernel uses to keep track of what permissions a process has – what user it's running as, what groups it's in, any extra credentials it's been granted, and so on.
prepare_kernel_cred will allocate and return a new
struct cred with full privileges, intended for use by in-kernel daemons.
commit_cred will then take the provided
struct cred, and apply it to the current process, thereby giving us full permissions.
Putting it together, we get:
#include <sys/mman.h> #include <stdio.h> #include <stdlib.h> #include <fcntl.h> struct cred; struct task_struct; typedef struct cred *(*prepare_kernel_cred_t)(struct task_struct *daemon) __attribute__((regparm(3))); typedef int (*commit_creds_t)(struct cred *new) __attribute__((regparm(3))); prepare_kernel_cred_t prepare_kernel_cred; commit_creds_t commit_creds; /* Find a kernel symbol in /proc/kallsyms */ void *get_ksym(char *name) { FILE *f = fopen("/proc/kallsyms", "rb"); char c, sym[512]; void *addr; int ret; while(fscanf(f, "%p %c %s\n", &addr, &c, sym) > 0) if (!strcmp(sym, name)) return addr; return NULL; } /* This function will be executed in kernel mode. */ void get_root(void) { commit_creds(prepare_kernel_cred(0)); } int main() { prepare_kernel_cred = get_ksym("prepare_kernel_cred"); commit_creds = get_ksym("commit_creds"); if (!(prepare_kernel_cred && commit_creds)) { fprintf(stderr, "Kernel symbols not found. " "Is your kernel older than 2.6.29?\n"); } /* Put a pointer to our function at NULL */ mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED, -1, 0); void (**fn)(void) = NULL; *fn = get_root; /* Trigger the kernel */ int fd = open("/sys/kernel/debug/nullderef/null_call", O_WRONLY); write(fd, "1", 1); close(fd); if (getuid() == 0) { char *argv[] = {"/bin/sh", NULL}; execve("/bin/sh", argv, NULL); } fprintf(stderr, "Something went wrong?\n"); return 1; }
(
struct cred is new as of kernel 2.6.29, so for older kernels, you'll need to use this this version, which uses an old trick based on pattern-matching to find the location of the current process's user id. Drop me an email or ask in a comment if you're curious about the details.)
So, that's really all there is. A "production-strength" exploit might add lots of bells and whistles, but, there'd be nothing fundamentally different.
mmap_min_addr offers some protection, but crackers and security researchers have found ways around it many times before. It's possible the kernel developers have fixed it for good this time, but I wouldn't bet on it.
One last note: Nothing in this post is a new technique or news to exploit authors. Every technique described here has been in active use for years. This post is intended to educate developers and system administrators about the attacks that are in regular use in the wild. | https://blogs.oracle.com/ksplice/tags/mmap | CC-MAIN-2015-14 | refinedweb | 1,703 | 55.68 |
CoverDetailLevel
Since: BlackBerry 10.3.0
#include <bb/cascades/CoverDetailLevel>
A class representing cover detail level options.
By default, a cover created for use in a MultiCover has the detail level Default, which currently gets transleted to High upon usage. The detail level controls which cover will get displayed on the screen when the application is minimized. A High detail level will be used when a large cover is displayed, and a Medium one will be used whe a smaller cover is displayed.
Overview
Public Types Index
Public Types
Specifies the cover detail level.
BlackBerry 10.3.0
- Default 0
The default value, which is typically the same as High.
- High
Specifies that the detail level should be High.Since:
BlackBerry 10.3.0
- Medium
Specifies that the detail level should be Medium.Since:
BlackBerry 10.3.0
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/cascades/bb__cascades__coverdetaillevel.html | CC-MAIN-2016-40 | refinedweb | 154 | 52.76 |
Customise Tensei resources with fields
A field represents a database column.
Each Tensei resource has a
fields method. This method takes in an array of fields that define the columns on the table this resource maps to.
Fields are much more than just columns. They can represent any way of collecting data from the user. Tensei ships with a variety of fields out of the box, including fields for text inputs, booleans, dates, file uploads, Markdown, Wysiwyg editors, Rich editors and more.
To create a field, first you need to identify the type of data this field would be storing. Let's take a
Post resource for example. A Post might have a
Title field and a
Published field. The title field would be a
varchar column, and the
published field would be a
boolean column.
If you're using a NoSQL database, the title would be a
String field, and published would be a
Boolean field.
import { tensei, resource, text, boolean } from '@tensei/core' export default tensei() .resources([ resource('Post') .fields([ text('Title'), boolean('Published') ]) ])
Tensei will "snake case" the name of the field to determine the underlying database column. For example, the
text('Example Title') field would connect to the
example_title column. However, if necessary, you may pass the column name as the second argument to the field's method:
text('Title', 'custom_title_column')
As mentioned above, the defined fields would not only be used to determine the database columns, but also the types of input components to display on the dashboard.
Often, you will only want to display a field in certain situations. For example, there is typically no need to show a
Password field on a resource index listing. Likewise, you may wish to only display a
Created At field on the creation / update forms. Tensei makes it a breeze to hide / show fields on certain screens.
The following methods may be used to show / hide fields based on the display context:
showOnIndex
showOnDetail
showOnCreate
showOnUpdate
hideOnIndex
hideOnDetail
hideOnCreate
hideOnUpdate
onlyOnForms
exceptOnForms
You may chain any of these methods onto your field's definition in order to instruct Tensei where the field should be displayed:
resource('Post') .fields([ text('Title') .hideOnIndex() .hideOnUpdate() ])
The Title field would be hidden from the resource index table because we called
hideOnIndex(), and will be hidden when updating a
Post because we called
hideOnUpdate().
Sometimes you want to completely prevent a field from being exposed to the API. You may use a combination of these methods to control where fields show up in the API:
hideFromCreateApi()
hideFromUpdateApi()
hideFromDeleteApi()
hideFromFetchApi()
hideFromShowApi()
Let's say you want to prevent a secret token field from being exposed to all data fetching endpoints, but be exposed to the create and update endpoints:
text('Secret') .hideFromFetchApi() .hideFromShowApi()
You may also use the
hideFromApi() method to completely hide a field from showing up in any calls to the API.
When attaching a field to a resource, you may use the
sortable method to indicate that the resource index may be sorted by the given field:
resource('Post') .fields([ text('Title') .sortable() ])
You may use the
searchable method to indicate that this field can be searched. When the search box on the resource index is used, only the searchable fields would be queried. All searchable fields would also be indexed in the database.
resource('Post') .fields([ text('Title') .searchable() ])
This portion of the documentation only discusses non-relationship fields. To learn more about relationship fields, check out their documentation.
Tensei ships with a variety of field types. So, let's explore all of the available types and their options:
The
Boolean field may be used to represent a boolean / "tiny integer" column in your database. For example, assuming your database has a boolean column named
active, you may attach a
Boolean field to your resource like so:
import { resource, boolean } from '@tensei/core' resource('Customer') .fields([ boolean('Active') ])
A
Boolean field would show up as a checkbox on dashboard forms.
The
Date field may be used to store a date value (without time).
import { resource, date } from '@tensei/core' resource('Employee') .fields([ date('Birthday') ])
You can default this field to the current time using
.defaultToNow() method. If a default value is not defined when creating a resource, this value would default to the current date.
date('Birthday') .defaultToNow()
You may customize the display format of your
Date fields using the
format method. This format won't affect how the date is saved to the database. The format must be a format supported by Day.js:
date('Birthday') .format('do MMM yyyy, hh:mm a'),
The
DateTime field may be used to store a date-time value. It will also show a date/time picker on dashboard forms.
import { resource, dateTime } from '@tensei/core' resource('Employee') .fields([ dateTime('Started On') ])
You may customize the display format of your
DateTime fields using the
format method. This format won't affect how the date is saved to the database. The format must be a format supported by Day.js:
dateTime('Started On') .format('do MMM yyyy, hh:mm a'),
The
Timestamp field may be used to store a timestamp value. It will also show a date/time picker on dashboard forms.
import { resource, timestamp } from '@tensei/core' resource('Employee') .fields([ timestamp('Last Logged In At') ])
Just like the date and dateTime fields, you can format timestamps for the dashboard using the
.format() method.
The
Integer field provides an
input control with a
type attribute of
number:
import { resource, integer } from '@tensei/core' resource('Book') .fields([ integer('Price') ])
You may use the
min,
max, and
step methods to set their corresponding attributes on the generated
input control. This also applies to other number types such as
bigInteger,
float, and
double.
integer('Price').min(1).max(1000).step(0.01)
The
bigInteger field provides an
input control with a
type attribute of
number. It will also store the data in a
bigInt database column. It defaults to a number column if the database being used does not support bigint.
import { resource, bigInteger } from '@tensei/core' resource('Book') .fields([ bigInteger('Price') ])
The
double field provides an
input control with a
type attribute of
number. It will also store the data in a
double database column.
import { resource, double } from '@tensei/core' resource('Book') .fields([ double('Price') ])
The
float field provides an
input control with a
type attribute of
number. It will also store the data in a
float database column.
import { resource, float } from '@tensei/core' resource('Book') .fields([ float('Price') ])
The
Select field may be used to generate a drop-down select menu. The select menu's options may be defined using the
options method:
import { select } from '@tensei/core' resource('Book') .fields([ select('Category') .options([{ label: 'Postgresql', value: 'pg' }, { label: 'SQL database', value: 'sql' }]) ])
The
Text field provides an
input control with a
type attribute of
text:
import { text } from '@tensei/core' resource('Book') .fields([ text('Title') ])
Text fields may be customized further by setting any attribute on the field. This can be done by calling the
htmlAttributes method:
text('Title') .htmlAttributes({ placeholder: 'Enter your email' })
The
Slug field provides a read-only input control that automatically generates a url-friendly slug based on another field.
slug('Slug') .from('Title') .type('date')
The above slug would be automatically generated
from() the
Title field. As you type the title on the CMS, you should see this change reflect.
You may call the
.editable() method if you do not want to slug to be read-only. You'll be able to update it to whatever you need.
The
Textarea field provides a
textarea control:
import { textarea } from '@tensei/core' resource('Book') .fields([ textarea('Description') ])
By default, Textarea fields will not display their content when viewing a resource on its detail page. It will be hidden behind a "Show Content" link, that when clicked will reveal the content. You may specify the Textarea field should always display its content by calling the
alwaysShow method on the field itself:
textarea('Description').alwaysShow()
You may also specify the textarea's height by calling the
rows method on the field:
textarea('Description').rows(7)
Textarea fields may be customized further by setting any attribute on the field. This can be done by calling the
htmlAttributes method:
textarea('Description').htmlAttributes({ placeholder: 'Some description of this book ...' })
If you want to add a not nullable constraint on a column, use the
.notNullable() method:
text('Title').notNullable()
If you want the opposite, you may call the
.nullable() method to make a field nullable.
Sometimes you want to add a unique constraint to a database field. You may do so by calling the
.unique() method on the field:
text('Email') .unique()
To set a database default value for a field, you may use the
.default() method.
bigInt('Views') .default(1000)
If you want to set the default to a database function such as
current_timestamp, use the
.defaultRaw() method instead:
timestamp('Expires At') .defaultRaw('current_timestamp(3)')
If you would like to place "help" text beneath a field on the create / update form, you may use the
description method. This description will also show up on API documentation.
text('Description').description('Provide a clear description of this book.')
You can modify a field before it is inserted into the database. An example use case is hashing a password. You can pass a callback to the
.onCreate() method that returns the modified value of the field.
text('Password') .onCreate((user) => Bcrypt.hashSync(user.password))
You can modify a field value before an update query is made to the database. An example use case is an
updated_at timestamp. We may want this to update to the current date everytime the database row is updated.
field('Updated At') .onUpdate((user) => new Date()) | https://tenseijs.com/docs/fields | CC-MAIN-2022-21 | refinedweb | 1,624 | 65.32 |
I have knocked up some code on my Linux Mint machine to communicate with the my WICE interface. So far all look good.
But sorry folks, I did not use C code not even Assembly.
I did it in factor, I have been using this language for 5 years now, I like its forthy-ness, interactive interface and the library of functions.
First I need open serial, this took some time work, how to do this on linux environment. I use library "io.serial" use "termios" and "streams" libraries. I had some issue on read operation, transmission was the easiest, only thing to remember is use the "flush" function to send the data out after "write" function.
: wice-start ( -- ) "/dev/ttyUSB0" 115200 <serial-port> [ break wice-ack drop wice-status drop wice-reset drop wice-read-memory drop wice-read-minc drop wice-read-saddress drop wice-close drop 0 wice-open drop wice-reset drop 0 wice-write-memory drop wice-reset drop 0x55 wice-write-minc drop wice-read-u30 drop wice-read-u4 drop wice-read-u5 drop wice-read-u6 drop wice-reset drop 16 wice-dump drop ] with-serial-port-fix ;
wice-start basically sets up a serial port tuple. The with-serial-port-fix opens the serial port into a stream namespace and then executes all function in the quotation [ ] .
: with-serial-port-fix ( serial-port quot -- ) break [ open-serial ] dip [ [ serial-modify ] keep ] dip [ [ stream10 seconds over set-timeout drop ] keep ] dip ! [ [ stream dup in>> buffer>> 1 >>size drop drop ] keep ] dip ! [ [ dup serial-fd F_SETFL 0 fcntl drop drop ] keep ] dip [ stream ] dip with-stream ; inline
So in the quotation I run some function like wice-ack. Which sends out $00 on the serial port and read 1 byte back, does a test to see if it is zero.
! acknowlge the device : wice-ack ( -- ? ) 0 1byte-array write flush 1 read-partial ?first 0 = ;
I test each command and then I do a memory read function of 16 bytes x 16 lines.
! read memory addressed by address counter and increment : wice-read-minc ( -- d ) 4 1byte-array write flush 1 read-partial ; ! dump one inline : wice-read-marray ( n -- array ) <byte-array> [ drop wice-read-minc first ] map ; : wice-dump ( n -- array ) f <array> [ drop 16 wice-read-marray ] map ;
The result in an array of 16 byte arrays read from the WICE all values are in decimal, next I will do print that out as hex dump. Then I will try to do write array to memory.
So far factor has made testing very easy.
Discussions
Become a Hackaday.io Member
Create an account to leave a comment. Already have an account? Log In. | https://hackaday.io/project/12867-wice-4m/log/184782-factorise-or-factorize | CC-MAIN-2022-40 | refinedweb | 453 | 63.49 |
I'm working on an algorithm which needs to generate millions of numbers as fast as possible. Actually I found out that the rand() function of my algorithm takes 75% of the process time.
So I'm looking for something faster. And I don't need a big range at all. (I only need integer numbers below 1000)
Do you know something I could use ?
Thanks !
Edit :
I use this numbers for shuffling groups of less than 1000 entities.
I found out more about the "fast rand". And there is SSE version version which is even faster and generates 4 numbers at a time.
static unsigned int g_seed; // Used to seed the generator. inline void fast_srand(int seed) { g_seed = seed; } // Compute a pseudorandom integer. // Output value in range [0, 32767] inline int fast_rand(void) { g_seed = (214013*g_seed+2531011); return (g_seed>>16)&0x7FFF; } | https://codedump.io/share/w5U5DSabjOFJ/1/faster-than-rand | CC-MAIN-2018-22 | refinedweb | 142 | 76.11 |
Subject: Re: [boost] questions regarding GCC visibility
From: Robert Ramey (ramey_at_[hidden])
Date: 2015-04-01 11:06:21
Rob Stewart wrote
>> Funny thing is:
>> doesn't seem to be a problem on my machine
>> doesn't seem to occur with all exported symbols - only select ones -
>> often but not always desctructors
> [snip]
>> Any constructive suggestions would be appreciated.
>
> I haven't looked at the source, but I'll mention a few things that may be
> helpful. Default visibility can be set on an entire class or on individual
> member functions. Compiler generated special member functions will be
> hidden if the class is hidden. Types for which RTTI is needed should have
> default visibility, as should exception types.
>
> Does your local build succeed after cleaning all build artifacts?
On my local system, I made the change to the Jamfile and rebuilt but nothing
happened. I'm guessing that b2 doesn't consider the Jamfile itself as a
dependency. So I then deleted all previous build files for the compiler in
question - gcc 4.9. Then I rebuilt the library and ran all the tests.
Everything looked good. Then I pushed the change to git hub and watched the
development results - and they are starting to show failures. So that is how
I got to where I am.
I should note that the same attribute tags, macros, etc are used to
export/import symbols for the windows system and this has been working well
for many years. The only change in that was I that I used the more recent
BOOST_SYMBOL_EXPORT/IMPORT macros to make things more readable. Since the
windows tests seem to work well, I'm feel comfortable in concluding that
this change hasn't had any ill effects.
I'm guessing that are some subtle differences in the rules between the two
platforms. The gcc documentation on the subject is suspiciously complicated
compare to the windows explanation.
I could restore the previous of the jamfile which comments out the
visibility=hidden switch. But I feel that I'm pretty close to getting it
right and I think that excluding extraneous stuff from the shared library
objects make for a better product. So I'll spend a little more time on it
before I give up.
Thanks to all for the suggestions.
Robert Ramey
PS - one post mentioned the possibility of assigning a visibility attribute
to a whole namespace. This looks interesting but I never saw it anywhere.
It's also unclear whether or not it's portable
-- | https://lists.boost.org/Archives/boost/2015/04/221256.php | CC-MAIN-2020-34 | refinedweb | 419 | 63.8 |
A realtor wishes to track her commissions of house sales. The information that she collects is as follows:
house address a string
house type a string ( split level, bungalow, etc.)
sale price a double
She wishes to calculate the commission for each house as well as find the total commission earned for all houses. Commission is calculated as 7% of the first $100,000 and 3.5% of any amount over $100,000. Create an application that will accept an array of house objects and return the total commission. The application should also be able to calculate the commission earned on each house.
***Use the following test data:
123 Main Street, Split level, 250000.00
456 Elm Street, Bilevel, 150000.00
789 Oak Ave, Bungalow, 95000.00
***
i already made a class House and it compiles with no errors
public class House { private String houseAddress; private String houseType; private double sellingPrice; public House( String address, String type, double price) { setAddress(address); setType(type); setPrice(price); } public void setAddress(String add ) { houseAddress = add; } public String getAddress() { return houseAddress; } public void setType(String typ ) { houseType = typ; } public String getType() {return houseType; } public void setPrice(double p ) { sellingPrice = p; } public double getPrice() {return sellingPrice; } }but i attempted the HouseCommissions class and i am encountering some errors, can u please explain what i did wrong,, here is the code below
import java.util.*; public class HouseCommissions { final private double commissionrate1= 0.07; final private double commissionrate2 = 0.035; double commission1; double commision2; Scanner in = new Scanner(System.in); House[] h= new House[3]; h[0]= new House{123 Main Street, Split level, 250000.00}; h[1]= new House{456 Elm Street, Bilevel, 150000.00}; h[2]= new House{789 Oak, Bungalow, 95000.00}; public double Commission(House[] h) { if( House[] h.getPrice<=100000.0) { commission1= (commissionrate1*h); System.out.println(commission1); } else { commision2= commissionrate2*h; System.out.println(commision2); } } }
*** Edit ***
Please use code tags when posting code.
This post has been edited by GunnerInc: 25 November 2012 - 08:38 PM
Reason for edit:: Added elusive code tags | http://www.dreamincode.net/forums/topic/301600-java-assignment-stuck-midway-having-errors-with-code-can-you-assist/ | CC-MAIN-2018-13 | refinedweb | 343 | 57.47 |
Disappearing Web Part Trick
Discussion in 'ASP .Net Web Controls' started by AL, Aug 21, 2006.:
- 367
- Tim Mackey
- Jul 27, 2004
disappearing web methods and error: using tempuri.org, when I specified a unique namespaceMike, Sep 24, 2005, in forum: ASP .Net
- Replies:
- 0
- Views:
- 524
- Mike
- Sep 24, 2005
Disappearing web controlsasianavatar, Jun 27, 2005, in forum: ASP .Net Web Controls
- Replies:
- 0
- Views:
- 175
- asianavatar
- Jun 27, 2005
Web Service with disappearing Web MethodsPeter Cook, Sep 12, 2005, in forum: ASP .Net Web Services
- Replies:
- 0
- Views:
- 140
- Peter Cook
- Sep 12, 2005
Variable displays at one part while does not in another part in aJack, May 9, 2005, in forum: ASP General
- Replies:
- 8
- Views:
- 487
- Jack
- May 10, 2005 | http://www.thecodingforums.com/threads/disappearing-web-part-trick.778743/ | CC-MAIN-2016-07 | refinedweb | 124 | 70.53 |
I have no idea what in the world is going on with my Exchange 2007. It has run beautifully since the transition, up until about an hour ago. Outlook 2007 clients can no longer connect to the server; they cannot send or receive. I can send/receive email through Exchange 2007 via my BlackBerry, and I can use OWA, but internal connections are shot for some reason. When I check the event logs, there are all sorts of issues, and Exchange is trying to find the Exchange 2003 SBS server for the connector, but the connector has been removed. I can't remove SBS yet because it still holds two major necessities for my organization: it is still a DC and GC, and it is still in the old First Administrative Group (which Exchange 2007 doesn't see). The connectors are gone. Exchange 2007 also refuses RDP connections, by the way.
Any help you can offer up would be appreciated. Don't know how long my bulletproof vest is gonna hold up, heh.
20 Replies
Feb 16, 2010 at 10:17 UTC
This shows up on the old Exchange Server:
When sending mail to following address myfileserver.domain.local, we have found the connector with target domain * matching destination address exists in DS. However, we have no way of getting there. Possibly, you need to check your topology and add appropriate connectors among Routing Groups.
Feb 16, 2010 at 10:19 UTC
On a send/receive from Outlook 2007, I am greeted with this error:
Task 'Microsoft Exchange' reported error (0x8004011D) : 'The server is not available. Contact your administrator if this condition persists.'
Task 'Microsoft Exchange - Sending' reported error (0x80040115) : 'The connection to Microsoft Exchange is unavailable. Outlook must be online or connected to complete this action.'
Feb 16, 2010 at 10:22 UTC
I would have to ask: what has changed in the last two days?
Did you do an update? Did you add hardware or software?
Did you reconfigure the DC or the DNS?
Did you make a change to the firewall?
Any of these could have an adverse effect on the Exchange server.
From what you said it sounds like this is your scenario:
You have an SBS server running your DC, DNS and AD
This SBS used to run your Exchange, but you have migrated to a new stand-alone Exchange server.
Sounds like you are in the process of removing the SBS as the primary.
Do I have this right?
Feb 16, 2010 at 10:24 UTC
Just a stab in the dark, but have you rebooted the SBS and the new Exchange recently? If not, you might just try bringing them down, shutting down the users, and bringing them back up. Then log one user in and see if it works.
Feb 16, 2010 at 10:26 UTC
Check the system time on the Exchange server compared to the SBS and the clients first; a difference of more than a few minutes can do this (Kerberos won't tolerate much clock skew).
Have you tried accessing other PCs from the Exchange server? Can you ping the Exchange server? Have you checked the services?
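If you want a quick way to measure the skew, something like this from a prompt on the Exchange box should do it (the SBS hostname here is a placeholder for whatever holds your PDC emulator role):

```powershell
# Compare this server's clock against the SBS box (replace the name).
w32tm /stripchart /computer:sbs01.domain.local /samples:3 /dataonly

# Force a resync against the configured time source.
w32tm /resync
```

If the offset reported by /stripchart is more than a few minutes, fix the time hierarchy before chasing anything else.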
Feb 16, 2010 at 10:31 UTC
The RDP symptom may be closer to your root cause than you might imagine. If RDP is enabled in your System properties, but you can't connect, look towards your network setup. Perhaps a firewall issue, etc. Whatever's blocking RDP may in fact be blocking Exchange.
Feb 16, 2010 at 10:31 UTC
I am in the process of removing SBS. I just removed the Small Business connector from Exchange 2007 this morning. I did find that the Exchange 07 box had turned its firewall on; I disabled the service, but that didn't help. I have been working on a DFS namespace all morning (on 2 DCs which will replace the SBS), but have not altered any default GPOs. I have been testing folder redirection for 2 users, and it has been successful, although GPMC says the printer policy from the new print server has applied to these users, the printers still do not show up. I think SBS is getting bitchy because it's on the way out, but Exchange email is still accessible from OWA and BlackBerrys. Here is one of the events in the Exchange 2007 log:
The topology doesn't contain a route to Exchange 2000 Server or Exchange Server 2003 server OldExchangeServer.domain.local in Routing Group CN=first routing group,CN=Routing Groups,CN=first administrative group,CN=Administrative Groups,CN=LAURENTIDEINC,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=mydomain,DC=local in routing tables with the timestamp 2/16/2010 3:31:45 PM.
And:
The Active Directory topology service could not discover any route to connector CN=SmallBusiness SMTP connector,CN=Connections,CN=first routing group,CN=Routing Groups,CN=first administrative group,CN=Administrative Groups,CN=MyDomain,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=MyDomain,DC=local in the routing tables with the timestamp 2/16/2010 3:31:45 PM. This connector will not be used.
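Both events point at the legacy routing topology between the Exchange 2007 routing group and the old SBS/Exchange 2003 routing group. A hedged way to inspect it from the Exchange 2007 Management Shell (server names below are placeholders, not the poster's actual machines):

```shell
# List the routing group connectors Exchange 2007 knows about and
# which routing groups they bridge
Get-RoutingGroupConnector | Format-List Name,SourceRoutingGroup,TargetRoutingGroup

# If the interop connector to the legacy server is missing, one can be
# recreated (placeholder server names):
# New-RoutingGroupConnector -Name "Interop RGC" -SourceTransportServers "EXCH07" -TargetTransportServers "OLDSBS" -Bidirectional $true
```

Since the SmallBusiness SMTP connector was just removed, a stale reference to it lingering in the routing tables until the next topology rediscovery would be consistent with these warnings.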
Feb 16, 2010 at 10:34 UTC
How much free disk space do you have?
Feb 16, 2010 at 10:41 UTC
Check the system time on the Exchange server compared to SBS and the clients first. A difference of more than a few minutes can do this.
Have you tried accessing other PCs from the Exchange server? Can you ping the Exchange server? Have you checked the services?
I can ping the Exchange server. I cannot RDP to it via name or IP. I cannot RDP from Exchange to the clients.
Feb 16, 2010 at 10:42 UTC
Just a stab in the dark, but have you rebooted the SBS and the new Exchange recently? If not, you might try bringing them down, having the users log off, and bringing them back up. Then log one user in and see if it works.
I have rebooted Exchange but not SBS yet. All times are synchronized.
Feb 16, 2010 at 10:44 UTC
SBS root drive is small as usual but has almost 12GB free. Ex 07 has about 80GB free on root and 90GB free on secondary. By the way, if I'm not mistaken, this occurred right around the time I turned on the LCR feature for the First Storage Group on 07. I turned it back off just to make sure.
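For ruling LCR in or out, the Exchange 2007 Management Shell exposes the copy status directly. A sketch, with `EXCH07` as a placeholder server name:

```shell
# Check whether the storage group still has an LCR copy and what state it is in
Get-StorageGroupCopyStatus -Identity "EXCH07\First Storage Group"

# Disabling the copy removes the LCR configuration for that storage group
Disable-StorageGroupCopy -Identity "EXCH07\First Storage Group" -Confirm:$false
```

If the copy status shows the replica healthy (or already removed), LCR is unlikely to be the culprit; it does not sit in the client-connection path.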
Feb 16, 2010 at 10:46 UTC
Outlook Web Access just stopped working as well now. WTH?!
Feb 16, 2010 at 10:47 UTC
I CAN access OWA directly from the server via IP.
Feb 16, 2010 at 10:49 UTC
I just rebooted SBS; it should be back up soon.
Feb 16, 2010 at 11:09 UTC
On reboot SBS threw up about 30 errors in red:
1-
Process INETINFO.EXE (PID=1876). Topology Discovery failed, error 0x80040952.
Feb 16, 2010 at 11:10 UTC
A ton of these showed up. It looks like a group policy issue to me. Thoughts?
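If group policy is the suspect, the quickest check is to dump what actually applied on the affected box. A sketch using the built-in `gpresult` tool:

```shell
# Show which GPOs applied to the computer and the logged-on user
gpresult /r

# On 2003-era servers without the /r switch, the verbose form works;
# redirect it to a file since the output is long
gpresult /v > C:\gpresult.txt
```

Comparing the "Applied GPOs" and "Denied GPOs" sections against GPMC's expectations usually narrows a policy problem down fast.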
Feb 16, 2010 at 11:13 UTC
After a reboot of the old SBS, email is now connecting again and OWA seems to be working as well. Thanks PChiodo. Is this common when in the middle of a transition and migration from SBS? I kind of expected something like this to occur, just not so suddenly. Thanks again.
Feb 16, 2010 at 4:18 UTC
SBS is a funny animal - I fought with it at a health care facility for about a year and then migrated them to a standard W2003 setup with hosted Exchange.
I just knew that SBS is finicky, especially when it is the DC and the AD - it is some kind of authentication issue tied to licensing.
With SBS the CALs cover Exchange as well as the standard CAL for Windows, but with Exchange standalone, you need CALs for Exchange and for Windows Server, and they are different from the SBS CALs. So somewhere in the migration, it gets confused and does not auth the license or the user and drops communication. Probably because it grabbed the wrong CAL.
A reboot in the proper order realigns the CALs and the auths, and lo and behold, it works again... for a while.
You will probably run into this again, just remember to reboot in the right order.
Glad it is working again, and complete your migration ASAP.
Feb 17, 2010 at 7:30 UTC
Well, actually, with Server 2008 and Exchange 2007 they did away with CAL enforcement. You are still required to purchase them, but they do it on the honor system. Basically you don't add the CALs you buy; you just own them to be legit. ;)
Not saying it didn't have something to do with the old SBS CALs, just saying the new stuff doesn't utilize CALs in any way.
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.